Interdisciplinarity
Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several fields such as sociology, anthropology, psychology, and economics. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station, mobile phone, or other project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings.
The term interdisciplinary is applied within education and training pedagogies to describe studies that use the methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. Understanding the epidemiology of HIV/AIDS or global warming, for example, requires knowledge from diverse disciplines. The interdisciplinary label may also be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, as with women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.
The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. Interdisciplinary education fosters cognitive flexibility and prepares students to tackle complex, real-world problems by integrating knowledge from multiple fields. This approach emphasizes active learning, critical thinking, and problem-solving skills, equipping students with the adaptability needed in an increasingly interconnected world. For example, the subject of land use may appear differently when examined by different disciplines, such as biology, chemistry, economics, geography, and politics.
Development
Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth century terms, the concept has historical antecedents, most notably Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, material science, logistics and several other disciplines. Any broadminded humanist project involves interdisciplinarity, and history shows a crowd of cases, as seventeenth-century Leibniz's task to create a system of universal justice, which required linguistics, economics, management, ethics, law philosophy, politics, and even sinology.
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, combining molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres; often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies.
At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study—that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen in complementary relation to one another.
Barriers
Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differing perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty in grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from other disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines, so interdisciplinary researchers may experience difficulty getting funding for their research. In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure.
Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as a competition for diminishing funds.
Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as for organizational and social entities concerned with education, in practice they face complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles.
Interdisciplinary studies and studies of interdisciplinarity
An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter is supported by one US organization, the Association for Interdisciplinary Studies (founded in 1979), and two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009). The US research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but closed on 1 September 2014 as the result of administrative decisions at the University of North Texas.
An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge.
In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, with the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'.
Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis—that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions.
While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works—and does not—in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account.
Politics of interdisciplinary studies
Since 1998, interdisciplinary research and teaching have risen in prominence, and the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies has grown. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005, according to data from the National Center for Education Statistics (NCES). In addition, educational leaders from the Boyer Commission to Carnegie's president Vartan Gregorian to Alan I. Leshner, CEO of the American Association for the Advancement of Science, have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than as single-researcher, single-discipline ones.
At the same time, many thriving, longstanding bachelor's programs in interdisciplinary studies, some in existence for 30 or more years, have been closed down in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others, such as the Department of Interdisciplinary Studies at Appalachian State University and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry, driven by perceptions of threat arising from the ascendancy of interdisciplinary studies against traditional academia.
Examples
Communication science: Communication studies takes up theories, models, and concepts from other, independent disciplines such as sociology, political science, and economics, and develops them further in a distinctive way.
Environmental science: Environmental science is an interdisciplinary earth science aimed at addressing environmental issues such as global warming and pollution, and involves the use of a wide range of scientific disciplines including geology, chemistry, physics, ecology, and oceanography. Faculty members of environmental programs often collaborate in interdisciplinary teams to solve complex global environmental problems. Those who study areas of environmental policy such as environmental law, sustainability, and environmental justice, may also seek knowledge in the environmental sciences to better develop their expertise and understanding in their fields.
Knowledge management: The knowledge management discipline exists as a cluster of divergent schools of thought under an overarching knowledge management umbrella, building on work in computer science, economics, human resource management, information systems, organizational behavior, philosophy, psychology, and strategic management.
Liberal arts education: A select realm of disciplines that cut across the humanities, social sciences, and hard sciences, initially intended to provide a well-rounded education. Several graduate programs exist in some form of Master of Arts in Liberal Studies to continue to offer this interdisciplinary course of study.
Materials science: Field that combines the scientific and engineering aspects of materials, particularly solids. It covers the design, discovery and application of new materials by incorporating elements of physics, chemistry, and engineering.
Permaculture: A holistic design science that provides a framework for making design decisions in any sphere of human endeavor, but especially in land use and resource security.
Provenance research: Interdisciplinary research comes into play when clarifying the path of artworks into public and private art collections and also in relation to human remains in natural history collections.
Sports science: Sport science is an interdisciplinary science that investigates the problems and phenomena of sport and movement in cooperation with a number of other sciences, such as sociology, ethics, biology, medicine, biomechanics, and pedagogy.
Transport sciences: Transport sciences deal with the problems and events of the world of transport, cooperating with the specialised legal, ecological, technical, psychological, and pedagogical disciplines to analyse the movements of people, goods, and messages that characterise it. (Hendrik Ammoser, Mirko Hoppe: Glossary of Transport and Transport Sciences (PDF; 1.3 MB), published in the series Discussion Papers from the Institute of Economics and Transport, Technische Universität Dresden, Dresden 2006.)
Venture research: Venture research is an interdisciplinary research area located in the human sciences that deals with the conscious entering into and experiencing of borderline situations. For this purpose, findings from evolutionary theory, cultural anthropology, the social sciences, behavioral research, differential psychology, ethics, and pedagogy are cooperatively processed and evaluated. (Siegbert A. Warwitz: Vom Sinn des Wagens. Why people take on dangerous challenges. In: German Alpine Association (ed.): Berg 2006. Tyrolia Publishing House, Munich-Innsbruck-Bolzano, pp. 96-111.)
Historical examples
There are many examples of when a particular idea, almost in the same period, arises in different disciplines. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective), to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in painting (with cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to the era shaped by the instant speed of electricity, which brought simultaneity.
Efforts to simplify and defend the concept
An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity:
In turn, interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration.
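These four variables invite a simple scoring exercise. The following is a minimal illustrative sketch only: the article prescribes no weights or scales, so the equal weighting, the 0-to-1 normalization, the cap on discipline count, and all names below are assumptions made for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Combination:
    """One instance of interdisciplinary knowledge, research, or education."""
    name: str
    n_disciplines: int   # number of disciplines involved
    distance: float      # 0..1, assumed "distance" between the disciplines
    novelty: float       # 0..1, novelty of this particular combination
    integration: float   # 0..1, extent to which the disciplines are integrated

def richness(c: Combination, max_disciplines: int = 10) -> float:
    # Normalize the discipline count, then average the four variables equally.
    breadth = min(c.n_disciplines, max_disciplines) / max_disciplines
    return (breadth + c.distance + c.novelty + c.integration) / 4

# Hypothetical examples, ranked by the assumed score.
projects = [
    Combination("bioinformatics course", 2, 0.7, 0.3, 0.8),
    Combination("sustainability programme", 5, 0.6, 0.5, 0.4),
]
for p in sorted(projects, key=richness, reverse=True):
    print(f"{p.name}: {richness(p):.2f}")
```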
Interdisciplinary knowledge and research are important because:
"Creativity often requires interdisciplinary knowledge.
Immigrants often make important contributions to their new field.
Disciplinarians often commit errors which can be best detected by people familiar with two or more disciplines.
Some worthwhile topics of research fall in the interstices among the traditional disciplines.
Many intellectual, social, and practical problems require interdisciplinary approaches.
Interdisciplinary knowledge and research serve to remind us of the unity-of-knowledge ideal.
Interdisciplinarians enjoy greater flexibility in their research.
More so than narrow disciplinarians, interdisciplinarians often treat themselves to the intellectual equivalent of traveling in new lands.
Interdisciplinarians may help breach communication gaps in the modern academy, thereby helping to mobilize its enormous intellectual resources in the cause of greater social rationality and justice.
By bridging fragmented disciplines, interdisciplinarians might play a role in the defense of academic freedom."
See also
Commensurability (philosophy of science)
Double degree
Encyclopedism
Holism
Holism in science
Integrative learning
Interdiscipline
Interdisciplinary arts
Interdisciplinary teaching
Interprofessional education
Meta-functional expertise
Methodology
Polymath
Science of team science
Social ecological model
Science and technology studies (STS)
Synoptic philosophy
Systems theory
Thematic learning
Periodic table of human sciences in Tinbergen's four questions
Transdisciplinarity
Further reading
Association for Interdisciplinary Studies
Center for the Study of Interdisciplinarity
Centre for Interdisciplinary Research in the Arts (University of Manchester)
College for Interdisciplinary Studies, University of British Columbia, Vancouver, British Columbia, Canada
Frank, Roberta: "'Interdisciplinarity': The First Half Century", Issues in Integrative Studies 6 (1988): 139–151.
Frodeman, R., Klein, J.T., and Mitcham, C. Oxford Handbook of Interdisciplinarity. Oxford University Press, 2010.
The Evergreen State College, Olympia, Washington
Gram Vikas (2007) Annual Report, p. 19.
Hang Seng Centre for Cognitive Studies
Indiresan, P.V. (1990) Managing Development: Decentralisation, Geographical Socialism And Urban Replication. India: Sage
Interdisciplinary Arts Department, Columbia College Chicago
Interdisciplinarity and tenure
Interdisciplinary Studies Project, Harvard University School of Education, Project Zero
Klein, Julie Thompson (1996) Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities (University Press of Virginia)
Klein, Julie Thompson (2006) "Resources for interdisciplinary studies." Change, (March/April). 52–58
Klein, Julie Thompson and Thorsten Philipp (2023), "Interdisciplinarity" in Handbook Transdisciplinary Learning. Eds. Thorsten Philipp and Tobias Schmohl, 195–204. Bielefeld: transcript. doi: 10.14361/9783839463475-021.
Kockelmans, Joseph J. editor (1979) Interdisciplinarity and Higher Education, The Pennsylvania State University Press.
Yifang Ma, Roberta Sinatra, Michael Szell, Interdisciplinarity: A Nobel Opportunity, November 2018
Gerhard Medicus: Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB, 2017.
Moran, Joe. (2002). Interdisciplinarity.
Morson, Gary Saul and Morton O. Schapiro (2017). Cents and Sensibility: What Economics Can Learn from the Humanities. (Princeton University Press)
NYU Gallatin School of Individualized Study, New York, NY
Poverty Action Lab
Rhoten, D. (2003). A multi-method analysis of the social and technical conditions for interdisciplinary collaboration.
School of Social Ecology at the University of California, Irvine
Siskin, L.S. & Little, J.W. (1995). The Subjects in Question. Teachers College Press. about the departmental organization of high schools and efforts to change that.
Stiglitz, Joseph (2002) Globalisation and its Discontents, United States of America, W.W. Norton and Company
Sumner, A and M. Tribe (2008) International Development Studies: Theories and Methods in Research and Practice, London: Sage
Thorbecke, Eric. (2006) "The Evolution of the Development Doctrine, 1950–2005". UNU-WIDER Research Paper No. 2006/155. United Nations University, World Institute for Development Economics Research
Trans- & inter-disciplinary science approaches- A guide to on-line resources on integration and trans- and inter-disciplinary approaches.
Truman State University's Interdisciplinary Studies Program
Peter Weingart and Nico Stehr, eds. 2000. Practicing Interdisciplinarity (University of Toronto Press)
External links
Association for Interdisciplinary Studies
National Science Foundation Workshop Report: Interdisciplinary Collaboration in Innovative Science and Engineering Fields
Rethinking Interdisciplinarity online conference, organized by the Institut Nicod, CNRS, Paris
Center for the Study of Interdisciplinarity at the University of North Texas
Labyrinthe. Atelier interdisciplinaire, a journal (in French), with a special issue on La Fin des Disciplines?
Rupkatha Journal on Interdisciplinary Studies in Humanities: An Online Open Access E-Journal, publishing articles on a number of areas
Article about interdisciplinary modeling (in French with an English abstract)
Wolf, Dieter. Unity of Knowledge, an interdisciplinary project
Soka University of America has no disciplinary departments and emphasizes interdisciplinary concentrations in the Humanities, Social and Behavioral Sciences, International Studies, and Environmental Studies.
SystemsX.ch – The Swiss Initiative in Systems Biology
Tackling Your Inner 5-Year-Old: Saving the world requires an interdisciplinary perspective
Scenario planning
Scenario planning, scenario thinking, scenario analysis, scenario prediction and the scenario method all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence.
In the most common application of the method, analysts generate simulation games for policy makers. The method combines known facts, such as demographics, geography and mineral reserves, with military, political, and industrial information, and key driving forces identified by considering social, technical, economic, environmental, and political ("STEEP") trends.
In business applications, the emphasis on understanding the behavior of opponents has been reduced while more attention is now paid to changes in the natural environment. At Royal Dutch Shell for example, scenario planning has been described as changing mindsets about the exogenous part of the world prior to formulating specific strategies.
Scenario planning may involve aspects of systems thinking, specifically the recognition that many factors may combine in complex ways to create sometimes surprising futures (due to non-linear feedback loops). The method also allows the inclusion of factors that are difficult to formalize, such as novel insights about the future, deep shifts in values, and unprecedented regulations or inventions. Systems thinking used in conjunction with scenario planning leads to plausible scenario storylines because the causal relationship between factors can be demonstrated. These cases, in which scenario planning is integrated with a systems thinking approach to scenario development, are sometimes referred to as "dynamic scenarios".
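As a toy, code-level illustration (not taken from the source text) of why non-linear feedback defeats straight-line extrapolation, consider a system governed by a very simple non-linear update rule: two nearly identical presents can produce entirely different futures, which is why planners explore several scenarios rather than one trend line. The rule and starting values below are arbitrary.

```python
# Logistic map: a minimal non-linear feedback loop (illustrative choice only).
def step(x: float, r: float = 3.9) -> float:
    return r * x * (1 - x)

a, b = 0.400, 0.401  # two nearly indistinguishable starting conditions
for _ in range(20):
    a, b = step(a), step(b)
# After 20 steps the two trajectories no longer resemble each other.
print(f"after 20 steps: {a:.3f} vs {b:.3f}")
```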
Critics of using a subjective and heuristic methodology to deal with uncertainty and complexity argue that the technique has not been examined rigorously, nor influenced sufficiently by scientific evidence. They caution against using such methods to "predict" based on what can be described as arbitrary themes and "forecasting techniques".
A challenge and a strength of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". As a consequence, societal predictions can become self-destructing. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue.
Principle
Crafting scenarios
Combinations and permutations of fact and related social changes are called "scenarios". Scenarios usually include plausible, but unexpectedly important, situations and problems that exist in some nascent form in the present day. Any particular scenario is unlikely. However, futures studies analysts select scenario features so they are both possible and uncomfortable. Scenario planning helps policy-makers and firms anticipate change, prepare responses, and create more robust strategies.
Scenario planning helps a firm anticipate the impact of different scenarios and identify weaknesses. When anticipated years in advance, those weaknesses can be avoided or their impacts reduced more effectively than when similar real-life problems are considered under the duress of an emergency. For example, a company may discover that it needs to change contractual terms to protect against a new class of risks, or collect cash reserves to purchase anticipated technologies or equipment. Flexible business continuity plans with "PREsponse protocols" can help cope with similar operational problems and deliver measurable future value.
Zero-sum game scenarios
Strategic military intelligence organizations also construct scenarios. The methods and organizations are almost identical, except that scenario planning is applied to a wider variety of problems than merely military and political problems.
As in military intelligence, the chief challenge of scenario planning is to find out the real needs of policy-makers, when policy-makers may not themselves know what they need to know, or may not know how to describe the information that they really want.
Good analysts design wargames so that policy makers have great flexibility and freedom to adapt their simulated organizations. Then these simulated organizations are "stressed" by the scenarios as a game plays out. Usually, particular groups of facts become more clearly important. These insights enable intelligence organizations to refine and repackage real information more precisely to better serve the policy-makers' real-life needs. Usually the games' simulated time runs hundreds of times faster than real life, so policy-makers experience several years of policy decisions, and their simulated effects, in less than a day.
The chief value of scenario planning is that it allows policy-makers to make and learn from mistakes without risking career-limiting failures in real life. Further, policymakers can make these mistakes in a safe, unthreatening, game-like environment, while responding to a wide variety of concretely presented situations based on facts. This is an opportunity to "rehearse the future", an opportunity that does not present itself in day-to-day operations where every action and decision counts.
How military scenario planning or scenario thinking is done
Decide on the key question to be answered by the analysis. By doing this, it is possible to assess whether scenario planning is preferred over the other methods. If the question is based on small changes or a very small number of elements, other more formalized methods may be more useful.
Set the time and scope of the analysis. Take into consideration how quickly changes have happened in the past, and try to assess to what degree it is possible to predict common trends in demographics, product life cycles, and the like. A usual timeframe is five to ten years.
Identify major stakeholders. Decide who will be affected by and have an interest in the possible outcomes. Identify their current interests, and whether and why these interests have changed over time.
Map basic trends and driving forces. This includes industry, economic, political, technological, legal, and societal trends. Assess to what degree these trends will affect your research question, and describe each trend, including how and why it will affect the organisation. In this step of the process, brainstorming is commonly used, where all trends that can be thought of are presented before they are assessed, to counter possible group-think and tunnel vision.
Find key uncertainties. Map the driving forces on two axes, assessing each force on an uncertain/(relatively) predictable and important/unimportant scale. All driving forces that are considered unimportant are discarded. Important driving forces that are relatively predictable (e.g. demographics) can be included in any scenario, so the scenarios should not be based on these. This leaves you with a number of important and unpredictable driving forces. At this point, it is also useful to assess whether any linkages between driving forces exist, and to rule out any "impossible" scenarios (e.g. full employment and zero inflation). A minimal code sketch of this filtering step appears after this list.
Check whether the linked forces can be grouped and, if possible, reduce the forces to the two most important (so that the scenarios can be presented in a neat x–y diagram).
Identify the extremes of the possible outcomes of the two driving forces and check the dimensions for consistency and plausibility. Three key points should be assessed:
Time frame: are the trends compatible within the time frame in question?
Internal consistency: do the forces describe uncertainties that can construct probable scenarios?
Versus the stakeholders: are any stakeholders currently in disequilibrium compared to their preferred situation, and will this evolve the scenario? Is it possible to create probable scenarios when considering the stakeholders? This is most important when creating macro-scenarios where governments, large organisations et al. will try to influence the outcome.
Define the scenarios, plotting them on a grid if possible. Usually, two to four scenarios are constructed. The current situation does not need to be in the middle of the diagram (inflation may already be low), and possible scenarios may keep one (or more) of the forces relatively constant, especially if using three or more driving forces. One approach is to put all positive elements into one scenario and all negative elements (relative to the current situation) into another, then refine these. In the end, try to avoid pure best-case and worst-case scenarios.
Write out the scenarios. Narrate what has happened and what the reasons can be for the proposed situation. Try to include good reasons why the changes have occurred as this helps the further analysis. Finally, give each scenario a descriptive (and catchy) name to ease later reference.
Assess the scenarios. Are they relevant for the goal? Are they internally consistent? Are they archetypical? Do they represent relatively stable outcome situations?
Identify research needs. Based on the scenarios, assess where more information is needed. Where needed, obtain more information on the motivations of stakeholders, possible innovations that may occur in the industry and so on.
Develop quantitative methods. If possible, develop models to help quantify the consequences of the various scenarios, such as growth rate, cash flow etc. This step does of course require a significant amount of work compared to the others, and may be left out in back-of-the-envelope analyses.
Converge towards decision scenarios. Retrace the steps above in an iterative process until you reach scenarios which address the fundamental issues facing the organization. Try to assess upsides and downsides of the possible scenarios.
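As referenced in the "find key uncertainties" step above, the importance/uncertainty mapping can be sketched in code. This is an illustrative sketch only: the driver names, scores, and 0.5 thresholds are invented, and a real workshop would score drivers through discussion rather than by fiat.

```python
# Score each driving force for importance and uncertainty (0..1, assumed values),
# then sort into: discarded, common background, and candidate scenario axes.
drivers = {
    # name: (importance, uncertainty)
    "demographics":       (0.8, 0.2),
    "oil price":          (0.9, 0.9),
    "regulatory climate": (0.7, 0.8),
    "office dress codes": (0.1, 0.6),
}

IMPORTANT, UNCERTAIN = 0.5, 0.5  # illustrative cut-offs

discarded  = [n for n, (imp, unc) in drivers.items() if imp < IMPORTANT]
background = [n for n, (imp, unc) in drivers.items()
              if imp >= IMPORTANT and unc < UNCERTAIN]
axes       = [n for n, (imp, unc) in drivers.items()
              if imp >= IMPORTANT and unc >= UNCERTAIN]

print("discard:", discarded)              # unimportant, regardless of uncertainty
print("common to all scenarios:", background)  # important but predictable
print("candidate scenario axes:", axes)   # reduce these to the two most important
```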
Use by managers
The basic concepts of the process are relatively simple. In terms of the overall approach to forecasting, they can be divided into three main groups of activities (which are, generally speaking, common to all long range forecasting processes):
Environmental analysis
Scenario planning
Corporate strategy
The first of these groups quite simply comprises the normal environmental analysis. This is almost exactly the same as that which should be undertaken as the first stage of any serious long-range planning. However, the quality of this analysis is especially important in the context of scenario planning.
The central part represents the specific techniques – covered here – which differentiate the scenario forecasting process from the others in long-range planning.
The final group represents all the subsequent processes which go towards producing the corporate strategy and plans. Again, the requirements are slightly different but in general they follow all the rules of sound long-range planning.
Applications
Business
In the past, strategic plans have often considered only the "official future", which was usually a straight-line graph of current trends carried into the future. Often the trend lines were generated by the accounting department, and lacked discussions of demographics, or qualitative differences in social conditions.
These simplistic guesses are surprisingly good most of the time, but fail to consider qualitative social changes that can affect a business or government. Paul J. H. Schoemaker offered a strong managerial case for the use of scenario planning in business, and his work had wide impact.
The approach may have had more impact outside Shell than within, as many other firms and consultancies also started to benefit from scenario planning. Scenario planning is as much art as science, and prone to a variety of traps (both in process and content), as enumerated by Schoemaker. More recently, scenario planning has been discussed as a tool to improve strategic agility, by cognitively preparing not only multiple scenarios but also multiple consistent strategies.
Military
Scenario planning is also extremely popular with military planners. Most states' defense ministries maintain a continuously updated series of strategic plans to cope with well-known military or strategic problems. These plans are almost always based on scenarios, and often the plans and scenarios are kept up-to-date by war games, sometimes played out with real troops. This process was first carried out by (and arguably the method was invented by) the Prussian general staff of the mid-19th century.
Finance
In economics and finance, a financial institution might use scenario analysis to forecast several possible scenarios for the economy (e.g. rapid growth, moderate growth, slow growth) and for financial returns (for bonds, stocks, cash, etc.) in each of those scenarios. It might consider sub-sets of each of the possibilities. It might further seek to determine correlations and assign probabilities to the scenarios (and sub-sets if any). Then it will be in a position to consider how to distribute assets between asset types (i.e. asset allocation); the institution can also calculate the scenario-weighted expected return (which figure will indicate the overall attractiveness of the financial environment). It may also perform stress testing, using adverse scenarios.
Depending on the complexity of the problem, scenario analysis can be a demanding exercise. It can be difficult to foresee what the future holds (e.g. the actual future outcome may be entirely unexpected), i.e. to foresee what the scenarios are, and to assign probabilities to them; and this is true of the general forecasts, never mind the implied financial market returns. The outcomes can be modeled mathematically/statistically, e.g. taking account of possible variability within single scenarios as well as possible relationships between scenarios. In general, one should take care when assigning probabilities to different scenarios, as this could invite a tendency to consider only the scenario with the highest probability.
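To make the arithmetic concrete, here is a minimal sketch of the scenario-weighted expected return described above. All scenario names, probabilities, asset weights, and returns are invented for illustration; a real exercise would also model correlations, sub-scenarios, and adverse stress cases.

```python
# Each scenario carries a probability and per-asset expected returns (assumed).
scenarios = {
    # name: (probability, {asset: expected return in that scenario})
    "rapid growth":    (0.25, {"stocks": 0.12,  "bonds": 0.02, "cash": 0.03}),
    "moderate growth": (0.50, {"stocks": 0.07,  "bonds": 0.04, "cash": 0.03}),
    "slow growth":     (0.25, {"stocks": -0.02, "bonds": 0.06, "cash": 0.03}),
}
# Probabilities must sum to one.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

weights = {"stocks": 0.6, "bonds": 0.3, "cash": 0.1}  # candidate asset allocation

# Scenario-weighted expected return: sum over scenarios of
# probability * (portfolio return in that scenario).
expected = sum(
    p * sum(weights[a] * r for a, r in returns.items())
    for p, returns in scenarios.values()
)
print(f"scenario-weighted expected portfolio return: {expected:.2%}")  # 5.10%
```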
Geopolitics
In politics or geopolitics, scenario analysis involves reflecting on the possible alternative paths of a social or political environment and possibly diplomatic and war risks.
History of use by academic and commercial organizations
Most authors attribute the introduction of scenario planning to Herman Kahn through his work for the US military in the 1950s at the RAND Corporation, where he developed a technique of describing the future in stories, as if written by people in the future. He adopted the term "scenarios" to describe these stories. In 1961 he founded the Hudson Institute, where he expanded his scenario work to social forecasting and public policy. One of his most controversial uses of scenarios was to suggest that a nuclear war could be won. Though Kahn is often cited as the father of scenario planning, at the same time that he was developing his methods at RAND, Gaston Berger was developing similar methods at the Centre d'Etudes Prospectives, which he founded in France. His method, which he named 'La Prospective', was to develop normative scenarios of the future which were to be used as a guide in formulating public policy. During the mid-1960s various authors from the French and American institutions began to publish scenario planning concepts, such as 'La Prospective' by Berger in 1964 and 'The Next Thirty-Three Years' by Kahn and Wiener in 1967. By the 1970s scenario planning was in full swing, with a number of institutions now established to provide support to business, including the Hudson Foundation, the Stanford Research Institute (now SRI International), and the SEMA Metra Consulting Group in France. Several large companies also began to embrace scenario planning, including DHL Express, Royal Dutch Shell and General Electric.
Possibly as a result of these very sophisticated approaches, and of the difficult techniques they employed (which usually demanded the resources of a central planning staff), scenarios earned a reputation for difficulty (and cost) in use. Even so, the theoretical importance of the use of alternative scenarios, to help address the uncertainty implicit in long-range forecasts, was dramatically underlined by the widespread confusion which followed the Oil Shock of 1973. As a result, many of the larger organizations started to use the technique in one form or another. By 1983 Diffenbach reported that 'alternate scenarios' were the third most popular technique for long-range forecasting – used by 68% of the large organizations he surveyed.
Practical development of scenario forecasting, to guide strategy rather than for the more limited academic uses which had previously been the case, was started by Pierre Wack in 1971 at the Royal Dutch Shell group of companies – and it, too, was given impetus by the Oil Shock two years later. Shell has, since that time, led the commercial world in the use of scenarios – and in the development of more practical techniques to support these. Indeed, as – in common with most forms of long-range forecasting – the use of scenarios has (during the depressed trading conditions of the last decade) reduced to only a handful of private-sector organisations, Shell remains almost alone amongst them in keeping the technique at the forefront of forecasting.
There has only been anecdotal evidence offered in support of the value of scenarios, even as aids to forecasting; and most of this has come from one company – Shell. In addition, with so few organisations making consistent use of them – and with the timescales involved reaching into decades – it is unlikely that any definitive supporting evidence will be forthcoming in the foreseeable future. For the same reasons, though, a lack of such proof applies to almost all long-range planning techniques. In the absence of proof, but taking account of Shell's well-documented experiences of using it over several decades (where, in the 1990s, its then CEO ascribed its success to its use of such scenarios), there may be significant benefit to be obtained from extending the horizons of managers' long-range forecasting in the way that the use of scenarios uniquely does.
Process
The part of the overall process which is radically different from most other forms of long-range planning is the central section, the actual production of the scenarios. Even this, though, is relatively simple, at its most basic level. As derived from the approach most commonly used by Shell, it follows six steps:
Decide drivers for change/assumptions
Bring drivers together into a viable framework
Produce 7–9 initial mini-scenarios
Reduce to 2–3 scenarios
Draft the scenarios
Identify the issues arising
Step 1 – decide assumptions/drivers for change
The first stage is to examine the results of environmental analysis to determine which are the most important factors that will decide the nature of the future environment within which the organisation operates. These factors are sometimes called 'variables' (because they will vary over the time being investigated, though the terminology may confuse scientists who use it in a more rigorous manner). Users tend to prefer the term 'drivers' (for change), since this terminology is not laden with quasi-scientific connotations and reinforces the participant's commitment to search for those forces which will act to change the future. Whatever the nomenclature, the main requirement is that these will be informed assumptions.
This is partly a process of analysis, needed to recognise what these 'forces' might be. However, it is likely that some work on this element will already have taken place during the preceding environmental analysis. By the time the formal scenario planning stage has been reached, the participants may have already decided – probably in their sub-conscious rather than formally – what the main forces are.
In the ideal approach, the first stage should be to carefully decide the overall assumptions on which the scenarios will be based. Only then, as a second stage, should the various drivers be specifically defined. Participants, though, seem to have problems in separating these stages.
Perhaps the most difficult aspect, though, is freeing the participants from the preconceptions they take into the process with them. In particular, most participants will want to look at the medium term, five to ten years ahead, rather than the required longer term, ten or more years ahead. However, a time horizon of anything less than ten years often leads participants to extrapolate from present trends, rather than consider the alternatives which might face them. When, however, they are asked to consider timescales in excess of ten years they almost all seem to accept the logic of the scenario planning process, and no longer fall back on that of extrapolation. There is a similar problem with expanding participants' horizons to include the whole external environment.
Brainstorming
In any case, the brainstorming which should then take place, to ensure that the list is complete, may unearth more variables – and, in particular, the combination of factors may suggest yet others.
A very simple technique which is especially useful at this – brainstorming – stage, and in general for handling scenario planning debates, is derived from Shell, where this type of approach is often used. An especially easy approach, it only requires a conference room with a bare wall and copious supplies of 3M Post-It Notes.
The six to ten people ideally taking part in such face-to-face debates should be in a conference room environment which is isolated from outside interruptions. The only special requirement is that the conference room has at least one clear wall on which Post-It notes will stick. At the start of the meeting itself, any topics which have already been identified during the environmental analysis stage are written (preferably with a thick magic marker, so they can be read from a distance) on separate Post-It Notes. These Post-It Notes are then, at least in theory, randomly placed on the wall. In practice, even at this early stage the participants will want to cluster them in groups which seem to make sense. The only requirement (which is why Post-It Notes are ideal for this approach) is that there is no bar to taking them off again and moving them to a new cluster.
A similar technique – using 5" by 3" index cards – has also been described (as the 'Snowball Technique'), by Backoff and Nutt, for grouping and evaluating ideas in general.
As in any form of brainstorming, the initial ideas almost invariably stimulate others. Indeed, everyone should be encouraged to add their own Post-It Notes to those on the wall. However, it differs from the 'rigorous' form described in 'creative thinking' texts in that it is much slower paced and the ideas are discussed immediately. In practice, as many ideas may be removed (as not being relevant) as are added. Even so, it follows many of the same rules as normal brainstorming and typically lasts the same length of time – say, an hour or so only.
It is important that all the participants feel they 'own' the wall – and are encouraged to move the notes around themselves. The result is a very powerful form of creative decision-making for groups, which is applicable to a wide range of situations (but is especially powerful in the context of scenario planning). It also offers a very good introduction for those who are coming to the scenario process for the first time. Since the workings are largely self-evident, participants very quickly come to understand exactly what is involved.
Important and uncertain
This step is, though, also one of selection – since only the most important factors will justify a place in the scenarios. The 80:20 Rule here means that, at the end of the process, management's attention must be focused on a limited number of most important issues. Experience has proved that offering a wider range of topics merely allows them to select those few which interest them, and not necessarily those which are most important to the organisation.
In addition, as scenarios are a technique for presenting alternative futures, the factors to be included must be genuinely 'variable'. They should be subject to significant alternative outcomes. Factors whose outcome is predictable, but important, should be spelled out in the introduction to the scenarios (since they cannot be ignored). The Important Uncertainties Matrix, as reported by Kees van der Heijden of Shell, is a useful check at this stage.
At this point it is also worth pointing out that a great virtue of scenarios is that they can accommodate the input from any other form of forecasting. They may use figures, diagrams or words in any combination. No other form of forecasting offers this flexibility.
Step 2 – bring drivers together into a viable framework
The next step is to link these drivers together to provide a meaningful framework. This may be obvious, where some of the factors are clearly related to each other in one way or another. For instance, a technological factor may lead to market changes, but may be constrained by legislative factors. On the other hand, some of the 'links' (or at least the 'groupings') may need to be artificial at this stage. At a later stage more meaningful links may be found, or the factors may then be rejected from the scenarios. In the most theoretical approaches to the subject, probabilities are attached to the event strings. This is difficult to achieve, however, and generally adds little – except complexity – to the outcomes.
This is probably the most (conceptually) difficult step. It is where managers' 'intuition' – their ability to make sense of complex patterns of 'soft' data which more rigorous analysis would be unable to handle – plays an important role. There are, however, a range of techniques which can help; and again the Post-It-Notes approach is especially useful:
Thus, the participants try to arrange the drivers which have emerged from the first stage into groups which seem to make sense to them. Initially there may be many small groups. The intention should, therefore, be to gradually merge these (often having to reform them from new combinations of drivers to make these bigger groups work). The aim of this stage is eventually to make 6–8 larger groupings: 'mini-scenarios'. Here the Post-It Notes may be moved dozens of times over the length – perhaps several hours or more – of each meeting. While this process is taking place the participants will probably want to add new topics – so more Post-It Notes are added to the wall. In the opposite direction, the unimportant ones are removed (possibly to be grouped, again as an 'audit trail', on another wall). More importantly, the 'certain' topics are also removed from the main area of debate – in this case they must be grouped in a clearly labelled area of the main wall.
As the clusters – the 'mini-scenarios' – emerge, the associated notes may be stuck to each other rather than individually to the wall; which makes it easier to move the clusters around (and is a considerable help during the final, demanding stage to reducing the scenarios to two or three).
The great benefit of using Post-It Notes is that there is no bar to participants changing their minds. If they want to rearrange the groups – or simply to go back (iterate) to an earlier stage – then they strip them off and put them in their new position.
Step 3 – produce initial mini-scenarios
The outcome of the previous step is usually between seven and nine logical groupings of drivers. This is usually easy to achieve. The 'natural' reason for this may be that it represents some form of limit as to what participants can visualise.
Having placed the factors in these groups, the next action is to work out, very approximately at this stage, what is the connection between them. What does each group of factors represent?
Step 4 – reduce to two or three scenarios
The main action, at this next stage, is to reduce the seven to nine mini-scenarios/groupings detected at the previous stage to two or three larger scenarios.
There is no theoretical reason for reducing to just two or three scenarios, only a practical one. It has been found that the managers who will be asked to use the final scenarios can only cope effectively with a maximum of three versions. Shell started, more than three decades ago, by building half a dozen or more scenarios – but found that the outcome was that their managers selected just one of these to concentrate on. As a result, the planners reduced the number to three, which managers could handle easily and from which they could no longer so easily justify the selection of only one. This is the number now recommended most frequently in most of the literature.
Complementary scenarios
As used by Shell, and as favoured by a number of the academics, two scenarios should be complementary; the reason being that this helps avoid managers 'choosing' just one 'preferred' scenario – and lapsing once more into single-track forecasting (negating the benefits of using 'alternative' scenarios to allow for alternative, uncertain futures). This is, however, a potentially difficult concept to grasp, where managers are used to looking for opposites: a good and a bad scenario, say, or an optimistic one versus a pessimistic one – and indeed this is the approach (for small businesses) advocated by Foster. In the Shell approach, the two scenarios are required to be equally likely, and between them to cover all the 'event strings'/drivers. Ideally they should not be obvious opposites, which might once again bias their acceptance by users, so the choice of 'neutral' titles is important. For example, Shell's two scenarios at the beginning of the 1990s were titled 'Sustainable World' and 'Global Mercantilism'. In practice, we found that this requirement, much to our surprise, posed few problems for the great majority (85%) of those in the survey, who easily produced 'balanced' scenarios. The remaining 15% mainly fell into the expected trap of 'good versus bad'. We have found that our own relatively complex (OBS) scenarios can also be made complementary to each other, without any great effort needed from the teams involved; and the resulting two scenarios are both developed further by all involved, without unnecessary focusing on one or the other.
Testing
Having grouped the factors into these two scenarios, the next step is to test them, again, for viability. Do they make sense to the participants? This may be in terms of logical analysis, but it may also be in terms of intuitive 'gut-feel'. Once more, intuition often may offer a useful – if academically less respectable – vehicle for reacting to the complex and ill-defined issues typically involved. If the scenarios do not intuitively 'hang together', why not? The usual problem is that one or more of the assumptions turns out to be unrealistic in terms of how the participants see their world. If this is the case then you need to return to the first step – the whole scenario planning process is above all an iterative one (returning to its beginnings a number of times until the final outcome makes the best sense).
Step 5 – write the scenarios
The scenarios are then 'written up' in the most suitable form. The flexibility of this step often confuses participants, who are used to forecasting processes with a fixed format. The rule, though, is that you should produce the scenarios in the form most suitable for use by the managers who are going to base their strategy on them. Less obviously, the managers who are going to implement this strategy should also be taken into account: they too will be exposed to the scenarios and will need to believe in them. This is essentially a 'marketing' decision, since it will be very necessary to 'sell' the final results to the users. On the other hand, a not inconsiderable factor may be the form the author finds most comfortable; if the form is alien to him or her, the chances are that the resulting scenarios will carry little conviction when it comes to the 'sale'.
Most scenarios will perhaps be written in word form (almost as a series of alternative essays about the future), especially since they will almost inevitably be qualitative; this is hardly surprising where managers, and their audience, will probably use them in their day-to-day communications. Some, though, use an expanded series of lists, and some enliven their reports by adding fictional 'characters' to the material – perhaps taking literally the idea that they are stories about the future – though the scenarios are still clearly intended to be factual. On the other hand, they may include numeric data and/or diagrams – as those of Shell do (and in the process gain by the acid test of more measurable 'predictions').
Step 6 – identify issues arising
The final stage of the process is to examine these scenarios to determine the most critical outcomes: the 'branching points' relating to the 'issues' that will have the greatest impact (potentially generating 'crises') on the future of the organisation. The subsequent strategy will have to address these, since the normal approach to strategy deriving from scenarios aims to minimise risk by being 'robust' (that is, able to cope safely with all the alternative outcomes of these 'life and death' issues) rather than aiming for performance (profit) maximisation by gambling on one outcome.
Use of scenarios
Scenarios may be used in a number of ways:
a) Containers for the drivers/event strings
Most basically, they are a logical device, an artificial framework, for presenting the individual factors/topics (or coherent groups of these) so that these are made easily available for managers' use – as useful ideas about future developments in their own right – without reference to the rest of the scenario. It should be stressed that no factors should be dropped, or even given lower priority, as a result of producing the scenarios. In this context, which scenario contains which topic (driver), or issue about the future, is irrelevant.
b) Tests for consistency
At every stage it is necessary to iterate, to check that the contents are viable and make any necessary changes to ensure that they are; here the main test is to see if the scenarios seem to be internally consistent – if they are not then the writer must loop back to earlier stages to correct the problem. Though it has been mentioned previously, it is important to stress once again that scenario building is ideally an iterative process. It usually does not just happen in one meeting – though even one attempt is better than none – but takes place over a number of meetings as the participants gradually refine their ideas.
c) Positive perspectives
Perhaps the main benefit deriving from scenarios, however, comes from the alternative 'flavours' of the future their different perspectives offer. It is a common experience, when the scenarios finally emerge, for the participants to be startled by the insight they offer – as to what the general shape of the future might be – for at this stage it is no longer a theoretical exercise but becomes a genuine framework (or rather a set of alternative frameworks) for dealing with that future.
Scenario planning compared to other techniques
Scenario planning differs from contingency planning, sensitivity analysis and computer simulations.
Contingency planning is a 'what if' tool that takes only one uncertainty into account. Scenario planning, by contrast, considers combinations of uncertainties in each scenario. Planners also try to select especially plausible but uncomfortable combinations of social developments.
Sensitivity analysis analyzes changes in one variable only, which is useful for simple changes, while scenario planning tries to expose policy makers to significant interactions of major variables.
While scenario planning can benefit from computer simulations, scenario planning is less formalized, and can be used to make plans for qualitative patterns that show up in a wide variety of simulated events.
In recent years, computer-supported morphological analysis has been employed as an aid in scenario development by the Swedish Defence Research Agency in Stockholm. This method makes it possible to create a multi-variable morphological field which can be treated as an inference model, thus integrating scenario planning techniques with contingency analysis and sensitivity analysis.
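To illustrate the general idea only – this is a minimal sketch with invented parameters and consistency judgments, not the agency's actual software – a morphological field can be enumerated exhaustively, filtered by pairwise cross-consistency, and then queried like an inference model by fixing one parameter and reading off the states still available to the others:

```python
# Toy morphological field: parameters and their possible states.
# (State names are assumed unique across parameters in this simple version.)
from itertools import product

field = {
    "oil_price":  ["low", "high"],
    "regulation": ["lax", "strict"],
    "demand":     ["falling", "growing"],
}

# Pairs of states judged mutually inconsistent in a cross-consistency assessment.
inconsistent = {
    ("low", "strict"),    # cheap oil with strict regulation judged implausible here
    ("high", "falling"),  # high prices with falling demand judged implausible here
}

def consistent(combo):
    """A combination survives if no pair of its states is marked inconsistent."""
    states = list(combo)
    return not any(
        (a, b) in inconsistent or (b, a) in inconsistent
        for i, a in enumerate(states) for b in states[i + 1:]
    )

names = list(field)
solutions = [dict(zip(names, combo))
             for combo in product(*field.values()) if consistent(combo)]

# "Inference": fix one parameter and read off the states still possible elsewhere.
fixed = {"regulation": "strict"}
compatible = [s for s in solutions if all(s[k] == v for k, v in fixed.items())]

print(f"{len(solutions)} consistent configurations survive")
for s in compatible:
    print("strict regulation is compatible with:", s)
```

In this toy field, four of the eight raw combinations survive the consistency filter, and fixing 'strict regulation' leaves a single compatible configuration – the same narrowing-down that, at full scale, links scenario drivers to contingency and sensitivity questions.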
Scenario analysis
Scenario analysis is a process of analyzing future events by considering alternative possible outcomes (sometimes called "alternative worlds"). Scenario analysis, one of the main forms of projection, does not try to show one exact picture of the future; instead, it presents several alternative future developments, so that a range of possible future outcomes becomes observable, along with the development paths leading to each of them. In contrast to prognoses, scenario analysis is not based on extrapolation of the past or the extension of past trends. It does not rely on historical data and does not expect past observations to remain valid in the future. Instead, it tries to consider possible developments and turning points, which may only be loosely connected to the past. In short, several scenarios are fleshed out in a scenario analysis to show possible future outcomes. Each scenario normally combines optimistic, pessimistic, and more and less probable developments; however, all aspects of scenarios should be plausible. Although the exact number is debated, experience has shown that around three scenarios are most appropriate for further discussion and selection; more scenarios risk making the analysis overly complicated. Scenarios are often confused with other tools and approaches to planning; a flowchart process has been proposed for classifying a phenomenon as a scenario in the intuitive logics tradition.
Principle
Scenario-building is designed to allow improved decision-making by allowing deep consideration of outcomes and their implications.
A scenario is a tool used during requirements analysis to describe a specific use of a proposed system. Scenarios capture the system as viewed from the outside.
Scenario analysis can also be used to illuminate "wild cards." For example, analysis of the possibility of the earth being struck by a meteor suggests that whilst the probability is low, the damage inflicted is so high that the event is much more important (threatening) than the low probability (in any one year) alone would suggest. However, this possibility is usually disregarded by organizations using scenario analysis to develop a strategic plan since it has such overarching repercussions.
Combination of Delphi and scenarios
Scenario planning concerns planning based on the systematic examination of the future by picturing plausible and consistent images of that future. The Delphi method, by contrast, attempts to develop expert consensus about future developments and events systematically. It is a judgmental forecasting procedure taking the form of an anonymous, written, multi-stage survey process in which feedback of the group's opinion is provided after each round.
Numerous researchers have stressed that the two approaches are well suited to being combined. Because of their process similarity, the two methodologies can be easily integrated: the output of the different phases of the Delphi method can be used as input for the scenario method and vice versa, making it possible to realize the benefits of both tools. In practice, usually one of the two tools is treated as the dominant methodology and the other one is added on at some stage.
The variant most often found in practice is the integration of the Delphi method into the scenario process (see e.g. Rikkonen, 2005; von der Gracht, 2008). Authors refer to this type as Delphi scenarios (or Delphi-based scenario writing), expert-based scenarios, or Delphi-panel-derived scenarios; von der Gracht (2010) is a scientifically grounded example of the method. Since scenario planning is "information hungry", Delphi research can deliver valuable input for the process. There are various types of Delphi output that can be used as input for scenario planning. Researchers can, for example, identify relevant events or developments and, based on expert opinion, assign probabilities to them. Moreover, expert comments and arguments provide deeper insights into relationships between factors that can, in turn, be integrated into the scenarios afterwards. Delphi also helps to identify extreme opinions and dissent among the experts; such controversial topics are particularly suited for extreme scenarios or wildcards.
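One such hand-off can be sketched very simply – the data, event names and dissent threshold below are hypothetical, and this is an illustration of the idea rather than a prescribed method: final-round Delphi probability estimates are aggregated per event, and items on which the experts disagree strongly are flagged as wildcard or extreme-scenario candidates.

```python
# Aggregate hypothetical final-round Delphi estimates into scenario input.
from statistics import mean, stdev

# Each event maps to five experts' final-round probability estimates (0-1).
delphi_results = {
    "carbon tax introduced by 2030":       [0.70, 0.80, 0.60, 0.75, 0.70],
    "fusion power commercialised by 2035": [0.10, 0.05, 0.60, 0.10, 0.55],
}

DISSENT_THRESHOLD = 0.2  # illustrative cut-off for "experts disagree strongly"

for event, estimates in delphi_results.items():
    p, spread = mean(estimates), stdev(estimates)
    if spread > DISSENT_THRESHOLD:
        tag = "wildcard / extreme-scenario candidate"   # strong expert dissent
    else:
        tag = "core scenario driver"                    # broad expert agreement
    print(f"{event}: p={p:.2f}, spread={spread:.2f} -> {tag}")
```

Here the first event, with tightly clustered estimates, would feed the core scenarios, while the second, with experts split into two camps, is exactly the kind of controversial topic the literature suggests reserving for extreme scenarios or wildcards.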
In his doctoral thesis, Rikkonen (2005) examined the use of Delphi techniques in scenario planning and, concretely, in the construction of scenarios. He concludes that the Delphi technique has instrumental value in providing different alternative futures and in supporting the argumentation of scenarios. It is therefore recommended to use Delphi to make the scenarios more profound and to create confidence in scenario planning. Further benefits lie in the simplification of the scenario-writing process and in a deeper understanding of the interrelations between the forecast items and social factors.
Critique
While there is utility in weighting hypotheses and branching potential outcomes from them, reliance on scenario analysis without reporting some parameters of measurement accuracy (standard errors, confidence intervals of estimates, metadata, standardization and coding, weighting for non-response, error in reportage, sample design, case counts, etc.) is a poor second to traditional prediction. Especially in "complex" problems, factors and assumptions do not correlate in lockstep fashion, and if a specific sensitivity is left undefined, it may call the entire study into question.
It is faulty logic to think, when arbitrating results, that a better hypothesis will render empiricism unnecessary. In this respect, scenario analysis tries to defer statistical laws (e.g., Chebyshev's inequality), because the decision rules occur outside a constrained setting. Outcomes are not permitted to "just happen"; rather, they are forced to conform to arbitrary hypotheses ex post, and therefore there is no footing on which to place expected values. In truth, there are no ex ante expected values, only hypotheses, and one is left wondering about the roles of modeling and data decision. In short, comparisons of "scenarios" with outcomes are biased by not deferring to the data; this may be convenient, but it is indefensible.
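For reference, Chebyshev's inequality is the distribution-free bound invoked here: for any random variable with finite mean and variance, deviations from the mean are provably rare, regardless of the distribution's shape.

```latex
% Chebyshev's inequality: for any random variable X with finite mean \mu,
% finite standard deviation \sigma, and any k > 0,
P\left(\lvert X - \mu \rvert \ge k\sigma\right) \le \frac{1}{k^{2}}
```

The critique's point is that scenario hypotheses constrain outcomes ex post, so bounds of this kind, which presuppose that outcomes are free to vary, no longer have anything to bite on.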
"Scenario analysis" is no substitute for complete and factual exposure of survey error in economic studies. In traditional prediction, given the data used to model the problem, with a reasoned specification and technique, an analyst can state, within a certain percentage of statistical error, the likelihood of a coefficient being within a certain numerical bound. This exactitude need not come at the expense of very disaggregated statements of hypotheses. The R package WhatIf (in this context, see also MatchIt and Zelig) has been developed for causal inference and for evaluating counterfactuals. These programs have fairly sophisticated treatments for determining model dependence, in order to state with precision how sensitive the results are to models not based on empirical evidence.
Another challenge of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". As a consequence, societal predictions can become self-destructing. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue.
Critique of Shell's use of scenario planning
In the 1970s, many energy companies were surprised by both environmentalism and the OPEC cartel, and thereby lost billions of dollars of revenue through mis-investment. The dramatic financial effects of these changes led at least one organization, Royal Dutch Shell, to implement scenario planning; the company's analysts publicly estimated that this planning process made their company the largest in the world. However, other observers of Shell's use of scenario planning have suggested that few if any significant long-term business advantages accrued to Shell from the use of scenario methodology. While the intellectual robustness of Shell's long-term scenarios was seldom in doubt, their actual practical use was seen as minimal by many senior Shell executives. A Shell insider has commented: "The scenario team were bright and their work was of a very high intellectual level. However neither the high level 'Group scenarios' nor the country level scenarios produced with operating companies really made much difference when key decisions were being taken".
The use of scenarios was audited by Arie de Geus's team in the early 1980s, and they found that the decision-making processes following the scenarios, rather than the scenarios themselves, were the primary cause of the lack of strategic implementation. Many practitioners today spend as much time on the decision-making process as on creating the scenarios themselves.
See also
Decentralized planning (economics)
Hoshin planning (Hoshin Kanri)
Futures studies
Futures techniques
Global Scenario Group
Jim Dator (Hawaii Research Center for Futures Studies)
Resilience (organizational)
Robust decision-making
Scenario (computing)
Similar terminology
Feedback loop
System dynamics (also known as Stock and flow)
System thinking
Analogous concepts
Delphi method, including Real-time Delphi
Game theory
Horizon scanning
Morphological analysis
Rational choice theory
Stress testing
Twelve leverage points
Examples
Climate change mitigation scenarios – possible futures in which global warming is reduced by deliberate actions
Dynamic Analysis and Replanning Tool
Energy modeling – the process of building computer models of energy systems
Pentagon Papers
Additional Bibliography
D. Erasmus, The Future of ICT in Financial Services: The Rabobank ICT Scenarios (2008).
M. Godet, Scenarios and Strategic Management, Butterworths (1987).
M. Godet, From Anticipation to Action: A Handbook of Strategic Prospective. Paris: Unesco (1993).
Adam Kahane, Solving Tough Problems: An Open Way of Talking, Listening, and Creating New Realities (2007).
H. Kahn, The Year 2000, Calmann-Lévy (1967).
Herbert Meyer, Real World Intelligence, Weidenfeld & Nicolson (1987).
National Intelligence Council (NIC), Mapping the Global Future (2005).
M. Lindgren & H. Bandhold, Scenario Planning: The Link Between Future and Strategy, Palgrave Macmillan (2003).
G. Wright & G. Cairns, Scenario Thinking: Practical Approaches to the Future, Palgrave Macmillan (2011).
A. Schuehly, F. Becker & F. Klein, Real Time Strategy: When Strategic Foresight Meets Artificial Intelligence, Emerald (2020).
A. Ruser, "Sociological Quasi-Labs: The Case for Deductive Scenario Development", Current Sociology 63(2): 170–181, https://journals.sagepub.com/doi/pdf/10.1177/0011392114556581
Scientific journals
Foresight
Futures
Futures & Foresight Science
Journal of Futures Studies
Technological Forecasting and Social Change
External links
Wikifutures wiki; Scenario page—wiki also includes several scenarios (GFDL licensed)
ScenarioThinking.org —more than 100 scenarios developed on various global issues, on a wiki for public use
Shell Scenarios Resources—Resources on what scenarios are, Shell's new and old scenarios, explorer's guide and other scenario resources
Learn how to use Scenario Manager in Excel to do Scenario Analysis
Systems Innovation (SI) courseware
Further reading
"Learning from the Future: Competitive Foresight Scenarios", Liam Fahey and Robert M. Randall, Published by John Wiley and Sons, 1997, , Google book
"Shirt-sleeve approach to long-range plans.", Linneman, Robert E, Kennell, John D.; Harvard Business Review; Mar/Apr77, Vol. 55 Issue 2, p141
Heterotrophic nutrition
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive, because they cannot make their own food as green plants do. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi are saprotrophic, meaning they secrete enzymes extracellularly onto their food, breaking it down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except green plants and algae are unable to manufacture their own food: they obtain it from other organisms. This mode of nutrition is known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds capable of being absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic nutrition takes four main forms: holozoic, saprotrophic, parasitic, and symbiotic.
Citizen science
Citizen science (similar to community science, crowd science, crowd-sourced science, civic science, participatory monitoring, or volunteer monitoring) is research conducted with participation from the general public, or amateur/nonprofessional researchers or participants for science, social science and many other disciplines. There are variations in the exact definition of citizen science, with different individuals and organizations having their own specific interpretations of what citizen science encompasses. Citizen science is used in a wide range of areas of study including ecology, biology and conservation, health and medical research, astronomy, media and communications and information science.
There are different applications and functions of citizen science in research projects. Citizen science can be used as a methodology where public volunteers help in collecting and classifying data, improving the scientific community's capacity. Citizen science can also involve more direct involvement from the public, with communities initiating projects researching environment and health hazards in their own communities. Participation in citizen science projects also educates the public about the scientific process and increases awareness about different topics. Some schools have students participate in citizen science projects for this purpose as a part of the teaching curriculums.
Background
The first use of the term "citizen science" can be found in a January 1989 issue of MIT Technology Review, which featured three community-based labs studying environmental issues. In the 21st century, the number of citizen science projects, publications, and funding opportunities has increased. Citizen science has been used more over time, a trend helped by technological advancements. Digital citizen science platforms, such as Zooniverse, store large amounts of data for many projects and are a place where volunteers can learn how to contribute to projects. For some projects, participants are instructed to collect and enter data, such as what species they observed, into large digital global databases. For other projects, participants help classify data on digital platforms. Citizen science data is also being used to develop machine learning algorithms. An example is using volunteer-classified images to train machine learning algorithms to identify species. While global participation and global databases are found on online platforms, not all locations always have the same amount of data from contributors. Concerns over potential data quality issues, such as measurement errors and biases, in citizen science projects are recognized in the scientific community and there are statistical solutions and best practices available which can help.
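Schematically, the machine learning use case works as follows – the sketch below uses synthetic stand-in data (random vectors in place of real image embeddings, and invented species names), and is an illustration of the general pipeline, not any specific project's code: volunteer-supplied classifications serve directly as the supervision signal for a classifier.

```python
# Sketch: volunteer-classified records as training labels for a species model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a 16-dimensional embedding of a camera-trap image and
# each label is the species the volunteers agreed on (all hypothetical).
X = rng.normal(size=(600, 16))
species = rng.choice(["zebra", "wildebeest", "gazelle"], size=600)
X[species == "zebra", 0] += 2.0      # inject some signal so training succeeds
X[species == "gazelle", 1] -= 2.0

X_train, X_test, y_train, y_test = train_test_split(
    X, species, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # volunteer labels supervise the model
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")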
Definition
The term "citizen science" has multiple origins, as well as differing concepts. "Citizen" is used in the general sense, as meaning in "citizen of the world", or the general public, rather than the legal term citizen of sovereign countries. It was first defined independently in the mid-1990s by Rick Bonney in the United States and Alan Irwin in the United Kingdom. Alan Irwin, a British sociologist, defines citizen science as "developing concepts of scientific citizenship which foregrounds the necessity of opening up science and science policy processes to the public". Irwin sought to reclaim two dimensions of the relationship between citizens and science: 1) that science should be responsive to citizens' concerns and needs; and 2) that citizens themselves could produce reliable scientific knowledge. The American ornithologist Rick Bonney, unaware of Irwin's work, defined citizen science as projects in which nonscientists, such as amateur birdwatchers, voluntarily contributed scientific data. This describes a more limited role for citizens in scientific research than Irwin's conception of the term.
The terms citizen science and citizen scientists entered the Oxford English Dictionary (OED) in June 2014. "Citizen science" is defined as "scientific work undertaken by members of the general public, often in collaboration with or under the direction of professional scientists and scientific institutions". "Citizen scientist" is defined as: (a) "a scientist whose work is characterized by a sense of responsibility to serve the best interests of the wider community (now rare)"; or (b) "a member of the general public who engages in scientific work, often in collaboration with or under the direction of professional scientists and scientific institutions; an amateur scientist". The first use of the term "citizen scientist" can be found in the magazine New Scientist in an article about ufology from October 1979.
Muki Haklay cites, from a policy report for the Wilson Center entitled "Citizen Science and Policy: A European Perspective", an alternate first use of the term "citizen science" by R. Kerson in the magazine MIT Technology Review from January 1989. Quoting from the Wilson Center report: "The new form of engagement in science received the name 'citizen science'. The first recorded example of the use of the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist the Audubon Society in an acid-rain awareness raising campaign."
A Green Paper on Citizen Science was published in 2013 by the European Commission's Digital Science Unit and Socientize.eu, which included a definition for citizen science, referring to "the general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. Participants provide experimental data and facilities for researchers, raise new questions and co-create a new scientific culture."
Citizen science may be performed by individuals, teams, or networks of volunteers. Citizen scientists often partner with professional scientists to achieve common goals. Large volunteer networks often allow scientists to accomplish tasks that would be too expensive or time-consuming to accomplish through other means.
Many citizen-science projects serve education and outreach goals. These projects may be designed for a formal classroom environment or an informal education environment such as museums.
Citizen science has evolved over the past four decades. Recent projects place more emphasis on scientifically sound practices and measurable goals for public education. Modern citizen science differs from its historical forms primarily in the access for, and subsequent scale of, public participation; technology is credited as one of the main drivers of the recent explosion of citizen science activity.
In March 2015, the Office of Science and Technology Policy published a factsheet entitled "Empowering Students and Others through Citizen Science and Crowdsourcing". Quoting: "Citizen science and crowdsourcing projects are powerful tools for providing students with skills needed to excel in science, technology, engineering, and math (STEM). Volunteers in citizen science, for example, gain hands-on experience doing real science, and in many cases take that learning outside of the traditional classroom setting". The National Academies of Science cites SciStarter as a platform offering access to more than 2,700 citizen science projects and events, as well as helping interested parties access tools that facilitate project participation.
In May 2016, a new open-access journal was started by the Citizen Science Association along with Ubiquity Press called Citizen Science: Theory and Practice (CS:T&P). Quoting from the editorial article titled "The Theory and Practice of Citizen Science: Launching a New Journal", "CS:T&P provides the space to enhance the quality and impact of citizen science efforts by deeply exploring the citizen science concept in all its forms and across disciplines. By examining, critiquing, and sharing findings across a variety of citizen science endeavors, we can dig into the underpinnings and assumptions of citizen science and critically analyze its practice and outcomes."
In February 2020, Timber Press, an imprint of Workman Publishing Company, published The Field Guide to Citizen Science as a practical guide for anyone interested in getting started with citizen science.
Alternative definitions
Other definitions for citizen science have also been proposed. For example, Bruce Lewenstein of Cornell University's Communication and S&TS departments describes three possible definitions:
The participation of nonscientists in the process of gathering data according to specific scientific protocols and in the process of using and interpreting that data.
The engagement of nonscientists in true decision-making about policy issues that have technical or scientific components.
The engagement of research scientists in the democratic and policy process.
Scientists and scholars who have used other definitions include Frank N. von Hippel, Stephen Schneider, Neal Lane and Jon Beckwith. Other alternative terminologies proposed are "civic science" and "civic scientist".
Further, Muki Haklay offers an overview of the typologies of the level of citizen participation in citizen science, which range from "crowdsourcing" (level 1), where the citizen acts as a sensor; to "distributed intelligence" (level 2), where the citizen acts as a basic interpreter; to "participatory science" (level 3), where citizens contribute to problem definition and data collection; to "extreme citizen science" (level 4), which involves collaboration between citizens and scientists in problem definition, data collection and analysis.
A 2014 Mashable article defines a citizen scientist as: "Anybody who voluntarily contributes his or her time and resources toward scientific research in partnership with professional scientists."
In 2016, the Australian Citizen Science Association released their definition, which states "Citizen science involves public participation and collaboration in scientific research with the aim to increase scientific knowledge."
In 2020, a group of birders in the Pacific Northwest of North America, eBird Northwest, sought to rename "citizen science" to "community science", "largely to avoid using the word 'citizen' when we want to be inclusive and welcoming to any birder or person who wants to learn more about bird watching, regardless of their citizen status."
Related fields
In the smart city era, citizen science relies on various web-based tools, such as WebGIS, and becomes cyber citizen science. Some projects, such as SETI@home, use the Internet to take advantage of distributed computing. These projects are generally passive: computation tasks are performed by volunteers' computers and require little involvement beyond initial setup. There is disagreement as to whether these projects should be classified as citizen science.
The astrophysicist and Galaxy Zoo co-founder Kevin Schawinski stated: "We prefer to call this [Galaxy Zoo] citizen science because it's a better description of what you're doing; you're a regular citizen but you're doing science. Crowd sourcing sounds a bit like, well, you're just a member of the crowd and you're not; you're our collaborator. You're pro-actively involved in the process of science by participating."
Compared to SETI@home, "Galaxy Zoo volunteers do real work. They're not just passively running something on their computer and hoping that they'll be the first person to find aliens. They have a stake in science that comes out of it, which means that they are now interested in what we do with it, and what we find."
Citizen policy may be another result of citizen science initiatives. Bethany Brookshire (pen name SciCurious) writes: "If citizens are going to live with the benefits or potential consequences of science (as the vast majority of them will), it's incredibly important to make sure that they are not only well informed about changes and advances in science and technology, but that they also ... are able to ... influence the science policy decisions that could impact their lives." In "The Rightful Place of Science: Citizen Science", editors Darlene Cavalier and Eric Kennedy highlight emerging connections between citizen science, civic science, and participatory technology assessment.
Benefits and limitations
The general public's involvement in scientific projects has become a means of encouraging curiosity and greater understanding of science while providing an unprecedented engagement between professional scientists and the general public. In a research report published by the U.S. National Park Service in 2008, Brett Amy Thelen and Rachel K. Thiet mention the following concerns, previously reported in the literature, about the validity of volunteer-generated data:
Some projects may not be suitable for volunteers, for instance, when they use complex research methods or require a great deal of (often repetitive) work.
If volunteers lack proper training in research and monitoring protocols, the data they collect might introduce bias into the dataset.
The question of data accuracy, in particular, remains open. John Losey, who created the Lost Ladybug citizen science project, has argued that the cost-effectiveness of citizen science data can outweigh data quality issues, if properly managed.
In December 2016, authors M. Kosmala, A. Wiggins, A. Swanson and B. Simmons published a study in the journal Frontiers in Ecology and the Environment called "Assessing Data Quality in Citizen Science". The abstract describes how ecological and environmental citizen science projects have enormous potential to advance science: they can influence policy and guide resource management by producing datasets that are otherwise not feasible to generate. In the section "In a Nutshell" (p. 3), four condensed conclusions are stated.
They conclude that as citizen science continues to grow and mature, they expect a growing awareness of data quality to become a key metric of project success. They also conclude that citizen science will emerge as a general tool helping "to collect otherwise unobtainable high-quality data in support of policy and resource management, conservation monitoring, and basic science."
A study of Canadian lepidoptera datasets published in 2018 compared the use of a professionally curated dataset of butterfly specimen records with four years of data from a citizen science program, eButterfly. The eButterfly dataset was used as it was determined to be of high quality because of the expert vetting process used on site, and there already existed a dataset covering the same geographic area consisting of specimen data, much of it institutional. The authors note that, in this case, citizen science data provides both novel and complementary information to the specimen data. Five new species were reported from the citizen science data, and geographic distribution information was improved for over 80% of species in the combined dataset when citizen science data was included.
Several recent studies have begun to explore the accuracy of citizen science projects and how to predict accuracy from variables such as the expertise of practitioners. One example is a 2021 study by Edgar Santos-Fernandez and Kerrie Mengersen, published by the British Ecological Society, which used R and Stan software to rate the accuracy of species identifications performed by citizen scientists in Serengeti National Park, Tanzania. This provided insight into possible problems with such processes, including "discriminatory power and guessing behaviour". The researchers determined that rating the citizen scientists themselves on skill level and expertise might make the studies they contribute to easier to analyze.
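A much-simplified sketch of the underlying idea follows – the names and data are invented, and the study itself used a hierarchical R/Stan model rather than this toy: volunteers are scored against an expert-verified "gold" subset, and their votes on unverified records are then weighted by that score.

```python
# Toy accuracy-weighted voting over volunteer species identifications.
from collections import defaultdict

gold = {"img1": "lion", "img2": "hyena"}           # expert-verified subset
votes = {                                           # volunteer -> image -> label
    "ann": {"img1": "lion",  "img2": "hyena", "img3": "lion"},
    "bob": {"img1": "lion",  "img2": "lion",  "img3": "hyena"},
    "eve": {"img1": "hyena", "img2": "hyena", "img3": "lion"},
}

# Per-volunteer accuracy on the gold subset (a crude proxy for skill).
accuracy = {
    v: sum(labels.get(i) == s for i, s in gold.items()) / len(gold)
    for v, labels in votes.items()
}

def weighted_label(image):
    """Resolve an unverified image by accuracy-weighted majority vote."""
    scores = defaultdict(float)
    for v, labels in votes.items():
        if image in labels:
            scores[labels[image]] += accuracy[v]
    return max(scores, key=scores.get)

print(accuracy)                # {'ann': 1.0, 'bob': 0.5, 'eve': 0.5}
print(weighted_label("img3"))  # 'lion' -- ann and eve together outweigh bob
```

Even this crude weighting illustrates the study's broader point: once volunteers' skill is estimated, their collective classifications become easier to analyze and more reliable.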
Studies that are simple in execution are where citizen science excels, particularly in the fields of conservation biology and ecology. For example, in 2019, Sumner et al. compared data on vespid wasp distributions collected by citizen scientists with the four-decade, long-term dataset established by BWARS. They set up the Big Wasp Survey from 26 August to 10 September 2017, inviting citizen scientists to trap wasps and send them for expert identification and recording. The campaign attracted over 2,000 participating citizen scientists, who collected over 6,600 wasps for identification. This study provides strong evidence that citizen science can generate potentially high-quality data comparable to that of expert data collection, within a shorter time frame. Although the experiment was originally designed to test the strength of citizen science, the team also learned more about vespid biology and species distribution in the United Kingdom; the simple procedure enabled the citizen science to be executed successfully. A study by J. Cohn describes how volunteers can be trained to use equipment and process data, especially considering that a large proportion of citizen scientists are individuals already well versed in science.
The demographics of participants in citizen science projects are overwhelmingly white adults of above-average income with a university degree. Other groups of volunteers include conservationists, outdoor enthusiasts, and amateur scientists. As such, citizen scientists are generally individuals with a pre-existing understanding of the scientific method and of how to conduct sound scientific analysis.
Ethics
Various studies have been published that explore the ethics of citizen science, including issues such as intellectual property and project design. The Citizen Science Association (CSA), based at the Cornell Lab of Ornithology, and the European Citizen Science Association (ECSA), based at the Museum für Naturkunde in Berlin, have working groups on ethics and principles.
In September 2015, ECSA published its Ten Principles of Citizen Science, which have been developed by the "Sharing best practice and building capacity" working group of ECSA, led by the Natural History Museum, London with input from many members of the association.
The medical ethics of internet crowdsourcing has been questioned by Graber & Graber in the Journal of Medical Ethics. In particular, they analyse the effect of games and the crowdsourcing project Foldit. They conclude: "games can have possible adverse effects, and that they manipulate the user into participation".
In March 2019, the online journal Citizen Science: Theory and Practice launched a collection of articles on the theme of Ethical Issues in Citizen Science. The articles are introduced with (quoting): "Citizen science can challenge existing ethical norms because it falls outside of customary methods of ensuring that research is conducted ethically. What ethical issues arise when engaging the public in research? How have these issues been addressed, and how should they be addressed in the future?"
In June 2019, East Asian Science, Technology and Society: An International Journal (EASTS) published an issue titled "Citizen Science: Practices and Problems" which contains 15 articles/studies on citizen science, including many relevant subjects of which ethics is one. Quoting from the introduction "Citizen, Science, and Citizen Science": "The term citizen science has become very popular among scholars as well as the general public, and, given its growing presence in East Asia, it is perhaps not a moment too soon to have a special issue of EASTS on the topic."
The use of citizen science volunteers as de facto unpaid laborers by some commercial ventures has been criticized as exploitative.
Ethics in citizen science in the health and welfare field has been discussed in terms of protection versus participation. Public involvement researcher Kristin Liabo writes that health researchers might, in light of their ethics training, be inclined to exclude vulnerable individuals from participation in order to protect them from harm. However, she argues that these groups are already likely to be excluded from participation in other arenas, and that participation can be empowering and an opportunity to gain life skills that these individuals need. Whether or not to become involved should be a decision these individuals are involved in, not a researcher's decision alone.
Economic worth
In the research paper "Can citizen science enhance public understanding of science?" by Bonney et al. (2016), statistics analysing the economic worth of citizen science are drawn from two papers: (i) Sauermann and Franzoni (2015) and (ii) Theobald et al. (2015). In "Crowd science user contribution patterns and their implications", Sauermann and Franzoni (2015) use seven projects from the Zooniverse web portal to estimate the monetary value of the citizen science that had taken place. The seven projects are Solar Stormwatch, Galaxy Zoo Supernovae, Galaxy Zoo Hubble, Moon Zoo, Old Weather, The Milky Way Project and Planet Hunters. Using data from 180 days in 2010, they find that a total of 100,386 users participated, contributing 129,540 hours of unpaid work. Valued at a rate of $12 an hour (an undergraduate research assistant's basic wage), the total contribution amounts to $1,554,474, an average of $222,068 per project; the range over the seven projects was from $22,717 to $654,130.
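As a back-of-the-envelope check of these figures (a quick sketch, not part of the cited study):

```python
# Recompute the Sauermann & Franzoni (2015) valuation reported above.
users, hours, rate_usd, projects = 100_386, 129_540, 12, 7

print(f"hours x rate: ${hours * rate_usd:,}")
# -> $1,554,480, close to the reported $1,554,474; the small gap
#    presumably reflects rounding in the reported hours figure.

print(f"per project:  ${1_554_474 / projects:,.0f}")
# -> $222,068 per project, matching the reported average.
```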
In "Global change and local solutions: Tapping the unrealized potential of citizen science for biodiversity research" by Theobald et al. 2015, the authors surveyed 388 unique biodiversity-based projects. Quoting: "We estimate that between 1.36 million and 2.28 million people volunteer annually in the 388 projects we surveyed, though variation is great" and that "the range of in-kind contribution of the volunteerism in our 388 citizen science projects as between $667 million to $2.5 billion annually."
Worldwide participation in citizen science continues to grow. A list of the top five citizen science communities compiled by Marc Kuchner and Kristen Erickson in July 2018 shows a total of 3.75 million participants, although there is likely substantial overlap between the communities.
Relations with education and academia
There have been studies published which examine the place of citizen science within education. Teaching aids can include books and activity or lesson plans. Some examples of studies are:
From the Second International Handbook of Science Education, a chapter entitled: "Citizen Science, Ecojustice, and Science Education: Rethinking an Education from Nowhere", by Mueller and Tippins (2011), acknowledges in the abstract that: "There is an emerging emphasis in science education on engaging youth in citizen science." The authors also ask: "whether citizen science goes further with respect to citizen development." The abstract ends by stating that the "chapter takes account of the ways educators will collaborate with members of the community to effectively guide decisions, which offers promise for sharing a responsibility for democratizing science with others."
From the journal Democracy and Education, an article entitled "Lessons Learned from Citizen Science in the Classroom" by Gray, Nicosia and Jordan (GNJ; 2012) responds to a study by Mueller, Tippins and Bryan (MTB) called "The Future of Citizen Science". GNJ begin by stating in the abstract that "The Future of Citizen Science" "provides an important theoretical perspective about the future of democratized science and K12 education." However, GNJ state that the authors (MTB) "fail to adequately address the existing barriers and constraints to moving community-based science into the classroom." They end the abstract by arguing "that the resource constraints of scientists, teachers, and students likely pose problems to moving true democratized science into the classroom."
In 2014, a study was published called "Citizen Science and Lifelong Learning" by R. Edwards in the journal Studies in the Education of Adults. Edwards begins by writing in the abstract that citizen science projects have expanded over recent years and engaged citizen scientists and professionals in diverse ways. He continues: "Yet there has been little educational exploration of such projects to date." He describes that "there has been limited exploration of the educational backgrounds of adult contributors to citizen science". Edwards explains that citizen science contributors are referred to as volunteers, citizens or as amateurs. He ends the abstract: "The article will explore the nature and significance of these different characterisations and also suggest possibilities for further research."
In the Journal of Microbiology & Biology Education, Shah and Martinez (2015) published a study called "Current Approaches in Implementing Citizen Science in the Classroom". They begin by writing in the abstract that citizen science is a partnership between inexperienced amateurs and trained scientists. The authors continue: "With recent studies showing a weakening in scientific competency of American students, incorporating citizen science initiatives in the curriculum provides a means to address deficiencies". They argue that combining traditional and innovative methods can help provide a practical experience of science. The abstract ends: "Citizen science can be used to emphasize the recognition and use of systematic approaches to solve problems affecting the community."
In November 2017, authors Mitchell, Triska and Liberatore published a study in PLOS One titled "Benefits and Challenges of Incorporating Citizen Science into University Education". The authors begin by stating in the abstract that citizen scientists contribute data with the expectation that it will be used. It reports that citizen science has been used for first year university students as a means to experience research. They continue: "Surveys of more than 1500 students showed that their environmental engagement increased significantly after participating in data collection and data analysis." However, only a third of students agreed that data collected by citizen scientists was reliable. A positive outcome of this was that the students were more careful of their own research. The abstract ends: "If true for citizen scientists in general, enabling participants as well as scientists to analyse data could enhance data quality, and so address a key constraint of broad-scale citizen science programs."
Citizen science has also been described as challenging the "traditional hierarchies and structures of knowledge creation".
History
While citizen science developed at the end of the 20th century, its characteristics are not new. Prior to the 20th century, science was often the pursuit of gentleman scientists: amateur or self-funded researchers such as Sir Isaac Newton, Benjamin Franklin, and Charles Darwin. Women citizen scientists from before the 20th century include Florence Nightingale, who "perhaps better embodies the radical spirit of citizen science". Before the professionalization of science at the end of the 19th century, most people pursued scientific projects as an activity rather than a profession, an example being the amateur naturalists of the 18th and 19th centuries.
During the British colonization of North America, American Colonists recorded the weather, offering much of the information now used to estimate climate data and climate change during this time period. These people included John Campanius Holm, who recorded storms in the mid-1600s, as well as George Washington, Thomas Jefferson, and Benjamin Franklin who tracked weather patterns during America's founding. Their work focused on identifying patterns by amassing their data and that of their peers and predecessors, rather than specific professional knowledge in scientific fields. Some consider these individuals to be the first citizen scientists, some consider figures such as Leonardo da Vinci and Charles Darwin to be citizen scientists, while others feel that citizen science is a distinct movement that developed later on, building on the preceding history of science.
By the mid-20th century, however, science was dominated by researchers employed by universities and government research laboratories. By the 1970s, this transformation was being called into question. Philosopher Paul Feyerabend called for a "democratization of science". Biochemist Erwin Chargaff advocated a return to science by nature-loving amateurs in the tradition of Descartes, Newton, Leibniz, Buffon, and Darwin—science dominated by "amateurship instead of money-biased technical bureaucrats".
A study from 2016 indicates that the largest impact of citizen science is in research on biology, conservation and ecology, and is utilized mainly as a methodology of collecting and classifying data.
Amateur astronomy
Astronomy has long been a field to which amateurs have contributed, from early history up to the present day.
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Observations of comets and stars are also used to measure the local level of artificial skyglow. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events that interest them.
The American Association of Variable Star Observers has gathered data on variable stars for educational and professional analysis since 1911 and promotes participation beyond its membership on its Citizen Sky website.
Project PoSSUM is a relatively new organization, started in March 2012, which trains citizen scientists of many ages to take part in polar suborbital missions. On these missions, they study noctilucent clouds with remote sensing, which reveals clues about changes in the upper atmosphere and the ozone layer due to climate change. This is a form of citizen science which trains younger generations to participate in ambitious astronomy and climate change research even without a professional degree.
Butterfly counts
Butterfly counts have a long tradition of involving individuals in the study of butterflies' range and their relative abundance. Two long-running programs are the UK Butterfly Monitoring Scheme (started in 1976) and the North American Butterfly Association's Butterfly Count Program (started in 1975). There are various protocols for monitoring butterflies and different organizations support one or more of transects, counts and/or opportunistic sightings. eButterfly is an example of a program designed to capture any of the three types of counts for observers in North America. Species-specific programs also exist, with monarchs the prominent example. Two examples of this involve the counting of monarch butterflies during the fall migration to overwintering sites in Mexico: (1) Monarch Watch is a continent-wide project, while (2) the Cape May Monarch Monitoring Project is an example of a local project. The Austrian project Viel-Falter investigated if and how trained and supervised pupils are able to systematically collect data about the occurrence of diurnal butterflies, and how this data could contribute to a permanent butterfly monitoring system. Despite substantial identification uncertainties for some species or species groups, the data collected by pupils was successfully used to predict the general habitat quality for butterflies.
Ornithology
Citizen science projects have become increasingly focused on providing benefits to scientific research. The North American Bird Phenology Program (historically called the Bird Migration and Distribution records) may have been the earliest collective effort of citizens collecting ornithological information in the U.S. The program, dating back to 1883, was started by Wells Woodbridge Cooke, who established a network of observers around North America to collect bird migration records; it now contains a collection of six million handwritten migration observer cards dating back to the 19th century, which participants transcribe into an online database for analysis. The Audubon Society's Christmas Bird Count, which began in 1900, is another example of a long-standing citizen science tradition that has persisted to the present day. Citizen scientists help gather data that will be analyzed by professional researchers and can be used to produce bird population and biodiversity indicators.
Raptor migration research relies on the data collected by the hawkwatching community. This mostly volunteer group counts migrating accipiters, buteos, falcons, harriers, kites, eagles, osprey, vultures and other raptors at hawk sites throughout North America during the spring and fall seasons. The daily data is uploaded to hawkcount.org where it can be viewed by professional scientists and the public.
Other programs in North America include Project FeederWatch, which is affiliated with the Cornell Lab of Ornithology.
Such indices can be useful tools to inform management, resource allocation, policy and planning. For example, European breeding bird survey data provide input for the Farmland Bird Index, adopted by the European Union as a structural indicator of sustainable development. This provides a cost-effective alternative to government monitoring.
Similarly, data collected by citizen scientists as part of BirdLife Australia's surveys has been analysed to produce the first-ever Australian Terrestrial Bird Indices.
In the UK, the Royal Society for the Protection of Birds collaborated with a children’s TV show to create a national birdwatching day in 1979; the campaign has continued for over 40 years and in 2024, over 600,000 people counted almost 10 million birds during the Big Garden Birdwatch weekend.
Most recently, more programs have sprung up worldwide, including NestWatch, a bird-monitoring program which tracks data on reproduction: for example, when and how often nesting occurs, how many eggs are laid and how many hatch successfully, and what proportion of hatchlings survive. The program is easy for the general public to join: using the NestWatch app, which is available on almost all devices, anyone can begin to observe their local species, recording results every three to four days within the app. This forms a continually growing database which researchers can view and use to understand trends within specific bird populations.
Citizen oceanography
The concept of citizen science has been extended to the ocean environment for characterizing ocean dynamics and tracking marine debris. For example, the mobile app Marine Debris Tracker is a joint partnership of the National Oceanic and Atmospheric Administration and the University of Georgia. Among long-term sampling efforts, the Continuous Plankton Recorder has been towed by ships of opportunity since 1931. Plankton collection by sailors with subsequent genetic analysis was pioneered in 2013 by Indigo V Expeditions as a way to better understand marine microbial structure and function.
Coral reefs
Citizen science in coral reef studies developed in the 21st century.
Underwater photography has become more popular since the development of moderately priced digital cameras with waterproof housings in the early 2000s, resulting in millions of pictures posted every year on various websites and social media. This mass of documentation has great scientific potential, as millions of tourists provide far greater coverage than professional scientists, who cannot spend so much time in the field.
As a consequence, several participatory science programs have been developed, supported by geotagging and identification websites such as iNaturalist. The Monitoring Through Many Eyes project collates thousands of underwater images of the Great Barrier Reef and provides an interface for the elicitation of reef health indicators.
The National Oceanic and Atmospheric Administration (NOAA) also offers opportunities for volunteer participation: by taking measurements in the United States' national marine sanctuaries, citizens contribute data to marine biology projects. In 2016, NOAA benefited from 137,000 hours of volunteer research.
There are also protocols for self-organization and self-teaching aimed at biodiversity-interested snorkelers, enabling them to turn their observations into sound scientific data, available for research. This kind of approach has been used successfully on Réunion Island, yielding dozens of new records and even new species.
Freshwater fish
Aquarium hobbyists and their respective organizations are very passionate about fish conservation and often more knowledgeable about specific fish species and groups than scientific researchers. They have played an important role in the conservation of freshwater fishes by discovering new species, maintaining extensive databases with ecological information on thousands of species (such as for catfish, Mexican freshwater fishes, killifishes, cichlids), and successfully keeping and providing endangered and extinct-in-the-wild species for conservation projects. The CARES (Conservation, Awareness, Recognition, Encouragement, and Support) preservation program is the largest hobbyist organization containing over 30 aquarium societies and international organizations, and encourages serious aquarium hobbyists to devote tank space to the most threatened or extinct-in-the-wild species to ensure their survival for future generations.
Amphibians
Citizen scientists also work to monitor and conserve amphibian populations. One recent project is FrogWatch USA, organized by the Association of Zoos and Aquariums. Participants are invited to educate themselves about their local wetlands and to help conserve amphibian populations by reporting data on the calls of local frogs and toads. The project already has over 150,000 observations from more than 5,000 contributors. Participants are trained by program coordinators to identify calls and use this training to report the data they gather between February and August of each "monitoring season". The data are used to monitor diversity, invasion, and long-term shifts in population health within these frog and toad communities.
Rocky reefs
Reef Life Survey is a marine life monitoring programme based in Hobart, Tasmania. The project uses recreational divers who have been trained to make fish and invertebrate counts along approximately 50 m constant-depth transects of tropical and temperate reefs, which may include coral reefs. Reef Life Survey is international in scope, but the data collectors are predominantly from Australia. The database is available to marine ecology researchers and is used by several marine protected area managements in Australia, New Zealand, American Samoa and the eastern Pacific. Its results have also been included in the Australian Ocean Data Network.
Agriculture
Farmer participation in experiments has a long tradition in agricultural science. There are many opportunities for citizen engagement in different parts of food systems. Citizen science is actively used for crop variety selection for climate adaptation, involving thousands of farmers. Citizen science has also played a role in furthering sustainable agriculture.
Art history
Citizen science has a long tradition in natural science. Today, citizen science projects can also be found in various fields of science like art history. For example, the Zooniverse project AnnoTate is a transcription tool developed to enable volunteers to read and transcribe the personal papers of British-born and émigré artists. The papers are drawn from the Tate Archive. Another example of citizen science in art history is ARTigo. ARTigo collects semantic data on artworks from the footprints left by players of games featuring artwork images. From these footprints, ARTigo automatically builds a semantic search engine for artworks.
Biodiversity
Citizen science has made significant contributions to the analysis of biodiversity across the world. The majority of data collected has focused on species occurrence, abundance and phenology, with birds being the most popular group observed, and there are growing efforts to expand the use of citizen science to other fields. Past biodiversity data were too limited in quantity to support meaningful broad conclusions about losses in biodiversity; recruiting citizens already out in the field opens up a tremendous amount of new data. For example, thousands of farmers reporting the changes in biodiversity on their farms over many years have provided a large amount of relevant data concerning the effect of different farming methods on biodiversity. Another example is WomSAT, a citizen science project that collects data on wombat roadkill and on the incidence and distribution of sarcoptic mange, to support conservation efforts for the species.
Citizen science can be used to great effect alongside the usual scientific methods in biodiversity monitoring. Typical active methods of species detection can collect data on the broad biodiversity of areas, while citizen science approaches have been shown to be more effective at identifying invasive species. In combination, this provides an effective strategy for monitoring changes in the biodiversity of ecosystems.
Health and welfare
In the research fields of health and welfare, citizen science is often discussed in other terms, such as "public involvement", "user engagement", or "community member involvement". However, the meaning is similar to citizen science, with the exception that citizens are less often involved in collecting data and more often involved in prioritising research ideas and improving methodology, e.g. refining survey questions. In recent decades, researchers and funders have become more aware of the benefits of involving citizens in research work, but involving citizens in a meaningful way is still not common practice. There is an ongoing discussion on how to evaluate citizen science in health and welfare research.
One aspect to consider in citizen science in health and welfare, which stands out compared to other academic fields, is whom to involve. When research concerns human experiences, representation of a group becomes important. While it is commonly acknowledged that the people involved need to have lived experience of the topic concerned, representation remains an issue, and researchers are debating whether it is a useful concept in citizen science.
Modern technology
Newer technologies have increased the options for citizen science. Citizen scientists can build and operate their own instruments to gather data for their own experiments or as part of a larger project. Examples include amateur radio, amateur astronomy, Six Sigma projects, and Maker activities. Scientist Joshua Pearce has advocated for the creation of open-source-hardware-based scientific equipment that both citizen scientists and professional scientists can use, and that can be replicated with digital manufacturing techniques such as 3D printing. Multiple studies have shown that this approach radically reduces scientific equipment costs. Examples include equipment for water testing, nitrate and other environmental testing, basic biology, and optics. Groups such as Public Lab, a community where citizen scientists can learn how to investigate environmental concerns using inexpensive DIY techniques, embody this approach.
Video technology is widely used in scientific research. The Citizen Science Center in the Nature Research Center wing of the North Carolina Museum of Natural Sciences has exhibits on how to get involved in scientific research and become a citizen scientist. For example, visitors can observe birdfeeders at the Prairie Ridge Ecostation satellite facility via live video feed and record which species they see.
Since 2005, the Genographic Project has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of "citizen scientist". Geno 2.0 expands the scope for citizen science, harnessing the power of the crowd to discover new details of human population history. This includes supporting the organization and dissemination of personal DNA testing. As in amateur astronomy, citizen scientists encouraged by volunteer organizations such as the International Society of Genetic Genealogy have provided valuable information and research to the professional scientific community.
Unmanned aerial vehicles enable further citizen science. One example is ESA's AstroDrone smartphone app for gathering robotic data with the Parrot AR.Drone.
Citizens in Space (CIS), a project of the United States Rocket Academy, seeks to combine citizen science with citizen space exploration. CIS is training citizen astronauts to fly as payload operators on suborbital reusable spacecraft that are now in development. CIS will also be developing, and encouraging others to develop, citizen-science payloads to fly on suborbital vehicles. CIS has already acquired a contract for 10 flights on the Lynx suborbital vehicle, being developed by XCOR Aerospace, and plans to acquire additional flights on XCOR Lynx and other suborbital vehicles in the future.
CIS believes that "The development of low-cost reusable suborbital spacecraft will be the next great enabler, allowing citizens to participate in space exploration and space science."
The website CitizenScience.gov was started by the U.S. government to "accelerate the use of crowdsourcing and citizen science" in the United States. Following the rapid growth of internet-based citizen science projects, the site has become one of the most prominent resource banks for citizen scientists and government supporters alike. It features three sections: a catalog of existing federally supported citizen science projects, a toolkit to help federal officials develop and maintain their own projects, and a collection of other resources and projects. It was created as the result of a mandate within the Crowdsourcing and Citizen Science Act of 2016 (15 USC 3724).
Internet
The Internet has been a boon to citizen science, particularly through gamification. One of the first Internet-based citizen science experiments was NASA's Clickworkers, which enabled the general public to assist in the classification of images, greatly reducing the time to analyze large data sets. Another was the Citizen Science Toolbox, launched in 2003 by the Australian Coastal Collaborative Research Centre. Mozak is a game in which players create 3D reconstructions from images of actual human and mouse neurons, helping to advance understanding of the brain. One of the largest citizen science games is Eyewire, a brain-mapping puzzle game developed at the Massachusetts Institute of Technology that now has over 200,000 players. Another example is Quantum Moves, a game developed by the Center for Community Driven Research at Aarhus University, which uses online community efforts to solve quantum physics problems. The solutions found by players can then be used in the lab to feed computational algorithms used in building a scalable quantum computer.
More generally, Amazon's Mechanical Turk is frequently used in the creation, collection, and processing of data by paid citizens. There is controversy as to whether or not the data collected through such services is reliable, as it is subject to participants' desire for compensation. However, use of Mechanical Turk tends to quickly produce more diverse participant backgrounds, as well as comparably accurate data when compared to traditional collection methods.
The internet has also enabled citizen scientists to gather data to be analyzed by professional researchers. Citizen science networks are often involved in the observation of cyclic events of nature (phenology), such as the effects of global warming on plant and animal life in different geographic areas, and in monitoring programs for natural-resource management. On BugGuide.Net, an online community of naturalists who share observations of arthropods, both amateurs and professional researchers contribute to the analysis. By October 2022, BugGuide had over 1,886,513 images submitted by 47,732 contributors.
Not counting iNaturalist and eBird, the Zooniverse is home to the internet's largest, most popular and most successful citizen science projects. The Zooniverse and the suite of projects it contains are produced, maintained and developed by the Citizen Science Alliance (CSA). The member institutions of the CSA work with many academic and other partners around the world to produce projects that use the efforts and ability of volunteers to help scientists and researchers deal with the flood of data that confronts them. On 29 June 2015, the Zooniverse released a new software version with a project-building tool allowing any registered user to create a project. Project owners may optionally complete an approval process to have their projects listed on the Zooniverse site and promoted to the Zooniverse community. The Milky Way Project is one example of a Zooniverse project.
The website CosmoQuest has as its goal "To create a community of people bent on together advancing our understanding of the universe; a community of people who are participating in doing science, who can explain why what they do matters, and what questions they are helping to answer."
CrowdCrafting enables its participants to create and run projects where volunteers help with image classification, transcription, geocoding and more. The platform is powered by PyBossa software, a free and open-source framework for crowdsourcing.
Project Soothe is a citizen science research project based at the University of Edinburgh. The aim of this research is to create a bank of soothing images, submitted by members of the public, which can be used to help others through psychotherapy and research in the future. Since 2015, Project Soothe has received over 600 soothing photographs from people in 23 countries. Anyone aged 12 years or over is eligible to participate in this research in two ways: (1) By submitting soothing photos that they have taken with a description of why the images make them feel soothed (2) By rating the photos that have been submitted by people worldwide for their soothability.
The internet has allowed many individuals to share and upload massive amounts of data. Citizen observatories have been designed as internet platforms to increase both citizen participation and citizens' knowledge of their surrounding environment by collecting whatever data the program focuses on. The idea is to make it easier and more engaging for citizens to get, and stay, involved in local data collection.
The advent of social media has helped the public provide massive amounts of information to citizen science programs. In a case study by Andrea Liberatore, Erin Bowkett, Catriona J. MacLeod, Eric Spurr, and Nancy Longnecker, the New Zealand Garden Bird Survey is examined as one such project conducted with the aid of social media. The study looks at the influence of using a Facebook group to collect data from citizen scientists over the span of a year. The authors report that this use of social media greatly improved the efficiency of the study and made the atmosphere feel more communal.
Smartphone
The bandwidth and ubiquity afforded by smartphones has vastly expanded the opportunities for citizen science. Examples include the San Francisco-based iNaturalist, the WildLab, Project Noah, and Aurorasaurus. Thanks to their ubiquity, Twitter, Facebook, and smartphones have been useful to citizen scientists, enabling them to discover and document a new type of aurora, dubbed "STEVE", in 2016.
There are also apps for monitoring birds, marine wildlife and other organisms, and the "Loss of the Night".
"The Crowd and the Cloud" is a four-part series broadcast during April 2017, which examines citizen science. It shows how smartphones, computers and mobile technology enable regular citizens to become part of a 21st-century way of doing science. The programs also demonstrate how citizen scientists help professional scientists to advance knowledge, which helps speed up new discoveries and innovations. The Crowd & The Cloud is based upon work supported by the U.S. National Science Foundation.
Seismology
Since 1975, in order to improve earthquake detection and collect useful information, the European-Mediterranean Seismological Centre has monitored the visits of earthquake eyewitnesses to its website and relied on Facebook and Twitter. More recently, it developed the LastQuake mobile application, which notifies users about earthquakes occurring around the world, alerts people when earthquakes strike near them, and gathers citizen seismologists' testimonies to estimate the felt ground shaking and possible damage.
Hydrology
Citizen science has been used to provide valuable data in hydrology (catchment science), notably flood risk, water quality, and water resource management. A growth in internet use and smartphone ownership has allowed users to collect and share real-time flood-risk information using, for example, social media and web-based forms. Although traditional data collection methods are well-established, citizen science is being used to fill the data gaps on a local level, and is therefore meaningful to individual communities. Data collected from citizen science can also compare well to professionally collected data. It has been demonstrated that citizen science is particularly advantageous during a flash flood because the public are more likely to witness these rarer hydrological events than scientists.
Plastics and pollution
Citizen science includes projects that help monitor plastics and their associated pollution. These include The Ocean Cleanup, #OneLess, The Big Microplastic Survey, EXXpedition and Alliance to End Plastic Waste. Ellipsis seeks to map the distribution of litter using aerial data mapping by unmanned aerial vehicles and machine learning software. A Zooniverse project called The Plastic Tide (now finished) helped train an algorithm used by Ellipsis.
Examples of relevant articles (by date):
Citizen Science Promotes Environmental Engagement: (quote) "Citizen science projects are rapidly gaining popularity among the public, in which volunteers help gather data on species that can be used by scientists in research. And it's not just adults who are involved in these projects – even kids have collected high-quality data in the US."
Tackling Microplastics on Our Own: (quote) "Plastics, ranging from the circles of soda can rings to microbeads the size of pinheads, are starting to replace images of sewage for a leading cause of pollution – especially in the ocean". Further, "With recent backing from the Crowdsourcing and Citizen Science Act, citizen science is increasingly embraced as a tool by US Federal agencies."
Citizen Scientists Are Tracking Plastic Pollution Worldwide: (quote) "Scientists who are monitoring the spread of tiny pieces of plastic throughout the environment are getting help from a small army of citizen volunteers – and they're finding bits of polymer in some of the most remote parts of North America."
Artificial intelligence and citizen scientists: Powering the clean-up of Asia Pacific's beaches: (quote) "The main objective is to support citizen scientists cleaning up New Zealand beaches and get a better understanding of why litter is turning up, so preventive and proactive action can be taken."
Citizen science could help address Canada's plastic pollution problem: (quote) "But citizen engagement and participation in science goes beyond beach cleanups, and can be used as a tool to bridge gaps between communities and scientists. These partnerships between scientists and citizen scientists have produced real world data that have influenced policy changes."
Examples of relevant scientific studies or books include (by date):
Distribution and abundance of small plastic debris on beaches in the SE Pacific (Chile): a study supported by a citizen science project: (quote) "The citizen science project 'National Sampling of Small Plastic Debris' was supported by schoolchildren from all over Chile who documented the distribution and abundance of small plastic debris on Chilean beaches. Thirty-nine schools and nearly 1,000 students from continental Chile and Easter Island participated in the activity."
Incorporating citizen science to study plastics in the environment: (quote) "Taking advantage of public interest in the impact of plastic on the marine environment, successful Citizen Science (CS) programs incorporate members of the public to provide repeated sampling for time series as well as synoptic collections over wide geographic regions."
Marine anthropogenic litter on British beaches: A 10-year nationwide assessment using citizen science data: (quote) "Citizen science projects, whereby members of the public gather information, offer a low-cost method of collecting large volumes of data with considerable temporal and spatial coverage. Furthermore, such projects raise awareness of environmental issues and can lead to positive changes in behaviours and attitudes."
Determining Global Distribution of Microplastics by Combining Citizen Science and In-Depth Case Studies: (quote) "Our first project involves the general public through citizen science. Participants collect sand samples from beaches using a basic protocol, and we subsequently extract and quantify microplastics in a central laboratory using the standard operating procedure."
Risk Perception of Plastic Pollution: Importance of Stakeholder Involvement and Citizen Science: (quote) "The chapter finally discusses how risk perception can be improved by greater stakeholder involvement and utilization of citizen science and thereby improve the foundation for timely and efficient societal measures."
Assessing the citizen science approach as tool to increase awareness on the marine litter problem: (quote) "This paper provides a quantitative assessment of students' attitude and behaviors towards marine litter before and after their participation to SEACleaner, an educational and citizen science project devoted to monitor macro- and micro-litter in an Area belonging to Pelagos Sanctuary."
Spatial trends and drivers of marine debris accumulation on shorelines in South Eleuthera, The Bahamas using citizen science: (quote) "This study measured spatial distribution of marine debris stranded on beaches in South Eleuthera, The Bahamas. Citizen science, fetch modeling, relative exposure index and predictive mapping were used to determine marine debris source and abundance."
Making citizen science count: Best practices and challenges of citizen science projects on plastics in aquatic environments: (quote) "Citizen science is a cost-effective way to gather data over a large geographical range while simultaneously raising public awareness on the problem".
White and wonderful? Microplastics prevail in snow from the Alps to the Arctic: (quote) "In March 2018, five samples were taken at different locations on Svalbard (Fig. 1A and Table 1) by citizen scientists embarking on a land expedition by ski-doo (Aemalire project). The citizens were instructed on contamination prevention and equipped with protocol forms, prerinsed 2-liter stainless steel containers (Ecotanca), a porcelain mug, a steel spoon, and a soup ladle for sampling."
Citizen sensing
Citizen sensing can be a form of citizen science: (quote) "The work of citizen sensing, as a form of citizen science, then further transforms Stengers's notion of the work of science by moving the experimental facts and collectives where scientific work is undertaken out of the laboratory of experts and into the world of citizens." Similar sensing activities include Crowdsensing and participatory monitoring. While the idea of using mobile technology to aid this sensing is not new, creating devices and systems that can be used to aid regulation has not been straightforward. Some examples of projects that include citizen sensing are:
Citizen Sense (2013–2018): (quote) "Practices of monitoring and sensing environments have migrated to everyday participatory applications, where users of smart phones and networked devices are able to engage with modes of environmental observation and data collection."
Breathe Project: (quote) "We use the best available science and technology to better understand the quality of the air we breathe and provide opportunities for citizens to engage and take action."
The Bristol Approach to Citizen Sensing: (quote) "Citizen Sensing is about empowering people and places to understand and use smart tech and data from sensors to tackle the issues they care about, connect with other people who can help, and take positive, practical action."
Luftdaten.info: (quote) "You and thousands of others around the world install self-built sensors on the outside of their homes. Luftdaten.info generates a continuously updated particulate matter map from the transmitted data."
CitiSense: (quote) "CitiSense aims to co-develop a participatory risk management system (PRMS) with citizens, local authorities and organizations which enables them to contribute to advanced climate services and enhanced urban climate resilience as well as receive recommendations that support their security."
A group of citizen scientists in a community-led project targeting toxic smoke from wood burners in Bristol, has recorded 11 breaches of World Health Organization daily guidelines for ultra-fine particulate pollution over a period of six months.
In a £7M programme funded by water regulator Ofwat, citizen scientists are being trained to test for pollution and over-abstraction in 10 river catchment areas in the UK. Sensors will be used and the information gathered will be available in a central visualisation platform. The project is led by The Rivers Trust and United Utilities and includes volunteers such as anglers testing the rivers they use. The Angling Trust provides the pollution sensors, with Kristian Kent from the Trust saying: "Citizen science is a reality of the world in the future, so they’re not going to be able to just sweep it under the carpet."
COVID-19 pandemic
Resources for computer science and scientific crowdsourcing projects concerning COVID-19 can be found on the internet or as apps. Some such projects are listed below:
The distributed computing project Folding@home launched a program in March 2020 to assist researchers around the world working on finding a cure and learning more about the coronavirus pandemic. The initial wave of projects simulated potentially druggable protein targets from SARS-CoV-2 (and also its predecessor and close relation SARS-CoV, about which significantly more data is available). By 2024, the project had been extended to look at other health issues, including Alzheimer's disease and cancer. The project asks volunteers to download the app and donate computing power for simulations.
The distributed computing project Rosetta@home also joined the effort in March 2020. The project uses volunteers' computers to model SARS-CoV-2 virus proteins to discover possible drug targets or create new proteins to neutralize the virus. Researchers revealed that, with the help of Rosetta@home, they had been able to "accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab." In 2022, the project, which runs on the BOINC platform, thanked contributors for donating their computing power and helping with work on de novo protein design, including vaccine development.
The OpenPandemics – COVID-19 project is a partnership between Scripps Research and IBM's World Community Grid for a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] which will help predict the effectiveness of a particular chemical compound as a possible treatment for COVID-19". The project asked volunteers to donate unused computing power. In 2024, the project was looking at targeting the DNA polymerase of the cytomegalovirus to identify binders.
The Eterna OpenVaccine project enables video game players to "design an mRNA encoding a potential vaccine against the novel coronavirus." In mid-2021, it was noted that the project had helped create a library of potential vaccine molecules to be tested at Stanford University; Stanford researchers also noted the importance of volunteers discussing the games and exchanging ideas.
In March 2020, the EU-Citizen.Science project had "a selection of resources related to the current COVID19 pandemic. It contains links to citizen science and crowdsourcing projects".
The COVID-19 Citizen Science project was "a new initiative by University of California, San Francisco physician-scientists" that "will allow anyone in the world age 18 or over to become a citizen scientist advancing understanding of the disease." By 2024, the Eureka platform had over 100,000 participants.
The CoronaReport digital journalism project was "a citizen science project which democratizes the reporting on the Coronavirus, and makes these reports accessible to other citizens." It was developed by the University of Edinburgh and asked people affected by Covid to share the social effects of the pandemic.
The COVID Symptom Tracker was a crowdsourced study of the symptoms of the virus. It was created in the UK by King’s College London and Guy’s and St Thomas’ Hospitals. It had two million downloads by April 2020. Within three months, information from the app had helped identify six variations of Covid. Government funding ended in early 2022, but due to the large number of volunteers, Zoe decided to continue the work to study general health. By February 2023, over 75,000 people had downloaded the renamed Zoe Habit Tracker.
The Covid Near You epidemiology tool "uses crowdsourced data to visualize maps to help citizens and public health agencies identify current and potential hotspots for the recent pandemic coronavirus, COVID-19." The site was launched in Boston in March 2020; at the end of 2020 it was rebranded to Outbreaks Near Me and tracked both Covid and flu.
The We-Care project was an initiative by University of California, Davis researchers that used anonymity and crowdsourced information to alert infected users and slow the spread of COVID-19.
COVID Radar was an app in the Netherlands, active between April 2020 and February 2022, with which users anonymously answered a short daily questionnaire asking about their symptoms, behavior, coronavirus test results, and vaccination status. Symptoms and behavior were visualized on a map and users received feedback on their individual risk and behaviors relative to the national mean. The app had over 250,000 users, who filled out the questionnaire over 8.5 million times. Research from this app continued to be used in 2024.
For coronavirus studies and information that can help enable citizen science, many online resources are available through open access and open science websites, including an intensive care medicine e-book chapter hosted by EMCrit and portals run by the Cambridge University Press, the Europe branch of the Scholarly Publishing and Academic Resources Coalition, The Lancet, John Wiley and Sons, and Springer Nature.
There have been suggestions that the pandemic and subsequent lockdowns boosted the public's awareness of and interest in citizen science, with more people around the world having the motivation and the time to become involved in helping to investigate the illness, and potentially to move on to other areas of research.
Around the world
The Citizen Science Global Partnership was created in 2022; the partnership brings together networks from Australia, Africa, Asia, Europe, South America and the USA.
Africa
In South Africa (SA), citizen science projects include the Stream Assessment Scoring System (miniSASS), which "encourages enhanced catchment management for water security in a climate stressed society."
The South African National Biodiversity Institute is partnered with iNaturalist as a platform for biodiversity observations using digital photography and geolocation technology to monitor biodiversity. Such partnerships can reduce duplication of effort, help standardise procedures and make the data more accessible.
Also in SA, "Members of the public, or 'citizen scientists' are helping researchers from the University of Pretoria to identify Phytophthora species present in the fynbos."
In June 2016, citizen science experts from across East Africa gathered in Nairobi, Kenya, for a symposium organised by the Tropical Biology Association (TBA) in partnership with the Centre for Ecology & Hydrology (CEH). The aim was "to harness the growing interest and expertise in East Africa to stimulate new ideas and collaborations in citizen science." Rosie Trevelyan of the TBA said: "We need to enhance our knowledge about the status of Africa's species and the threats facing them. And scientists can't do it all on their own. At the same time, citizen science is an extremely effective way of connecting people more closely to nature and enrolling more people in conservation action".
The website Zooniverse hosts several African citizen science projects, including: Snapshot Serengeti, Wildcam Gorongosa and Jungle Rhythms.
Nigeria has the Ibadan Bird Club, whose aim is to "exchange ideas and share knowledge about birds, and get actively involved in the conservation of birds and biodiversity."
In Namibia, Giraffe Spotter.org is a "project that will provide people with an online citizen science platform for giraffes".
Within the Republic of the Congo, the territories of an indigenous people have been mapped so that "the Mbendjele tribe can protect treasured trees from being cut down by logging companies". An Android open-source app called Sapelli was used by the Mbendjele which helped them map "their tribal lands and highlighted trees that were important to them, usually for medicinal reasons or religious significance. Congolaise Industrielle des Bois then verified the trees that the tribe documented as valuable and removed them from its cutting schedule. The tribe also documented illegal logging and poaching activities."
In West Africa, the eradication of the recent outbreak of Ebola virus disease was partly helped by citizen science. "Communities learnt how to assess the risks posed by the disease independently of prior cultural assumptions, and local empiricism allowed cultural rules to be reviewed, suspended or changed as epidemiological facts emerged." "Citizen science is alive and well in all three Ebola-affected countries. And if only a fraction of the international aid directed at rebuilding health systems were to be redirected towards support for citizen science, that might be a fitting memorial to those who died in the epidemic."
The CitSci Africa Association held its International Conference in February 2024 in Nairobi.
Asia
The Hong Kong Birdwatching Society was established in 1957 and is the only local civil society organisation dedicated to appreciating and conserving Hong Kong's birds and their natural environment. Its bird surveys go back to 1958, and it carries out a number of citizen science events, such as its yearly sparrow census.
The Bird Count India partnership consists of a large number of organizations and groups involved in birdwatching and bird surveys. They coordinate a number of citizen science projects, such as the Kerala Bird Atlas and the Mysore City Bird Atlas, that map the distribution and abundance of birds across entire Indian states.
The Taiwan Roadkill Observation Network was founded in 2011 and has more than 16,000 members as of 2019. It is a citizen science project in which roadkill across Taiwan is photographed and sent to the Endemic Species Research Institute for study. Its primary goal has been to establish an eco-friendly approach to mitigating roadkill and to popularize a national discourse on environmental issues and civil participation in scientific research. Members of the network volunteer to record the corpses of animals killed on the road or by other causes, uploading pictures and geographic locations to an internet database or sending the corpses to the Endemic Species Research Institute as specimens. Because members come from different areas of the island, the collected data serve as an animal distribution map of the island. Using the geographical data and pictures of corpses collected by the members, the community and its sponsor, the Endemic Species Research Institute, can identify hotspots and the causes of the animals' deaths.
In one of the most renowned cases, the community detected rabies thanks to its large collection of data: corpses of Melogale moschata, thought to be carriers of rabies, had accumulated over several years. Alarmed by this, the government took action to prevent the spread of rabies in Taiwan. In another case, in 2014, citizen scientists discovered birds that had died from unknown causes near an agricultural area. The Taiwan Roadkill Observation Network cooperated with National Pingtung University of Science and Technology and engaged citizen scientists to collect bird corpses. The volunteers collected 250 bird corpses for laboratory tests, which confirmed that the deaths were attributable to pesticides used on crops. This prompted the Taiwanese government to restrict pesticides, and an amendment to the Bill of Pesticide Management passed its third reading in the Legislative Yuan, establishing a pesticide control system.
These results indicate that the Taiwan Roadkill Observation Network has developed a set of shared working methods and jointly completed concrete actions. The community has achieved real changes to road design to avoid roadkill and improved the management of pesticide use and epidemic prevention, among other outcomes. By mid-2024, volunteers had observed over 293,000 animals. The network, the largest citizen science project in Taiwan, noted that more than half of the roadkill were amphibians (e.g., frogs), while one third were reptiles and birds.
The AirBox Project was launched in Taiwan to create a participatory ecosystem focused on PM2.5 monitoring through AirBox devices. By the end of 2014, the public was paying more attention to PM2.5 levels because air pollution had become worse, especially in central and southern Taiwan. High PM2.5 levels are harmful to health, causing respiratory problems among other effects. These pollution levels aroused public concern and led to an intensive debate about the sources of air pollution: some experts suggested that air quality was affected by pollutants from mainland China, while some environmentalists believed it was the result of industrialization, for example exhaust fumes from local power plants and factories. However, no one knew the answer, because the data were insufficient.
Dr. Ling-Jyh Chen, a researcher at the Institute of Information Science, Academia Sinica, launched the AirBox Project. His original idea was inspired by a popular Taiwanese slogan, "Save Your Environment by Yourself". As an expert in participatory sensing systems, he decided to take this ground-up approach to collecting PM2.5 data, so that open data and data analysis could give a better understanding of possible air pollution sources. Using this ecosystem, huge amounts of data were collected from AirBox devices and made instantly available online, informing people of PM2.5 levels. People could then take appropriate action, such as wearing a mask or staying at home to avoid the polluted environment. The data can also be analyzed to understand possible sources of pollution and provide recommendations for improving the situation.
The project had three main steps. i) Developing the AirBox device: building a device that could correctly measure PM2.5 levels was time-consuming, and it took more than three years to develop an AirBox that is easy to use yet has both high accuracy and low cost. ii) Widespread installation of AirBoxes: in the beginning, very few people were willing to install one at home because of concerns about possible harm to their health, power consumption and maintenance, so AirBoxes were only installed in a relatively small area. With help from Taiwan's LASS (Location Aware Sensing System) community, however, AirBoxes appeared in all parts of Taiwan; as of February 2017, more than 1,600 AirBoxes had been installed in more than 27 countries. iii) Open data and analysis: all measurement results are released and visualized in real time to the public through different media, and the data can be analyzed to trace pollution sources. By December 2019, there were over 4,000 AirBoxes installed across the country.
Japan has a long history of citizen science involvement; the 1,200-year-old tradition of collecting records on cherry blossom flowering is probably the world's longest-running citizen science project. One of the most influential citizen science projects has also come out of Japan: Safecast. Dedicated to open citizen science for the environment, Safecast was established in the wake of the Fukushima nuclear disaster and produces open-hardware sensors for radiation and air-pollution mapping, presenting the data via a global open data network and maps.
As technology and public interest grew, the CitizenScience.Asia group was set up in 2022; it grew from an initial hackathon in Hong Kong which worked on the 2016 Zika scare. The network is part of Citizen Science Global Partnership.
Europe
The English naturalist Charles Darwin (1809–1882) is widely regarded as having been one of the earliest citizen science contributors in Europe. A century later, citizen science was experienced by adolescents in Italy during the 1980s, working on urban energy usage and air pollution.
In his book "Citizen Science", Alan Irwin considers the role that scientific expertise can play in bringing the public and science together and building a more scientifically active citizenry, empowering individuals to contribute to scientific development. Since then, a citizen science green paper was published in 2013, and European Commission policy directives have included citizen science as one of five strategic areas, with funding allocated to support initiatives through 'Science With and For Society (SwafS)', a strand of the Horizon 2020 programme. This includes significant awards such as the EU Citizen Science Project, which is creating a hub for knowledge sharing, coordination, and action. The European Citizen Science Association (ECSA) was set up in 2014 to encourage the growth of citizen science across Europe and to increase public participation in scientific processes, mainly by initiating and supporting citizen science projects as well as conducting research. ECSA has a membership of over 250 individual and organisational members from over 30 countries across the European Union and beyond.
Examples of citizen science organisations and associations based in Europe include Biosphere Expeditions (Ireland), Bürger schaffen Wissen (Germany), the Citizen Science Lab at Leiden University (Netherlands), Ibercivis (Spain) and Österreich forscht (Austria). Other organisations are listed on the EU Citizen Science platform.
In 2023, the European Union Prize for Citizen Science was established. Bestowed through Ars Electronica, the prize was designed to honor, present and support "outstanding projects whose social and political impact advances the further development of a pluralistic, inclusive and sustainable society in Europe".
Latin America
In 2015, the Asháninka people from Apiwtxa, whose territory crosses the border between Brazil and Peru, began using the Android app Sapelli to monitor their land. The Ashaninka have "faced historical pressures of disease, exploitation and displacement, and today still face the illegal invasion of their lands by loggers and hunters. This monitoring project shows how the Apiwtxa Ashaninka from the Kampa do Rio Amônia Indigenous Territory, Brazil, are beginning to use smartphones and technological tools to monitor these illegal activities more effectively."
In Argentina, two Android smartphone applications are available for citizen science. i) AppEAR was developed at the Institute of Limnology by the researcher Joaquín Cochero and launched in May 2016. It is an "application that relies on the collaboration of mobile device users in collecting data for the study of aquatic ecosystems" (translation). Cochero stated: "There is not much citizen science in Argentina, just a few cases mostly oriented to astronomy, so ours is among the first. I have volunteers from different parts of the country who are interested in joining together to centralize data. That's great, because these types of projects require many people to participate actively and voluntarily" (translation). ii) eBird was launched in Argentina in 2013 and has so far identified 965 species of birds. eBird is "developed and managed by the Cornell Lab of Ornithology at Cornell University, one of the most important ornithological institutions in the world, and was recently presented locally with the support of the Ministry of Science, Technology and Productive Innovation of the Nation (MINCyT)" (translation).
Projects in Brazil include: i) The platform and mobile app 'Missions', developed by IBM in its São Paulo research lab with Brazil's Ministry for Environment and Innovation (BMEI). Sergio Borger, an IBM team lead in São Paulo, devised the crowdsourced approach when BMEI approached the company in 2010 looking for a way to create a central repository for rainforest data. Users can upload photos of a plant species and its components, enter its characteristics (such as color and size), compare it against a catalog photo and classify it; the classification results are juried by crowdsourced ratings. ii) Exoss Citizen Science, a member of Astronomers Without Borders, which seeks to explore the southern sky for new meteors and radiants. Users can report meteor fireballs by uploading pictures to a webpage or by linking to YouTube. iii) The Information System on Brazilian Biodiversity (SiBBr), launched in 2014 "aiming to encourage and facilitate the publication, integration, access and use of information about the biodiversity of the country." Its initial goal "was to gather 2.5 million occurrence records of species from biological collections in Brazil and abroad up to the end of 2016. It is now expected that SiBBr will reach nine million records in 2016." Andrea Portela said: "In 2016, we will begin with the citizen science. They are tools that enable anyone, without any technical knowledge, to participate. With this we will achieve greater engagement with society. People will be able to have more interaction with the platform, contribute and comment on what Brazil has." iv) The Brazilian Marine Megafauna Project (Iniciativa Pro Mar), which is working with the European CSA towards its main goal of raising society's awareness of marine life issues and of concerns about pollution and the over-exploitation of natural resources. Having started as a project monitoring manta rays, it now extends to whale sharks and to educating schools and divers in the Santos area. Its social media activities include live streaming of a citizen science course to help divers identify marine megafauna. v) A smartphone app called Plantix, developed by the Leibniz Centre for Agricultural Landscape Research (ZALF), which helps Brazilian farmers discover crop diseases more quickly and fight them more efficiently. Brazil is a very large agricultural exporter, but between 10 and 30% of its crops fail because of disease. "The database currently includes 175 frequently occurring crop diseases and pests as well as 40,000 photos"; the app's identification algorithm improves with every image and achieves a success rate of over 90 per cent once roughly 500 photos per crop disease have been gathered. vi) In an Atlantic forest region in Brazil, an effort to map the genetic riches of the soil is under way. The Drugs From Dirt initiative, based at the Rockefeller University, seeks to turn up bacteria that yield new types of antibiotics, the Brazilian region being particularly rich in potentially useful bacterial genes. Approximately a quarter of the 185 soil samples have been taken by citizen scientists, without whom the project could not run.
In Chile citizen science projects include (some websites in Spanish): i) Testing new cancer therapies with scientists from the Science Foundation for Life. ii) Monitoring the population of the Chilean bumblebee. iii) Monitoring the invasive ladybird Chinita arlequín. iv) Collecting rain water data. v) Monitoring various pollinating fly populations. vi) Providing information and field data on the abundance and distribution of various species of rockfish. vii) Investigating the environmental pollution by plastic litter.
Projects in Colombia include (some websites in Spanish): i) The Communications Project of the Humboldt Institute, along with the Organization for Education and Environmental Protection, initiated projects in the Bogotá wetlands of Córdoba and El Burro, which host a great deal of biodiversity. ii) The Model Forest of Risaralda promotes citizen participation in research on how the local environment is adapting to climate change; the first meeting took place in the Flora and Fauna Sanctuary Otún Quimbaya. iii) The Citizen Network for Environmental Monitoring (CLUSTER), based in the city of Bucaramanga, seeks to engage younger students in data science, training them to build weather stations with open repositories based on free software and open hardware. iv) The Symposium on Biodiversity has adapted the citizen science tool iNaturalist for use in Colombia. v) The Sinchi Amazonic Institute of Scientific Research seeks to encourage the development and diffusion of knowledge, values and technologies on the management of natural resources for ethnic groups in the Amazon, furthering the use of participatory action research and promoting community participation.
Since 2010, the Pacific Biodiversity Institute (PBI) has sought "volunteers to help identify, describe and protect wildland complexes and roadless areas in South America". The PBI is "engaged in an ambitious project with our Latin American conservation partners to map all the wildlands in South America, to evaluate their contribution to global biodiversity and to share and disseminate this information."
In Mexico, a citizen science project has monitored rainfall data that is linked to a hydrologic payment for ecosystem services project.
Conferences
The first Conference on Public Participation in Scientific Research was held in Portland, Oregon, in August 2012. Citizen science is now often a theme at large conferences, such as the annual meeting of the American Geophysical Union.
In 2010, 2012 and 2014 there were three Citizen Cyberscience summits, organised by the Citizen Cyberscience Centre in Geneva and University College London. The 2014 summit was hosted in London and attracted over 300 participants.
In November 2015, the ETH Zürich and University of Zürich hosted an international meeting on the "Challenges and Opportunities in Citizen Science".
The first citizen science conference hosted by the Citizen Science Association was in San Jose, California, in February 2015 in partnership with the AAAS conference. The Citizen Science Association conference, CitSci 2017, was held in Saint Paul, Minnesota, United States, between 17 and 20 May 2017. The conference had more than 600 attendees. The next CitSci was in March 2019 in Raleigh, North Carolina.
The platform "Österreich forscht" has hosted the annual Austrian citizen science conference since 2015.
In popular culture
Barbara Kingsolver’s 2012 novel Flight Behaviour looks at the effects of citizen science on a housewife in Appalachia, when her interest in butterflies brings her into contact with scientists and academics.
See also
List of citizen science projects
List of crowdsourcing projects
List of volunteer computing projects
References
Further reading
"The Mozzie Monitors program marks the first time formal mosquito trapping has been combined with citizen science." (Australian project)
Dick Kasperowski (interviewed by Ulrich Herb): "Citizen Science as democratization of science?" Telepolis, 27 August 2016
Ridley, Matt. (8 February 2012) "Following the Crowd to Citizen Science". The Wall Street Journal
Young, Jeffrey R. (28 May 2010). "Crowd Science Reaches New Heights", The Chronicle of Higher Education
External links
"Controversy over the term 'citizen science'". CBC News. 13 August 2021. Retrieved 15 April 2023.
Crowdsourcing
Human-based computation
Open science
Self-sustainability | Self-sustainability and self-sufficiency are overlapping states of being in which a person, being, or system needs little or no help from, or interaction with others. Self-sufficiency entails the self being enough (to fulfill needs), and a self-sustaining entity can maintain self-sufficiency indefinitely. These states represent types of personal or collective autonomy. A self-sufficient economy is one that requires little or no trade with the outside world and is called an autarky.
Description
Self-sustainability is a type of sustainable living in which nothing is consumed other than what is produced by the self-sufficient individuals. Examples of attempts at self-sufficiency in North America include simple living, food storage, homesteading, off-the-grid, survivalism, DIY ethic, and the back-to-the-land movement.
Practices that enable or aid self-sustainability include autonomous building, permaculture, sustainable agriculture, and renewable energy. The term is also applied to limited forms of self-sustainability, for example growing one's own food or becoming economically independent of state subsidies. The self-sustainability of an electrical installation measures its degree of grid independence and is defined as the ratio between the amount of locally produced energy that is locally consumed, either directly or after storage, and the total consumption.
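In symbols, a minimal formulation of this ratio (the notation here is illustrative, not taken from a standard): if E_self is the locally produced energy that is also locally consumed, directly or after storage, and E_total is the total consumption, then self-sustainability = E_self / E_total, which ranges from 0 for a fully grid-dependent installation to 1 for a fully grid-independent one.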
A system is self-sustaining (or self-sufficient) if it can maintain itself by independent effort. The system's self-sustainability can be measured as (see the sketch after this list):
the degree to which the system can sustain itself without external support
the fraction of time in which the system is self-sustaining
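Both measures can be made concrete for a system with metered production and consumption. The following Python sketch is illustrative only: the function names and the assumption of fixed-length metering intervals are ours, and storage is ignored, so surplus in one interval cannot offset a deficit in another.

def self_sustainability(produced, consumed):
    # Degree of self-sustainability: the share of total consumption
    # covered by local production, interval by interval.
    covered = sum(min(p, c) for p, c in zip(produced, consumed))
    return covered / sum(consumed)

def self_sustaining_fraction(produced, consumed):
    # Fraction of intervals in which production fully covers consumption.
    pairs = list(zip(produced, consumed))
    return sum(p >= c for p, c in pairs) / len(pairs)

For example, self_sustainability([3, 0], [2, 2]) returns 0.5 (the first interval is fully covered, the second not at all), and self_sustaining_fraction([3, 0], [2, 2]) also returns 0.5.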
Self-sustainability is considered one of the "ilities" and is closely related to sustainability and availability. In the economics literature, a system that has the quality of being self-sustaining is also referred to as an autarky.
Examples
Political states
Autarky exists whenever an entity can survive or continue its activities without external assistance. Autarky is not necessarily economic. For example, a military autarky would be a state that could defend itself without help from another country.
Labor
According to the Idaho Department of Labor, an employed adult shall be considered self-sufficient if the family income exceeds 200% of the Office of Management and Budget poverty income level guidelines.
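As a sketch, this rule reduces to a single comparison. The function below is illustrative (the guideline figure depends on family size and is revised yearly, so it is passed in rather than hardcoded):

def is_self_sufficient(family_income, poverty_guideline):
    # Idaho Department of Labor rule: self-sufficient if family income
    # exceeds 200% of the applicable OMB poverty income guideline.
    return family_income > 2 * poverty_guideline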
Peer-to-peer swarming
In peer-to-peer swarming systems, a swarm is self-sustaining if all the blocks of its files are available among peers (excluding seeds and publishers).
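As a minimal illustration of this condition (the data layout is an assumption, not taken from any specific system): given the set of block indices held by each peer, excluding seeds and publishers, the swarm is self-sustaining exactly when the union of those sets covers every block of the file.

def swarm_is_self_sustaining(peer_blocks, num_blocks):
    # peer_blocks: one set of block indices per leecher (seeds and
    # publishers excluded). The swarm is self-sustaining when every
    # block is held by at least one such peer.
    available = set().union(*peer_blocks) if peer_blocks else set()
    return available.issuperset(range(num_blocks))

For example, swarm_is_self_sustaining([{0, 1}, {2}], 3) returns True, while swarm_is_self_sustaining([{0, 1}], 3) returns False because block 2 would only be held by seeds.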
Discussion
Self-sustainability and survivability
Whereas self-sustainability is a quality of one's independence, survivability applies to the future maintainability of one's self-sustainability and indeed one's existence. Many believe that more self-sustainability guarantees a higher degree of survivability. However, just as many oppose this, arguing that it is not self-sustainability that is essential for survivability, but on the contrary specialization and thus dependence.
Consider the first two examples presented above. Among countries, commercial treaties are as important as self-sustainability, and an autarky is usually inefficient. Among people, social ties have been shown to be correlated with happiness and success as much as self-sustainability is.
See also
Autarchism
Cottagecore
Eating your own dog food
Five Acres and Independence
Food sovereignty
Homesteading
Individualism
Juche
List of system quality attributes
Localism
Rugged individualism
Self-help
Tiny house movement
Vegetable farming
Notes and references
External links
Foundation for Self-Sufficiency in Central America
"Self-sustainability strategies for Development Initiatives: What is self-sustainability and why is it so important?"
Applied probability
Biomass (energy) | In the context of energy production, biomass is matter from recently living (but now dead) organisms which is used for bioenergy production. Examples include wood, wood residues, energy crops, agricultural residues including straw, and organic waste from industry and households. Wood and wood residues is the largest biomass energy source today. Wood can be used as a fuel directly or processed into pellet fuel or other forms of fuels. Other plants can also be used as fuel, for instance maize, switchgrass, miscanthus and bamboo. The main waste feedstocks are wood waste, agricultural waste, municipal solid waste, and manufacturing waste. Upgrading raw biomass to higher grade fuels can be achieved by different methods, broadly classified as thermal, chemical, or biochemical.
The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide. Those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will remove carbon dioxide from the air as they grow. However, the farming of biomass feedstocks can reduce biodiversity, degrade soils, and take land out of food production. It may also consume water for irrigation and require fertilisers.
Terminology
Biomass (in the context of energy generation) is matter from recently living (but now dead) organisms which is used for bioenergy production. There are variations in how such biomass for energy is defined, e.g. only from plants, or from plants and algae, or from plants and animals. The vast majority of biomass used for bioenergy does come from plants. Bioenergy is a type of renewable energy with potential to assist with climate change mitigation.
Some people use the terms biomass and biofuel interchangeably, but it is now more common to consider biofuel to be a liquid or gaseous fuel used for transportation, as defined by government authorities in the US and EU. From that perspective, biofuel is a subset of biomass.
The European Union's Joint Research Centre defines solid biofuel as raw or processed organic matter of biological origin used for energy, such as firewood, wood chips, and wood pellets.
Types and uses
Different types of biomass are used for different purposes:
Primary biomass sources that are appropriate for heat or electricity generation but not for transport include: wood, wood residues, wood pellets, agricultural residues, organic waste.
Biomass that is processed into transport fuels can come from corn, sugar cane, and soy.
Biomass is categorized either as biomass harvested directly for energy (primary biomass), or as residues and waste (secondary biomass).
Biomass harvested directly for energy
The main biomass types harvested directly for energy are wood, some food crops and all perennial energy crops. One third of the global forest area of 4 billion hectares is used for wood production or other commercial purposes, and forests provide 85% of all biomass used for energy globally. In the EU, forests provide 60% of all biomass used for energy, with wood residues and waste being the largest source.
Woody biomass used for energy often consists of trees and bushes harvested for traditional cooking and heating purposes, particularly in developing countries, with 25 EJ per year used globally for these purposes. This practice is highly polluting. The World Health Organization (WHO) estimates that cooking-related pollution causes 3.8 million annual deaths. The United Nations Sustainable Development Goal 7 aims for the traditional use of biomass for cooking to be phased out by 2030. Short-rotation coppices and short-rotation forests are also harvested directly for energy, providing 4 EJ of energy, and are considered sustainable. Together with perennial energy crops, these are estimated to have the potential to provide at least 25 EJ annually by 2050.
Food crops harvested for energy include sugar-producing crops (such as sugarcane), starch-producing crops (such as maize), and oil-producing crops (such as rapeseed). Sugarcane is a perennial crop, while corn and rapeseed are annual crops. Sugar- and starch-producing crops are used to make bioethanol, and oil-producing crops are used to make biodiesel. The United States is the largest producer of bioethanol, while the European Union is the largest producer of biodiesel. The global production of bioethanol and biodiesel provides 2.2 and 1.5 EJ of energy per year, respectively. Biofuel made from food crops harvested for energy is also known as "first-generation" or "traditional" biofuel and has relatively low emission savings.
The IPCC estimates that between 0.32 and 1.4 billion hectares of marginal land are suitable for bioenergy worldwide.
Biomass in the form of residues and waste
Residues and waste are by-products from biological material harvested mainly for non-energy purposes. The most important by-products are wood residues, agricultural residues and municipal/industrial waste:
Wood residues are by-products from forestry operations or from the wood processing industry. Had the residues not been collected and used for bioenergy, they would have decayed (and therefore produced emissions) on the forest floor or in landfills, or been burnt (and produced emissions) at the side of the road in forests or outside wood processing facilities.
The by-products from forestry operations are called logging residues or forest residues, and consist of tree tops, branches, stumps, damaged or dying or dead trees, irregular or bent stem sections, thinnings (small trees that are cleared away in order to help the bigger trees grow large), and trees removed to reduce wildfire risk. The extraction level of logging residues differs from region to region, but there is an increasing interest in using this feedstock, since the sustainable potential is large (15 EJ annually). 68% of the total forest biomass in the EU consists of wood stems, and 32% consists of stumps, branches and tops.
The by-products from the wood processing industry are called wood processing residues and consist of cut offs, shavings, sawdust, bark, and black liquor. Wood processing residues have a total energy content of 5.5 EJ annually. Wood pellets are mainly made from wood processing residues, and have a total energy content of 0.7 EJ. Wood chips are made from a combination of feedstocks, and have a total energy content of 0.8 EJ.
The energy content in agricultural residues used for energy is approximately 2 EJ. However, agricultural residues have a large untapped potential. The energy content in the global production of agricultural residues has been estimated at 78 EJ annually, with the largest share from straw (51 EJ). Others have estimated between 18 and 82 EJ. The use of agricultural residues and waste that is both sustainable and economically feasible is expected to increase to between 37 and 66 EJ in 2030.
Municipal waste produced 1.4 EJ and industrial waste 1.1 EJ. Wood waste from cities and industry also produced 1.1 EJ. The sustainable potential for wood waste has been estimated at 2–10 EJ. The IEA recommends a dramatic increase in waste utilization, to 45 EJ annually by 2050.
Biomass conversion
Raw biomass can be upgraded into better and more practical fuel simply by compacting it (e.g. wood pellets), or by different conversions broadly classified as thermal, chemical, and biochemical. Biomass conversion reduces transport costs, since it is cheaper to transport high-density commodities.
Thermal conversion
Thermal upgrading produces solid, liquid or gaseous fuels, with heat as the dominant conversion driver. The basic alternatives are torrefaction, pyrolysis, and gasification; these are separated principally by how far the chemical reactions involved are allowed to proceed. The advancement of the chemical reactions is mainly controlled by how much oxygen is available, and the conversion temperature.
Torrefaction is a mild form of pyrolysis where organic materials are heated to 400–600 °F (200–300 °C) in a no-to-low oxygen environment. The heating process removes (via gasification) the parts of the biomass that have the lowest energy content, while the parts with the highest energy content remain. That is, approximately 30% of the biomass is converted to gas during the torrefaction process, while 70% remains, usually in the form of compacted pellets or briquettes. This solid product is water resistant, easy to grind, non-corrosive, and it contains approximately 85% of the original biomass energy. Because the mass has shrunk more than the energy content, the calorific value of torrefied biomass increases significantly, to the extent that it can compete with coals used for electricity generation (steam/thermal coals). The energy density of the most common steam coals today is 22–26 GJ/t. There are other less common, more experimental or proprietary thermal processes that may offer benefits, such as hydrothermal upgrading (sometimes called "wet" torrefaction). The hydrothermal upgrade path can be used for both low and high moisture content biomass, e.g. aqueous slurries.
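The arithmetic behind this densification can be made explicit. In the sketch below, the 70% mass yield and 85% energy yield come from the figures above, while the 19 GJ/t energy density assumed for the raw feedstock is an illustrative value, not taken from the text:

```python
# Figures from the text: ~70% of the mass but ~85% of the energy remains.
mass_yield = 0.70
energy_yield = 0.85

densification = energy_yield / mass_yield        # ≈ 1.21
raw_energy_density = 19.0                        # GJ/t, assumed value for dry wood
torrefied_density = raw_energy_density * densification

print(f"Energy densification factor: {densification:.2f}")
print(f"Torrefied biomass: {torrefied_density:.1f} GJ/t (steam coal: 22-26 GJ/t)")
```

Under these assumptions the torrefied product reaches roughly 23 GJ/t, which is why it can compete with steam coals.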
Pyrolysis entails heating organic materials to 800–900 °F (400–500 °C) in the near complete absence of oxygen. Biomass pyrolysis produces fuels such as bio-oil, charcoal, methane, and hydrogen. Hydrotreating is used to process bio-oil (produced by fast pyrolysis) with hydrogen under elevated temperatures and pressures in the presence of a catalyst to produce renewable diesel, renewable gasoline, and renewable jet fuel.
Gasification entails heating organic materials to 1,400–1,700 °F (800–900 °C) with injections of controlled amounts of oxygen and/or steam into the vessel to produce a gas rich in carbon monoxide and hydrogen, called synthesis gas or syngas. Syngas can be used as a fuel for diesel engines, for heating, and for generating electricity in gas turbines. It can also be treated to separate the hydrogen from the gas, and the hydrogen can be burned or used in fuel cells. The syngas can be further processed to produce liquid fuels using the Fischer-Tropsch synthesis process.
Chemical conversion
A range of chemical processes may be used to convert biomass into other forms, such as to produce a fuel that is more practical to store, transport and use, or to exploit some property of the process itself. Many of these processes are based in large part on similar coal-based processes, such as the Fischer-Tropsch synthesis. A chemical conversion process known as transesterification is used for converting vegetable oils, animal fats, and greases into fatty acid methyl esters (FAME), which are used to produce biodiesel.
Biochemical conversion
Biochemical processes have developed in nature to break down the molecules of which biomass is composed, and many of these can be harnessed. In most cases, microorganisms are used to perform the conversion. The processes are called anaerobic digestion, fermentation, and composting.
Fermentation converts biomass into bioethanol, and anaerobic digestion converts biomass into renewable natural gas (biogas). Bioethanol is used as a vehicle fuel. Renewable natural gas—also called biogas or biomethane—is produced in anaerobic digesters at sewage treatment plants and at dairy and livestock operations. It also forms in and may be captured from solid waste landfills. Properly treated renewable natural gas has the same uses as fossil fuel natural gas.
Climate impacts
Short-term vs long-term climate benefits
Regarding the issue of climate consequences for modern bioenergy, IPCC states: "Life-cycle GHG emissions of modern bioenergy alternatives are usually lower than those for fossil fuels." Consequently, most of IPCC's GHG mitigation pathways include substantial deployment of bioenergy technologies.
Some research groups state that even if the European and North American forest carbon stock is increasing, it simply takes too long for harvested trees to grow back. Bioenergy from sources with high payback and parity times takes a long time to have an impact on climate change mitigation. They therefore suggest that the EU should adjust its sustainability criteria so that only renewable energy with carbon payback times of less than 10 years is defined as sustainable, for instance wind, solar, biomass from wood residues and tree thinnings that would otherwise be burnt or decompose relatively fast, and biomass from short rotation coppicing (SRC).
The IPCC states: "While individual stands in a forest may be either sources or sinks, the forest carbon balance is determined by the sum of the net balance of all stands." The IPCC also states that the only universally applicable approach to carbon accounting is the one that accounts for both carbon emissions and carbon removals (absorption) for managed lands (e.g. forest landscapes). When the total is calculated, natural disturbances like fires and insect infestations are subtracted, and what remains is the human influence.
IEA Bioenergy states that an exclusive focus on the short term makes it harder to achieve efficient carbon mitigation in the long term, and compares investments in new bioenergy technologies with investments in other renewable energy technologies that only provide emission reductions after 2030, for instance the scaling-up of battery manufacturing or the development of rail infrastructure. Forest carbon emission avoidance strategies give a short-term mitigation benefit, but the long-term benefits from sustainable forestry activities provide ongoing forest product and energy resources.
Pathways with limited or no bioenergy lead to increased climate change or shift bioenergy's mitigation load to other sectors; in addition, mitigation costs increase.
Carbon accounting system boundaries
Carbon positive scenarios are net emitters of CO2, carbon negative scenarios are net absorbers of CO2, while carbon neutral scenarios balance emissions and absorption equally.
It is common to include alternative scenarios (also called "reference scenarios" or "counterfactuals") for comparison. The alternative scenarios range from scenarios with only modest changes compared to the existing project, all the way to radically different ones (i.e. forest protection or "no-bioenergy" counterfactuals.) Generally, the difference between scenarios is seen as the actual carbon mitigation potential of the scenarios.
In addition to the choice of alternative scenario, other choices have to be made as well. The so-called "system boundaries" determine which carbon emissions/absorptions will be included in the calculation, and which will be excluded. System boundaries include temporal, spatial, efficiency-related and economic boundaries.
For example, the actual carbon intensity of bioenergy varies with biomass production techniques and transportation lengths.
Temporal system boundaries
The temporal boundaries define when to start and end carbon counting. Sometimes "early" events are included in the calculation, for instance carbon absorption going on in the forest before the initial harvest. Sometimes "late" events are included as well, for instance emissions caused by end-of-life activities for the infrastructure involved, e.g. demolition of factories. Since the emission and absorption of carbon related to a project or scenario changes with time, the net carbon emission can either be presented as time-dependent (for instance a curve which moves along a time axis), or as a static value; this shows average emissions calculated over a defined time period.
The time-dependent net emission curve will typically show high emissions at the beginning (if the counting starts when the biomass is harvested.) Alternatively, the starting point can be moved back to the planting event; in this case the curve can potentially move below zero (into carbon negative territory) if there is no carbon debt from land use change to pay back, and in addition more and more carbon is absorbed by the planted trees. The emission curve then spikes upward at harvest. The harvested carbon is then being distributed into other carbon pools, and the curve moves in tandem with the amount of carbon that is moved into these new pools (Y axis), and the time it takes for the carbon to move out of the pools and return to the forest via the atmosphere (X axis). As described above, the carbon payback time is the time it takes for the harvested carbon to be returned to the forest, and the carbon parity time is the time it takes for the carbon stored in two competing scenarios to reach the same level.
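One common formalisation of these definitions is to scan cumulative net-emission curves for the first years at which the payback and parity conditions are met. The sketch below assumes two invented curves (one for the bioenergy scenario, one for a fossil reference); the shapes and rates are for illustration only:

```python
import numpy as np

def payback_and_parity(bio_cumulative, fossil_cumulative):
    """Return (payback_year, parity_year) given cumulative net emissions
    per year for a bioenergy scenario and a fossil reference scenario.
    Payback: first year the bioenergy scenario's cumulative net emissions
    fall back to zero or below. Parity: first year they fall to or below
    the fossil scenario's level."""
    payback = next((t for t, e in enumerate(bio_cumulative) if e <= 0), None)
    parity = next(
        (t for t, (b, f) in enumerate(zip(bio_cumulative, fossil_cumulative)) if b <= f),
        None,
    )
    return payback, parity

# Invented curves: a 100 tCO2 spike at harvest, re-absorbed by regrowth at
# 2 tCO2/yr, versus fossil emissions accumulating at 1.2 tCO2/yr.
years = np.arange(101)
bio = 100 - 2.0 * years
fossil = 1.2 * years
print(payback_and_parity(bio, fossil))  # (50, 32)
```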
The static carbon emission value is produced by calculating the average annual net emission for a specific time period. The specific time period can be the expected lifetime of the infrastructure involved (typical for life cycle assessments, LCAs), policy-relevant time horizons inspired by the Paris agreement (for instance remaining time until 2030, 2050 or 2100), time spans based on different global warming potentials (GWP; typically 20 or 100 years), or other time spans. In the EU, a time span of 20 years is used when quantifying the net carbon effects of a land use change. Generally in legislation, the static number approach is preferred over the dynamic, time-dependent curve approach. The number is expressed as a so-called "emission factor" (net emission per produced energy unit, for instance kg CO2e per GJ), or even simpler as an average greenhouse gas savings percentage for specific bioenergy pathways. The EU's published greenhouse gas savings percentages for specific bioenergy pathways used in the Renewable Energy Directive (RED) and other legal documents are based on life cycle assessments (LCAs).
Spatial system boundaries
The spatial boundaries define "geographical" borders for carbon emission/absorption calculations. The two most common spatial boundaries for CO2 absorption and emission in forests are 1.) along the edges of a particular forest stand and 2.) along the edges of a whole forest landscape, which includes many forest stands of increasing age (the forest stands are harvested and replanted, one after the other, over as many years as there are stands). A third option is the so-called increasing stand level carbon accounting method. The researcher has to decide whether to focus on the individual stand, an increasing number of stands, or the whole forest landscape. The IPCC recommends landscape-level carbon accounting.
Further, the researcher has to decide whether emissions from direct/indirect land use change should be included in the calculation. Most researchers include emissions from direct land use change, for instance the emissions caused by cutting down a forest in order to start some agricultural project there instead. The inclusion of indirect land use change effects is more controversial, as they are difficult to quantify accurately. Other choices involve defining the likely spatial boundaries of forests in the future.
Efficiency-related system boundaries
The efficiency-related boundaries define a range of fuel substitution efficiencies for different biomass-combustion pathways. Different supply chains emit different amounts of carbon per supplied energy unit, and different combustion facilities convert the chemical energy stored in different fuels to heat or electrical energy with different efficiencies. The researcher has to know about this and choose a realistic efficiency range for the different biomass-combustion paths under consideration. The chosen efficiencies are used to calculate so-called "displacement factors" – single numbers that show how efficiently fossil carbon is substituted by biogenic carbon. If for instance 10 tonnes of carbon are combusted with an efficiency half that of a modern coal plant, only 5 tonnes of coal would actually be counted as displaced (displacement factor 0.5).
Generally, fuel burned in inefficient (old or small) combustion facilities gets assigned lower displacement factors than fuel burned in efficient (new or large) facilities, since more fuel has to be burned (and therefore more CO2 released) in order to produce the same amount of energy.
The displacement factor varies with the carbon intensity of both the biomass fuel and the displaced fossil fuel. If or when bioenergy can achieve negative emissions (e.g. from afforestation, energy grass plantations and/or bioenergy with carbon capture and storage (BECCS)), or if fossil fuel energy sources with higher emissions in the supply chain start to come online (e.g. because of fracking, or increased use of shale gas), the displacement factor will start to rise. On the other hand, if or when new baseload energy sources with lower emissions than fossil fuels start to come online, the displacement factor will start to drop. Whether a displacement factor change is included in the calculation or not depends on whether or not it is expected to take place within the time period covered by the relevant scenario's temporal system boundaries.
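The worked example in the efficiency-related section above (10 tonnes combusted at half a modern coal plant's efficiency, displacement factor 0.5) reduces to a one-line calculation. In this sketch the two efficiency values are assumed purely for illustration:

```python
def displacement_factor(bio_efficiency, fossil_efficiency):
    """Fraction of fossil fuel counted as displaced per unit of biomass
    burned, based purely on relative conversion efficiencies."""
    return bio_efficiency / fossil_efficiency

# The text's example: combustion at half a modern coal plant's efficiency.
# The 20% and 40% efficiency values are assumed for illustration.
tonnes_burned = 10.0
df = displacement_factor(bio_efficiency=0.20, fossil_efficiency=0.40)
print(f"Displacement factor: {df}; coal displaced: {tonnes_burned * df:.0f} tonnes")
# -> Displacement factor: 0.5; coal displaced: 5 tonnes
```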
Economic system boundaries
The economic boundaries define which market effects to include in the calculation, if any. Changed market conditions can lead to small or large changes in carbon emissions and absorptions from supply chains and forests, for instance changes in forest area as a response to changes in demand. Macroeconomic events/policy changes can have impacts on forest carbon stock. Like indirect land use changes, however, economic changes can be difficult to quantify, so some researchers prefer to leave them out of the calculation.
System boundary impacts
The chosen system boundaries are very important for the calculated results. Shorter payback/parity times are calculated when fossil carbon intensity, forest growth rate and biomass conversion efficiency increase, or when the initial forest carbon stock and/or harvest level decreases. Shorter payback/parity times are also calculated when the researcher chooses landscape level over stand level carbon accounting (if carbon accounting starts at the harvest rather than at the planting event). Conversely, longer payback/parity times are calculated when carbon intensity, growth rate and conversion efficiency decrease, or when the initial carbon stock and/or harvest level increases, or when the researcher chooses stand level over landscape level carbon accounting.
Critics argue that unrealistic system boundary choices are made, or that narrow system boundaries lead to misleading conclusions. Others argue that the wide range of results shows that there is too much leeway available and that the calculations therefore are useless for policy development. The EU's Joint Research Centre agrees that different methodologies produce different results, but also argues that this is to be expected, since different researchers consciously or unconsciously choose different alternative scenarios/methodologies as a result of their ethical ideals regarding man's optimal relationship with nature. The ethical core of the sustainability debate should be made explicit by researchers, rather than hidden away.
Comparisons of GHG emissions at the point of combustion
GHG emissions per produced energy unit at the point of combustion depend on moisture content in the fuel, chemical differences between fuels and conversion efficiencies. For example, raw biomass can have higher moisture content compared to some common coal types. When this is the case, more of the wood's inherent energy must be spent solely on evaporating moisture, compared to the drier coal, which means that the amount of CO2 emitted per unit of produced heat will be higher.
Many biomass-only combustion facilities are relatively small and inefficient, compared to the typically much larger coal plants, which further increases emissions per unit of produced energy. The moisture problem described above can be mitigated by modern combustion facilities.
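The effect of moisture can be quantified with the standard net-heating-value correction, which subtracts the latent heat needed to evaporate the fuel's water (about 2.443 MJ per kg of water). The heating values and moisture contents below are assumed, illustrative figures:

```python
LATENT_HEAT_WATER = 2.443  # MJ per kg of water evaporated

def net_heating_value(dry_nhv, moisture_fraction):
    """Net heating value (MJ/kg, as received): the dry-basis value scaled by
    the dry fraction, minus the energy spent evaporating the fuel's moisture."""
    return dry_nhv * (1 - moisture_fraction) - LATENT_HEAT_WATER * moisture_fraction

# Assumed, illustrative values: fresh wood chips at 50% moisture vs coal at 10%.
wood = net_heating_value(19.0, 0.50)   # ≈ 8.3 MJ/kg
coal = net_heating_value(25.0, 0.10)   # ≈ 22.3 MJ/kg
print(f"Wood chips: {wood:.1f} MJ/kg; coal: {coal:.1f} MJ/kg")
# Less useful heat per kg of fuel means more fuel burned, and more CO2
# emitted, per unit of delivered heat.
```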
Forest biomass on average produces 10–16% more CO2 than coal per unit of produced energy. However, focusing on gross emissions misses the point: what counts is the net climate effect from emissions and absorption, taken together. IEA Bioenergy concludes that the additional CO2 from biomass "[...] is irrelevant if the biomass is derived from sustainably managed forests."
Climate impacts expressed as varying with time
The use of boreal stemwood harvested exclusively for bioenergy has a positive climate impact only in the long term, while the use of wood residues has a positive climate impact also in the short to medium term.
Short carbon payback/parity times are produced when the most realistic no-bioenergy scenario is a traditional forestry scenario where "good" wood stems are harvested for lumber production, and residues are burned or left behind in the forest or in landfills. The collection of such residues provides material which "[...] would have released its carbon (via decay or burning) back to the atmosphere anyway (over time spans defined by the biome's decay rate) [...]." In other words, payback and parity times depend on the decay speed. The decay speed depends on a.) location (because decay speed is "[...] roughly proportional to temperature and rainfall [...]"), and b.) the thickness of the residues. Residues decay faster in warm and wet areas, and thin residues decay faster than thick residues. Thin residues in warm and wet temperate forests therefore have the fastest decay, while thick residues in cold and dry boreal forests have the slowest decay. If the residues instead are burned in the no-bioenergy scenario, e.g. outside the factories or at roadside in the forests, emissions are instant. In this case, parity times approach zero.
Like other scientists, the JRC staff note the high variability in carbon accounting results, and attribute this to different methodologies. In the studies examined, the JRC found carbon parity times of 0 to 400 years for stemwood harvested exclusively for bioenergy, depending on different characteristics and assumptions for both the forest/bioenergy system and the alternative fossil system, with the emission intensity of the displaced fossil fuels seen as the most important factor, followed by conversion efficiency and biomass growth rate/rotation time. Other factors relevant for the carbon parity time are the initial carbon stock and the existing harvest level; both higher initial carbon stock and higher harvest level means longer parity times. Liquid biofuels have high parity times because about half of the energy content of the biomass is lost in the processing.
Climate impacts expressed as static numbers
EU's Joint Research Centre has examined a number of bioenergy emission estimates found in the literature, and calculated greenhouse gas savings percentages for bioenergy pathways in heat production, transportation fuel production and electricity production, based on those studies. The calculations are based on the attributional LCA accounting principle. They include all supply chain emissions, from raw material extraction, through energy and material production and manufacturing, to end-of-life treatment and final disposal. They also include emissions related to the production of the fossil fuels used in the supply chain. They exclude emission/absorption effects that take place outside the system boundaries, for instance market related, biogeophysical (e.g. albedo), and time-dependent effects. The authors conclude that "[m]ost bio-based commodities release less GHG than fossil products along their supply chain; but the magnitude of GHG emissions vary greatly with logistics, type of feedstocks, land and ecosystem management, resource efficiency, and technology."
Because of the varied climate mitigation potential for different biofuel pathways, governments and organizations set up different certification schemes to ensure that biomass use is sustainable, for instance the RED (Renewable Energy Directive) in the EU and the ISO standard 13065 by the International Organization for Standardization. In the US, the RFS (Renewable Fuel Standard) limits the use of traditional biofuels and defines the minimum life-cycle GHG emissions that are acceptable. Biofuels are considered traditional if they achieve up to 20% GHG emission reduction compared to the petrochemical equivalent, advanced if they save at least 50%, and cellulosic if they save more than 60%.
The EU's Renewable Energy Directive (RED) states that the typical greenhouse gas emission savings when replacing fossil fuels with wood pellets from forest residues for heat production vary between 69% and 77%, depending on transport distance: when the distance is between 0 and 2,500 km, emission savings are 77%; they drop to 75% when the distance is between 2,500 and 10,000 km, and to 69% when the distance is above 10,000 km. When stemwood is used, emission savings vary between 70% and 77%, depending on transport distance. When wood industry residues are used, savings vary between 79% and 87%.
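Encoded as a lookup, the forest-residue pellet figures quoted above look like the following sketch; the handling of the exact band boundaries is an assumption, since the figures as quoted do not specify whether they are inclusive:

```python
def pellet_savings_forest_residues(distance_km):
    """Typical GHG savings (%) for heat from forest-residue wood pellets,
    per the RED figures quoted above."""
    if distance_km <= 2500:
        return 77
    if distance_km <= 10000:
        return 75
    return 69

print(pellet_savings_forest_residues(1800))    # 77
print(pellet_savings_forest_residues(12000))   # 69
```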
Since the long payback and parity times calculated for some forestry projects are seen as a non-issue for energy crops (except in the cases mentioned above), researchers instead calculate static climate mitigation potentials for these crops, using LCA-based carbon accounting methods. A particular energy crop-based bioenergy project is considered carbon positive, carbon neutral or carbon negative based on the total amount of CO2 equivalent emissions and absorptions accumulated throughout its entire lifetime: if emissions during agriculture, processing, transport and combustion are higher than what is absorbed (and stored) by the plants, both above and below ground, during the project's lifetime, the project is carbon positive. Likewise, if total absorption is higher than total emissions, the project is carbon negative. In other words, carbon negativity is possible when net carbon accumulation more than compensates for net lifecycle greenhouse gas emissions.
Typically, perennial crops sequester more carbon than annual crops because the root buildup is allowed to continue undisturbed over many years. Also, perennial crops avoid the yearly tillage procedures (plowing, digging) associated with growing annual crops. Tilling helps the soil microbe populations to decompose the available carbon, producing CO2.
There is now (2018) consensus in the scientific community that "[...] the GHG [greenhouse gas] balance of perennial bioenergy crop cultivation will often be favourable [...]", also when considering the implicit direct and indirect land use changes.
Albedo and evapotranspiration
Environmental impacts
The environmental impacts of biomass production need to be taken into account. For instance in 2022, IEA stated that "bioenergy is an important pillar of decarbonisation in the energy transition as a near zero-emission fuel", and that "more efforts are needed to accelerate modern bioenergy deployment to get on track with the Net Zero Scenario [....] while simultaneously ensuring that bioenergy production does not incur negative social and environmental consequences."
Sustainable forestry and forest protection
IPCC states that there is disagreement about whether the global forest is shrinking or not, and quote research indicating that tree cover has increased 7.1% between 1982 and 2016. The IPCC writes: "While above-ground biomass carbon stocks are estimated to be declining in the tropics, they are increasing globally due to increasing stocks in temperate and boreal forests [...]."
Old trees have a very high carbon absorption rate, and felling old trees means that this large potential for future carbon absorption is lost. There is also a loss of soil carbon due to the harvest operations.
Old trees absorb more CO2 than young trees, because of the larger leaf area in full grown trees. However, the old forest (as a whole) will eventually stop absorbing CO2 because CO2 emissions from dead trees cancel out the remaining living trees' CO2 absorption. Old forests (or forest stands) are also vulnerable to natural disturbances that produce CO2. The IPCC found that "[...] landscapes with older forests have accumulated more carbon but their sink strength is diminishing, while landscapes with younger forests contain less carbon but they are removing CO2 from the atmosphere at a much higher rate [...]."
The IPCC states that the net climate effect from conversion of unmanaged to managed forest can be positive or negative, depending on circumstances. The carbon stock is reduced, but since managed forests grow faster than unmanaged forests, more carbon is absorbed. Positive climate effects are produced if the harvested biomass is used efficiently. There is a tradeoff between the benefits of having a maximized forest carbon stock, not absorbing any more carbon, and the benefits of having a portion of that carbon stock "unlocked", and instead working as a renewable fossil fuel replacement tool, for instance in sectors which are difficult or expensive to decarbonize.
The "competition" between locked-away and unlocked forest carbon might be won by the unlocked carbon: "In the long term, using sustainably produced forest biomass as a substitute for carbon-intensive products and fossil fuels provides greater permanent reductions in atmospheric CO2 than preservation does."
IEA Bioenergy writes: "forests managed for producing sawn timber, bioenergy and other wood products can make a greater contribution to climate change mitigation than forests managed for conservation alone." Three reasons are given:
The forest's ability to act as a carbon sink declines as it matures.
Wood products can replace other materials that emit more GHGs during production.
"Carbon in forests is vulnerable to loss through natural events such as insect infestations or wildfires"
Data from FAO show that most wood pellets are produced in regions dominated by sustainably managed forests, such as Europe and North America. Europe (including Russia) produced 54% of the world's wood pellets in 2019, and the forest carbon stock in this area increased from 158.7 to 172.4 Gt between 1990 and 2020. In the EU, above-ground forest biomass increases by 1.3% per year on average; however, the increase is slowing down because the forests are maturing.
The United Kingdom Emissions Trading System allows operators of CO2-generating installations to apply a zero emissions factor to the biomass fraction used for non-energy purposes, while use for energy purposes (electricity generation, heating) requires additional sustainability certification of the biomass used.
Biodiversity
Biomass production for bioenergy can have negative impacts on biodiversity. Oil palm and sugar cane are examples of crops that have been linked to reduced biodiversity. In addition, changes in biodiversity also impact primary production, which in turn affects decomposition and soil heterotrophic organisms.
Win-win scenarios (good for climate, good for biodiversity) include:
Increased use of whole trees from coppice forests, increased use of thin forest residues from boreal forests with slow decay rates, and increased use of all kinds of residues from temperate forests with faster decay rates;
Multi-functional bioenergy landscapes, instead of expansion of monoculture plantations;
Afforestation of former agricultural land with mixed or naturally regenerating forests.
Win-lose scenarios (good for the climate, bad for biodiversity) include afforestation on ancient, biodiversity-rich grassland ecosystems which were never forests, and afforestation of former agricultural land with monoculture plantations.
Lose-win scenarios (bad for the climate, good for biodiversity) include natural forest expansion on former agricultural land.
Lose-lose scenarios include increased use of thick forest residues like stumps from some boreal forests with slow decay rates, and conversion of natural forests into forest plantations.
Pollution
Other problems are pollution of soil and water from fertiliser/pesticide use, and emission of ambient air pollutants, mainly from open field burning of residues.
The traditional use of wood in cook stoves and open fires produces pollutants, which can lead to severe health and environmental consequences. However, a shift to modern bioenergy contributes to improved livelihoods and can reduce land degradation and impacts on ecosystem services. According to the IPCC, there is strong evidence that modern bioenergy has "large positive impacts" on air quality. Traditional bioenergy is inefficient, and phasing out this energy source has both large health benefits and large economic benefits. When combusted in industrial facilities, most of the pollutants originating from woody biomass are reduced by 97–99%, compared to open burning. Combustion of woody biomass produces lower amounts of particulate matter than coal for the same amount of electricity generated.
See also
Bioenergetics
Bioenergy Action Plan
Bioenergy with carbon capture and storage
Biomass heating system
Biomass to liquid
Bioproducts
Biorefinery
Biochar
Cogeneration
Carbon footprint
Energy forestry
Pellet fuel
Solid fuel
Renewable energy transition
World Bioenergy Association
References
Sources
IPCC reports
IEA reports
Other sources
Quotes and comments
External links
Biomass explained (U.S. Energy Information Administration)
Biomass Energy (National Geographic)
Bioenergy
Renewable energy
Sustainable energy
Natural resource management
Natural resource management (NRM) is the management of natural resources such as land, water, soil, plants and animals, with a particular focus on how management affects the quality of life for both present and future generations (stewardship).
Natural resource management deals with managing the way in which people and natural landscapes interact. It brings together natural heritage management, land use planning, water management, bio-diversity conservation, and the future sustainability of industries like agriculture, mining, tourism, fisheries and forestry. It recognizes that people and their livelihoods rely on the health and productivity of our landscapes, and their actions as stewards of the land play a critical role in maintaining this health and productivity.
Natural resource management specifically focuses on a scientific and technical understanding of resources and ecology and the life-supporting capacity of those resources. Environmental management is similar to natural resource management. In academic contexts, the sociology of natural resources is closely related to, but distinct from, natural resource management.
History
The emphasis on sustainability can be traced back to early attempts to understand the ecological nature of North American rangelands in the late 19th century, and the resource conservation movement of the same time. This type of analysis coalesced in the 20th century with recognition that preservationist conservation strategies had not been effective in halting the decline of natural resources. A more integrated approach was implemented recognising the intertwined social, cultural, economic and political aspects of resource management. A more holistic, national and even global form evolved, from the Brundtland Commission and the advocacy of sustainable development.
In 2005 the government of New South Wales, Australia established a Standard for Quality Natural Resource Management, to improve the consistency of practice, based on an adaptive management approach.
In the United States, the most active areas of natural resource management are fisheries management, wildlife management (often associated with ecotourism), rangeland management, and forest management. In Australia, water sharing, such as the Murray Darling Basin Plan, and catchment management are also significant.
How to prevent natural resource depletion
Here are some ways to prevent natural resource depletion and harmful changes in land and sea use:
Reduce, reuse, and recycle
Volunteer
Educate
Conserve water
Choose sustainable
Shop wisely
Use long-lasting light bulbs
Plant a tree
Stop pollution
Control hunting
Prevent invasive species
Secure Indigenous land rights
Reforestation and landscape restoration
Establish new protected areas
Redesign food systems
Use finance as a tool
Ownership regimes
Natural resource management approaches can be categorised according to the kind of rights that stakeholders hold over the natural resources:
State property: Ownership and control over the use of resources is in the hands of the state. Individuals or groups may be able to make use of the resources, but only with the permission of the state. National forests, national parks and military reservations are some US examples.
Private property: Any property owned by a defined individual or corporate entity. Both the benefit and duties to the resources fall to the owner(s). Private land is the most common example.
Common property: It is the private property of a group. The group may vary in size, nature and internal structure, e.g. an indigenous community or the inhabitants of a village. Some examples of common property are community forests.
Non-property (open access): There is no definite owner of these properties. Each potential user has equal ability to use it as they wish. These areas are the most exploited. It is said that "Nobody's property is Everybody's property". An example is a lake fishery. Common land may exist without ownership, in which case in the UK it is vested in a local authority.
Hybrid: Many ownership regimes governing natural resources will contain parts of more than one of the regimes described above, so natural resource managers need to consider the impact of hybrid regimes. An example of such a hybrid is native vegetation management in NSW, Australia, where legislation recognises a public interest in the preservation of native vegetation, but where most native vegetation exists on private land.
Stakeholder analysis
Stakeholder analysis originated from business management practices and has been incorporated into natural resource management with ever-growing popularity. Stakeholder analysis in the context of natural resource management identifies distinctive interest groups affected by the utilisation and conservation of natural resources.
There is no definitive definition of a stakeholder, especially in natural resource management, where it is difficult to determine who has a stake; this will differ according to each potential stakeholder, and different approaches identify stakeholders differently.
It is therefore dependent upon the circumstances of the stakeholders involved with the natural resource as to which definition and subsequent theory is utilised.
Billgren and Holmén identified the aims of stakeholder analysis in natural resource management:
Identify and categorise the stakeholders that may have influence
Develop an understanding of why changes occur
Establish who can make changes happen
Determine how best to manage natural resources
This gives transparency and clarity to policy making, allowing stakeholders to recognise conflicts of interest and facilitate resolutions.
There are numerous stakeholder theories, such as that of Mitchell et al.; however, Grimble created a framework of stages for a stakeholder analysis in natural resource management. Grimble designed this framework to ensure that the analysis is specific to the essential aspects of natural resource management.
Stages in Stakeholder analysis:
Clarify objectives of the analysis
Place issues in a systems context
Identify decision-makers and stakeholders
Investigate stakeholder interests and agendas
Investigate patterns of inter-action and dependence (e.g. conflicts and compatibilities, trade-offs and synergies)
Application:
Grimble and Wellard established that stakeholder analysis in natural resource management is most relevant where issues can be characterised by:
Cross-cutting systems and stakeholder interests
Multiple uses and users of the resource.
Market failure
Subtractability and temporal trade-offs
Unclear or open-access property rights
Untraded products and services
Poverty and under-representation
Case studies:
In the case of the Bwindi Impenetrable National Park, a comprehensive stakeholder analysis would have been relevant, and the Batwa people would potentially have been acknowledged as stakeholders, preventing the loss of livelihoods and of life.
In Wales, Natural Resources Wales, a Welsh Government sponsored body "pursues sustainable management of natural resources" and "applies the principles of sustainable management of natural resources" as stated in the Environment (Wales) Act 2016.
NRW is responsible for more than 40 different types of regulatory regime across a wide range of activities.
Nepal's, Indonesia's and Korea's community forestry programmes are successful examples of how stakeholder analysis can be incorporated into the management of natural resources. This allowed the stakeholders to identify their needs and level of involvement with the forests.
Criticisms:
Natural resource management stakeholder analysis tends to include too many stakeholders, which can create problems in and of itself, as suggested by Clarkson: "Stakeholder theory should not be used to weave a basket big enough to hold the world's misery."
Starik proposed that nature needs to be represented as a stakeholder. However, this has been rejected by many scholars, as it would be difficult to find appropriate representation, and this representation could also be disputed by other stakeholders, causing further issues.
Stakeholder analysis can be exploited and abused in order to marginalise other stakeholders.
Identifying the relevant stakeholders for participatory processes is complex as certain stakeholder groups may have been excluded from previous decisions.
On-going conflicts and lack of trust between stakeholders can prevent compromise and resolutions.
Alternative/complementary forms of analysis:
Social network analysis
Common pool resource
Management of the resources
Natural resource management issues are inherently complex and contentious. First, they involve the ecological cycles, hydrological cycles, climate, animals, plants and geography, etc. All these are dynamic and inter-related. A change in one of them may have far reaching and/or long-term impacts which may even be irreversible. Second, in addition to the complexity of the natural systems, managers also have to consider various stakeholders and their interests, policies, politics, geographical boundaries and economic implications. It is impossible to fully satisfy all aspects at the same time. Therefore, between the scientific complexity and the diverse stakeholders, natural resource management is typically contentious.
After the United Nations Conference for the Environment and Development (UNCED) held in Rio de Janeiro in 1992, most nations subscribed to new principles for the integrated management of land, water, and forests. Although program names vary from nation to nation, all express similar aims.
The various approaches applied to natural resource management include:
Top-down (command and control)
Community-based natural resource management
Adaptive management
Precautionary approach
Integrated natural resource management
Ecosystem management
Community-based natural resource management
The community-based natural resource management (CBNRM) approach combines conservation objectives with the generation of economic benefits for rural communities. The three key assumptions are that: locals are better placed to conserve natural resources, people will conserve a resource only if benefits exceed the costs of conservation, and people will conserve a resource that is linked directly to their quality of life. When a local people's quality of life is enhanced, their efforts and commitment to ensure the future well-being of the resource are also enhanced. Regional and community based natural resource management is also based on the principle of subsidiarity.
The United Nations advocates CBNRM in the Convention on Biodiversity and the Convention to Combat Desertification. Unless clearly defined, decentralised NRM can result in an ambiguous socio-legal environment with local communities racing to exploit natural resources while they can, such as the forest communities in central Kalimantan (Indonesia).
A problem of CBNRM is the difficulty of reconciling and harmonising the objectives of socioeconomic development, biodiversity protection and sustainable resource utilisation. The concept and conflicting interests of CBNRM show how the motives behind participation are differentiated as either people-centred (active or participatory results that are truly empowering) or planner-centred (nominal, resulting in passive recipients). Understanding power relations is crucial to the success of community based NRM. Locals may be reluctant to challenge government recommendations for fear of losing promised benefits.
CBNRM is based particularly on advocacy by nongovernmental organizations working with local groups and communities, on the one hand, and national and transnational organizations, on the other, to build and extend new versions of environmental and social advocacy that link social justice and environmental management agendas, with both direct and indirect benefits observed, including a share of revenues, employment, diversification of livelihoods and increased pride and identity. Ecological and societal successes and failures of CBNRM projects have been documented. CBNRM has raised new challenges, as concepts of community, territory, conservation, and indigeneity are worked into politically varied plans and programs in disparate sites. Warner and Jones address strategies for effectively managing conflict in CBNRM.
The capacity of Indigenous communities, led by traditional custodians, to conserve natural resources has been acknowledged by the Australian Government with the Caring for Country Program. Caring for our Country is an Australian Government initiative jointly administered by the Australian Government Department of Agriculture, Fisheries and Forestry and the Department of the Environment, Water, Heritage and the Arts. These Departments share responsibility for delivery of the Australian Government's environment and sustainable agriculture programs, which have traditionally been broadly referred to under the banner of 'natural resource management'. These programs have been delivered regionally, through 56 State government bodies, successfully allowing regional communities to decide the natural resource priorities for their regions.
More broadly, a research study based in Tanzania and the Pacific investigated what motivates communities to adopt CBNRM, and found that aspects of the specific CBNRM program, of the community that has adopted the program, and of the broader social-ecological context together shape why CBNRM programs are adopted. However, overall, program adoption seemed to mirror the relative advantage of CBNRM programs to local villagers and villager access to external technical assistance. There have been socioeconomic critiques of CBNRM in Africa, but the ecological effectiveness of CBNRM, measured by wildlife population densities, has been shown repeatedly in Tanzania.
Governance is seen as a key consideration for delivering community-based or regional natural resource management. In the State of NSW, the 13 catchment management authorities (CMAs) are overseen by the Natural Resources Commission (NRC), responsible for undertaking audits of the effectiveness of regional natural resource management programs.
Criticisms of Community-Based Natural Resource Management
Though presenting a transformative approach to resource management that recognizes and involves local communities rather than displacing them, Community-Based Natural Resource Management strategies have faced scrutiny from both scholars and advocates for indigenous communities. Tania Murray Li, in her examination of CBNRM in upland Southeast Asia, discovered certain limitations associated with the strategy, primarily stemming from her observation of an idealistic perspective of the communities held by external entities implementing CBNRM programs.
Li's findings revealed that, in the uplands, CBNRM as a legal strategy imposed constraints on the communities. One significant limitation was the necessity for communities to fulfill discriminatory and enforceable prerequisites in order to obtain legal entitlements to resources. Li contends that such legal practices, grounded in specific distinguishing identities or practices, pose a risk of perpetuating and strengthening discriminatory norms in the region.
Furthermore, adopting a Marxist perspective centered on class struggle, some have criticized CBNRM as an empowerment tool, asserting that its focus on state-community alliances may limit its effectiveness, particularly for communities facing challenges from "vicious states," thereby restricting the empowerment potential of the programs.
Gender-based natural resource management
Social capital and gender are factors that impact community-based natural resource management (CBNRM), including conservation strategies and collaborations between community members and staff. Through three months of participant observation in a fishing camp in San Evaristo, Mexico, Ben Siegelman learned that the fishermen build trust through jokes and fabrications. He emphasizes social capital as a process because it is built and accumulated through the practice of intricate social norms. Siegelman notes that playful joking is connected to masculinity and often excludes women. He stresses that both gender and social capital are performed.
Furthermore, in San Evaristo, the gendered network of fishermen is simultaneously a social network. Nearly all fishermen in San Evaristo are men, and most families have lived there for generations. Men form intimate relationships by spending 14-hour work days together, while women spend time with the family managing domestic caretaking. Siegelman observes three categories of lies amongst the fishermen: exaggerations, deceptions, and jokes. For example, a fisherman may exaggerate his success fishing at a particular spot to mislead friends, place his hand on the scale to turn a larger profit, or make a sexual joke to earn respect. As Siegelman puts it, "lies build trust." In one instance, the researcher replied jokingly "in the sea" when a fisherman asked where the others were fishing that day; this vague response earned him trust.
Siegelman saw that this division of labor was reproduced, at least in part, because the culture of lying and trust was a masculine activity unique to the fishermen. Just as the culture of lying excluded women from the social sphere of fishing, conservationists were also excluded from this social arrangement and thus were not able to obtain the trust needed to do their work of regulating fishing practices. As outsiders, conservationists, even male conservationists, were not able to fit the ideal of masculinity that the fishermen considered "trustable" enough to convince them to implement or participate in conservation practices. Women are excluded from this form of social capital because many of the jokes center around "masculine exploits". Siegelman finishes by asking: how can female conservationists act when they are excluded through social capital? And what role should men play in this situation?
Adaptive Management
The primary methodological approach adopted by catchment management authorities (CMAs) for regional natural resource management in Australia is adaptive management.
This approach includes recognition that adaptation occurs through a process of 'plan-do-review-act'. It also recognises seven key components that should be considered for quality natural resource management practice:
Determination of scale
Collection and use of knowledge
Information management
Monitoring and evaluation
Risk management
Community engagement
Opportunities for collaboration.
Integrated natural resource management
Integrated natural resource management (INRM) is the process of managing natural resources in a systematic way, which includes multiple aspects of natural resource use (biophysical, socio-political, and economic) to meet the production goals of producers and other direct users (e.g., food security, profitability, risk aversion) as well as the goals of the wider community (e.g., poverty alleviation, welfare of future generations, environmental conservation). It focuses on sustainability and at the same time tries to incorporate all possible stakeholders from the planning level itself, reducing possible future conflicts. The conceptual basis of INRM has evolved in recent years through the convergence of research in diverse areas such as sustainable land use, participatory planning, integrated watershed management, and adaptive management. INRM is used extensively and has been successful in regional and community based natural resource management.
Frameworks and modelling
There are various frameworks and computer models developed to assist natural resource management.
Geographic Information Systems (GIS)
GIS is a powerful analytical tool, as it is capable of overlaying datasets to identify links. A bush regeneration scheme, for example, can be informed by overlaying rainfall, cleared land and erosion data. In Australia, metadata directories such as NDAR provide data on Australian natural resources such as vegetation, fisheries, soils and water. These are limited by the potential for subjective input and data manipulation.
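The overlay idea can be illustrated without a full GIS stack. The following sketch uses NumPy boolean rasters on a hypothetical common grid; a real analysis would use dedicated GIS software and georeferenced data:

```python
import numpy as np

# Hypothetical boolean rasters on a common grid (True = condition present)
rainfall_ok  = np.array([[1, 1, 0], [1, 0, 1]], dtype=bool)  # adequate rainfall
cleared_land = np.array([[1, 0, 0], [1, 1, 1]], dtype=bool)  # previously cleared
low_erosion  = np.array([[1, 1, 1], [0, 1, 1]], dtype=bool)  # low erosion risk

# Overlay: cells where all three conditions hold are candidates for
# bush regeneration.
suitable = rainfall_ok & cleared_land & low_erosion
print(suitable)
# [[ True False False]
#  [False False  True]]
```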
Natural Resources Management Audit Frameworks
The NSW Government in Australia has published an audit framework for natural resource management, to assist the establishment of a performance audit role in the governance of regional natural resource management. This audit framework builds from other established audit methodologies, including performance audit, environmental audit and internal audit. Audits undertaken using this framework have provided confidence to stakeholders, identified areas for improvement and described policy expectations for the general public.
The Australian Government has established a framework for auditing greenhouse emissions and energy reporting, which closely follows Australian Standards for Assurance Engagements.
The Australian Government is also currently preparing an audit framework for auditing water management, focussing on the implementation of the Murray Darling Basin Plan.
Other elements
Biodiversity Conservation
The issue of biodiversity conservation is regarded as an important element in natural resource management. Biodiversity is a comprehensive concept describing the extent of natural diversity. Gaston and Spicer (p. 3) point out that biodiversity is "the variety of life" and relate it to different kinds of "biodiversity organization". According to Gray (p. 154), the first widespread use of a definition of biodiversity was put forward by the United Nations in 1992, involving different aspects of biological diversity.
Precautionary Biodiversity Management
The "threats" wreaking havoc on biodiversity include; habitat fragmentation, putting a strain on the already stretched biological resources; forest deterioration and deforestation; the invasion of "alien species" and "climate change"( p. 2). Since these threats have received increasing attention from environmentalists and the public, the precautionary management of biodiversity becomes an important part of natural resources management. According to Cooney, there are material measures to carry out precautionary management of biodiversity in natural resource management.
Concrete "policy tools"
Cooney claims that policy making depends on "evidence", relating to a "high standard of proof", the prohibition of particular "activities" and "information and monitoring requirements". Before a precautionary policy is made, categorical evidence is needed. When the potential threat of an "activity" is regarded as a critical and "irreversible" danger, the "activity" should be forbidden. For example, since explosives and toxicants have serious consequences that endanger humans and the natural environment, South Africa's Marine Living Resources Act promulgated a series of policies completely forbidding fishing with explosives and toxicants.
Administration and guidelines
According to Cooney, there are four methods for the precautionary management of biodiversity in natural resources management:
"Ecosystem-based management" including "more risk-averse and precautionary management", where "given prevailing uncertainty regarding ecosystem structure, function, and inter-specific interactions, precaution demands an ecosystem rather than single-species approach to management".
"Adaptive management" is "a management approach that expressly tackles the uncertainty and dynamism of complex systems".
"Environmental impact assessment" and exposure ratings decrease the "uncertainties" of precaution, even though it has deficiencies, and
"Protectionist approaches", which "most frequently links to" biodiversity conservation in natural resources management.
Land management
In order to have a sustainable environment, understanding and using appropriate management strategies is important. In terms of understanding, Young emphasises some important points of land management:
Comprehending the processes of nature including ecosystem, water, soils
Using appropriate and adapting management systems in local situations
Cooperation between scientists who have knowledge and resources and local people who have knowledge and skills
A study by Dale et al. (2000) showed that there are five fundamental and helpful ecological principles for land managers and the people who need them. The ecological principles relate to time, place, species, disturbance and the landscape, and they interact in many ways. It is suggested that land managers follow these guidelines:
Examine impacts of local decisions in a regional context, and the effects on natural resources.
Plan for long-term change and unexpected events.
Preserve rare landscape elements and associated species.
Avoid land uses that deplete natural resources.
Retain large contiguous or connected areas that contain critical habitats.
Minimize the introduction and spread of non-native species.
Avoid or compensate for the effects of development on ecological processes.
Implement land-use and land-management practices that are compatible with the natural potential of the area.
See also
References
Environmental social science
Sustainable development
Environmental planning
Ecological pyramid
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (such as the pyramid of biomass for marine regions) or take other shapes (a spindle-shaped pyramid).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
The energy content of biomass can be measured with a bomb calorimeter.
Pyramid of Energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. This allows organisms on the lower levels to not only maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalization is when portions of a food web are supported by inputs of resources from outside the local community. In small, forested streams, for example, the biomass of higher trophic levels is greater than could be supported by the local primary production.
Energy usually enters ecosystems from the Sun. The primary producers at the base of the pyramid use solar radiation to power photosynthesis, which produces food. However, most wavelengths in solar radiation cannot be used for photosynthesis, so they are reflected back into space or absorbed elsewhere and converted to heat. Only 1 to 2 percent of the energy from the sun is absorbed by photosynthetic processes and converted into food. When energy is transferred to higher trophic levels, on average only about 10% is retained at each level as new biomass, becoming stored energy. The rest is used for metabolic processes such as respiration and reproduction, or lost as heat.
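A worked illustration of the roughly 10% transfer rule described above, in Python; the starting figure and the fixed efficiency are simplifying assumptions (transfer efficiencies in real ecosystems vary):

```python
# Energy flow up a four-level food chain under an assumed 10% transfer
# efficiency; the producer-level figure is arbitrary.
levels = ["producers", "herbivores", "carnivores", "top carnivores"]
energy_kcal = 10_000.0  # assumed net production at the base (kcal/m^2/yr)
efficiency = 0.10

for level in levels:
    print(f"{level:>14}: {energy_kcal:8.1f} kcal/m^2/yr")
    energy_kcal *= efficiency  # ~90% lost to metabolism and heat at each step
```

Printed out, the four levels hold 10,000, 1,000, 100 and 10 kcal/m²/yr respectively, which is why a pyramid of energy is always upright.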
Advantages of the pyramid of energy as a representation:
It takes account of the rate of production over a period of time.
Two species of comparable biomass may have very different life spans. Thus, a direct comparison of their total biomasses is misleading, but their productivity is directly comparable.
The relative energy chain within an ecosystem can be compared using pyramids of energy; also different ecosystems can be compared.
There are no inverted pyramids.
The input of solar energy can be added.
Disadvantages of the pyramid of energy as a representation:
The rate of biomass production of an organism is required, which involves measuring growth and reproduction through time.
There is still the difficulty of assigning the organisms to a specific trophic level. As well as the organisms in the food chains there is the problem of assigning the decomposers and detritivores to a particular level.
Pyramid of biomass
A pyramid of biomass shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. It is a graphical representation of biomass (total amount of living or organic matter in an ecosystem) present in unit area in different trophic levels. Typical units are grams per square meter, or calories per square meter.
The pyramid of biomass may be "inverted". For example, in a pond ecosystem, the standing crop of phytoplankton, the major producers, at any given point will be lower than the mass of the heterotrophs, such as fish and insects. This is because the phytoplankton reproduce very quickly but have much shorter individual lives.
Pyramid of Numbers
A pyramid of numbers shows graphically the population, or abundance, in terms of the number of individual organisms involved at each level in a food chain. This shows the number of organisms in each trophic level without any consideration for their individual sizes or biomass. The pyramid is not necessarily upright. For example, it will be inverted if many beetles feed on the output of a single forest tree, or if parasites feed on a large host animal.
History
The concept of a pyramid of numbers ("Eltonian pyramid") was developed by Charles Elton (1927). Later, it would also be expressed in terms of biomass by Bodenheimer (1938). The idea of the pyramid of productivity or energy relies on the works of G. Evelyn Hutchinson and Raymond Lindeman (1942).
See also
Trophic cascade
References
Bibliography
Odum, E.P. 1971. Fundamentals of Ecology. Third Edition. W.B. Saunders Company, Philadelphia.
External links
Food Chains
Ecology
Food chains
Sustainable Development Goal 7
Sustainable Development Goal 7 (SDG 7 or Global Goal 7) is one of 17 Sustainable Development Goals established by the United Nations General Assembly in 2015. It aims to "Ensure access to affordable, reliable, sustainable and modern energy for all." Access to energy is an important pillar for the wellbeing of the people as well as for economic development and poverty alleviation.
The goal has five targets to be achieved by 2030. Progress towards the targets is measured by six indicators. Three out of the five targets are outcome targets: Universal access to modern energy; increase global percentage of renewable energy; double the improvement in energy efficiency. The remaining two targets are means of implementation targets: to promote access to research, technology and investments in clean energy; and expand and upgrade energy services for developing countries. In other words, these targets include access to affordable and reliable energy while increasing the share of renewable energy in the global energy mix. They also focus on improving energy efficiency, international cooperation and investment in clean energy infrastructure.
According to a review report in 2019, some progress towards achieving SDG 7 is being made, but many of the targets of SDG 7 will not be met. SDG 7 and SDG 13 (climate action) are closely related.
Problem description
SDG 7 is tackling the problem of the high number of people globally who live without access to electricity or clean cooking solutions (0.8 billion and 2.4 billion people, respectively, in 2020). Energy is needed for many activities, for example jobs and transport, food security, health and education.
People that are hard to reach with electricity and clean cooking solutions include those who live in remote areas or are internally displaced people, or those who live in urban slums or marginalized communities.
Targets, indicators and progress
SDG 7 has five targets, measured with six indicators, which are to be achieved by 2030. Three out of the five targets are "outcome targets", and two are "means of achieving targets".
Target 7.1: Universal access to modern energy
The first target of SDG 7 is Target 7.1: "By 2030, ensure universal access to affordable, reliable and modern energy services".
This target has two indicators:
Indicator 7.1.1: Proportion of population with access to electricity
Indicator 7.1.2: Proportion of population with primary reliance on clean fuels and technology.
A report from 2019 found that India, Bangladesh, and Kenya had made good progress with supplying more of their people with electricity. Globally, there are now (2020) 800 million people still without electricity, compared with 1.2 billion people in 2010.
There are several options to tackle this problem, for example private sector financing and ensuring that rural areas get access to electricity. This may involve decentralized renewable energy.
Women are disproportionately affected by indoor air pollution caused by the use of fuels such as coal and wood indoors. Reasons for not changing over to clean cooking solutions can include higher fuel costs and the need to change cooking processes.
Target 7.2: Increase global percentage of renewable energy
The second target of SDG 7 is Target 7.2: "By 2030, increase substantially the share of renewable energy in the global energy mix."
It has only one indicator: Indicator 7.2.1 is the "Renewable energy share in the total final energy consumption".
Data from 2016 showed that the share of renewable energy compared to total energy consumption was 17.5%.
Target 7.3: Double the improvement in energy efficiency
The third target of SDG 7 is Target 7.3: "By 2030, double the global rate of improvement in energy efficiency".
It has one indicator: Indicator 7.3.1 is the "Energy intensity measured in terms of primary energy and GDP".
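Both headline indicators (7.2.1 and 7.3.1) are simple ratios, so they can be computed directly once the underlying energy statistics are known. A minimal sketch with made-up national figures; the energy and GDP values below are illustrative assumptions, not official statistics:

```python
# Indicator 7.2.1: renewable share of total final energy consumption.
renewable_final_ej = 35.0    # assumed renewable final energy, exajoules
total_final_ej = 200.0       # assumed total final energy consumption
share = renewable_final_ej / total_final_ej
print(f"Indicator 7.2.1: {share:.1%}")             # -> 17.5%

# Indicator 7.3.1: energy intensity = primary energy per unit of GDP.
primary_energy_mj = 5.0e12   # assumed total primary energy supply, megajoules
gdp_usd = 1.0e12             # assumed GDP
intensity = primary_energy_mj / gdp_usd
print(f"Indicator 7.3.1: {intensity:.2f} MJ/USD")  # -> 5.00 MJ/USD
```

Doubling the rate of improvement (target 7.3) means the intensity figure must fall roughly twice as fast each year as it did over the baseline period.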
In general, energy efficiency has been going up in recent years, in particular in China. Governments can help with this process for example by providing suitable financial incentives and by helping people access information about energy efficiency.
Target 7.a: Promote access to research, technology and investments in clean energy
The fourth target of SDG 7 is Target 7.a: "By 2030, enhance international cooperation to facilitate access to clean energy research and technology, including renewable energy, energy efficiency and advanced and cleaner fossil-fuel technology, and promote investment in energy infrastructure and clean energy technology".
It has one indicator: Indicator 7.a.1 is the "International financial flows to developing countries in support of clean energy research and development and renewable energy production, including in hybrid systems".
International financing for renewable energy flowing to developing countries doubled between 2010 and 2017. In 2017 the largest share of this financing (nearly half) went to hydropower, and nearly 20% went to solar power projects.
More investments are needed for global energy access, namely for electrification and clean cooking. A report in 2021 stated that "the financing community is failing to deliver on SDG7".
Target 7.b: Expand and upgrade energy services for developing countries
The fifth target of SDG 7 is formulated as: "Target 7.b: By 2030, expand infrastructure and upgrade technology for supplying modern and sustainable energy services for all in developing countries, in particular least developed countries, small island developing States, and land-locked developing countries, in accordance with their respective programs of support."
It has one indicator. The indicator originally measured "Investments in energy efficiency as a proportion of GDP and the amount of foreign direct investment in financial transfer for infrastructure and technology to sustainable development services", but it has since been changed. The current indicator is 7.b.1: "Installed renewable energy-generating capacity in developing and developed countries (in watts per capita)".
As of August 2020, there is no data available for this indicator.
It was reported in 2020 that Indicator 7.b.1 might be removed, as it is identical to indicator 12.1.1 of SDG 12.
Custodian agencies
Custodian agencies are in charge of reporting on the following indicators:
Indicators 7.1.1 and 7.1.2: World Bank (WB) and World Health Organization (WHO).
Indicator 7.2.1: Department of Economic and Social Affairs-Statistics Division (DESA/UNDP), International Energy Agency (IEA) and International Renewable Energy Agency (IRENA).
Indicator 7.3.1: Department of Economic and Social Affairs-Statistics Division (DESA/UNDP) and International Energy Agency (IEA).
Indicator 7.a.1: Organization for Economic Cooperation and Development (OECD) and International Renewable Energy Agency (IRENA).
Indicator 7.b.1: International Energy Agency (IEA).
Overall progress and monitoring
The UN High-Level Political Forum on Sustainable Development (HLPF) meets every year for global monitoring of the SDGs, under the auspices of the United Nations Economic and Social Council. High-level progress reports for all the SDGs are published by the United Nations Secretary-General.
In 2022, renewable energy-generating capacity in developing countries had increased by 58% in per-capita terms. However, international financial flows to developing countries in support of renewable energy were 24% lower than in 2018. Despite progress in 2019 and 2020, recent global events such as the Russian invasion of Ukraine have stalled or reversed global progress on renewable energy and the decarbonization transition.
Despite progress, the world was not on track in 2022 to achieve SDG 7. Progress has been slowed by the world entering its third year of COVID-19, by an unusually high number of violent conflicts, and by the Russian invasion of Ukraine, which created one of the largest refugee crises in recent history. There are still over 700 million people without access to electricity and about 2.4 billion cooking with harmful fuels that also pollute the environment. Greater efforts are needed to scale up the use of renewable energy and improve energy efficiency more quickly. These events have had catastrophic effects on the livelihoods of many people; although the global economy began to rebound in 2021, this chain of events has slowed the global economy and progress towards SDG 7 and the other SDGs.
According to the 2020 SDG report, affordable and reliable energy is now needed more than ever, especially after the COVID-19 pandemic, to supply hospitals and health facilities and to give students learning remotely access to energy. Access to electricity has improved strongly in Asia and Latin America, so an increasing share of the people without access live in Sub-Saharan Africa. It is estimated that around 620 million people would still lack access to electricity in 2030 if the world continues to move at the current pace.
Challenges
In 2020, it was reported that many health facilities in developing countries (about 25%) still have no electricity at all or suffer frequent outages. This was particularly problematic during the COVID-19 pandemic. Even during the crisis, progress has been seen in some aspects of SDG 7, such as improvements in energy efficiency, the use of renewable energy, and access to electricity.
Links with other SDGs
The SDGs are all interlinked. Energy (or SDG 7) is key to most global issues: this includes poverty eradication (SDG 1), gender equality (SDG 5), climate action (SDG 13), food security (SDG 2), health (SDG 3), education (SDG 4), sustainable cities (SDG 11), jobs (SDG 8) and transport (SDG 9).
SDG 7 and SDG 13 (climate action) are closely related.
Access to energy is directly related to human development. This is particularly true for women, who spend more of their time collecting fuel and water, and preparing meals. Access to energy would allow them to spend more time on education and work.
According to UN Women, energy interventions that take women's needs into account have a significant impact on addressing gender equality and community energy poverty, while also ensuring the equal participation of women in energy interventions, which in turn benefits society at large.
Organizations
There are five custodian agencies for SDG 7 which together published the 2020 Energy Progress Report:
International Energy Agency (IEA)
International Renewable Energy Agency (IRENA)
United Nations Statistics Division (UNSD)
World Bank (WB)
World Health Organization (WHO)
See also
Sustainable Energy for All
References
External links
Sustainable Development Knowledge Platform (SDG 7)
Energypedia - collaborative knowledge exchange on various energy topics in developing countries
Understanding SDG 7
“Global Goals” Campaign - SDG 7
SDG-Track.org - SDG 7
UN SDG 7 in the US
United Nations General Assembly
Renewable energy policy
Sustainable Development Goals
United Nations documents
Sustainable energy
2015 establishments in New York City
Projects established in 2015
Environmental technology
Environmental technology (envirotech) is the use of engineering and technological approaches to understand and address issues that affect the environment with the aim of fostering environmental improvement. It involves the application of science and technology in the process of addressing environmental challenges through environmental conservation and the mitigation of human impact to the environment.
The term is sometimes also used to describe sustainable energy generation technologies such as photovoltaics, wind turbines, etc.
Purification and waste management
Water purification
Air purification
Air purification describes the processes used to remove contaminants and pollutants from the air to reduce the potential adverse effects on humans and the environment. The process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation.
Sewage treatment
Environmental remediation
Environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. The main focus is the reduction of hazardous substances within the environment. Areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, and oil, gas and chemical spills. The three most common types of environmental remediation are soil, water, and sediment remediation.
Soil remediation consists of removing contaminants from soil, as these pose great risks to humans and the ecosystem. Examples include heavy metals, pesticides, and radioactive materials. Depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological.
Water remediation is one of the most important forms, as water is an essential natural resource. Depending on the source of the water, the contaminants will differ. Surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. The need for water remediation has risen due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. The market for water remediation is expected to grow steadily, reaching $19.6 billion by 2030.
Sediment remediation consists of removing contaminated sediments. It is similar to soil remediation, except that it is often more sophisticated, as it involves additional contaminants. Physical, chemical, and biological processes are typically used to reduce the contaminants and help with source control, but if these processes are executed incorrectly, there is a risk of contamination resurfacing.
Solid waste management
Solid waste management is the purification, reuse, disposal, and treatment of solid waste, typically undertaken by the government or the ruling bodies of a city or town. It refers to the collection, treatment, and disposal of non-soluble, solid waste material. Solid waste is associated with industrial, institutional, commercial and residential activities. Hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. Some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. However, a major barrier to solid waste management practices is the high cost associated with recycling and the risk of creating more pollution.
E-Waste Recycling
The recycling of electronic waste (e-waste) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals. Traditional e-waste recycling methods, which often involve manual disassembly, expose workers to hazardous materials and are labor-intensive. Recent innovations have introduced automated processes that improve safety and efficiency, allowing for more precise separation and recovery of valuable materials.
Modern e-waste recycling techniques now leverage automated shredding and advanced sorting technologies, which help in effectively segregating different types of materials for recycling. This not only enhances the recovery rate of precious metals but also minimizes the environmental impact by reducing the amount of waste destined for landfills. Furthermore, research into biodegradable electronics aims to reduce future e-waste through the development of electronics that can decompose more naturally in the environment.
These advancements support a shift towards a circular economy, where the lifecycle of materials is extended, and environmental impacts are significantly minimized.
Bioremediation
Bioremediation is a process that uses microorganisms such as bacteria, fungi, plant enzymes, and yeast to neutralize hazardous contaminants in the environment. It can help mitigate a variety of environmental hazards, including oil spills, pesticides, heavy metals, and other pollutants. Bioremediation can be conducted either on-site ('in situ') or off-site ('ex situ'), the latter often being necessary if the climate is too cold. Factors influencing the duration of bioremediation include the extent of the contamination and environmental conditions, with timelines that can range from months to years.
Examples
Biofiltration
Bioreactor
Bioremediation
Composting toilet
Desalination
Thermal depolymerization
Pyrolysis
Sustainable energy
Concerns over pollution and greenhouse gases have spurred the search for sustainable alternatives to fossil fuel use. The global reduction of greenhouse gases requires the adoption of energy conservation as well as sustainable generation. Reducing that environmental harm involves global changes such as:
substantially reducing methane emissions from melting permafrost, animal husbandry, and pipeline and wellhead leakage.
virtually eliminating fossil fuels for vehicles, heat, and electricity.
carbon dioxide capture and sequestration at point of combustion.
widespread use of public transport, battery, and fuel cell vehicles
extensive implementation of wind/solar/water generated electricity
reducing peak demands with carbon taxes and time-of-use pricing.
Since fuel used by industry and transportation accounts for the majority of world demand, investing in conservation and efficiency (using less fuel) can reduce pollution and greenhouse gases from these two sectors around the globe. Advanced energy-efficient electric motor (and electric generator) technologies that are cost-effective enough to encourage their application, such as variable-speed generators and efficient energy use, can reduce the amount of carbon dioxide (CO2) and sulfur dioxide (SO2) that would otherwise be introduced to the atmosphere if the electricity were generated using fossil fuels. Some scholars have expressed concern that the implementation of new environmental technologies in highly developed national economies may cause economic and social disruption in less-developed economies.
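As a back-of-the-envelope illustration of the conservation argument, avoided emissions can be estimated as energy saved multiplied by the grid's emission factor; the consumption figures and the emission factor below are assumptions chosen only for the example:

```python
# Avoided CO2 from an efficiency upgrade, under assumed figures.
old_kwh_per_year = 120_000      # assumed consumption of an inefficient motor
new_kwh_per_year = 95_000       # assumed consumption after the upgrade
grid_kg_co2_per_kwh = 0.7       # assumed emission factor of a fossil-heavy grid

saved_kwh = old_kwh_per_year - new_kwh_per_year
avoided_tonnes = saved_kwh * grid_kg_co2_per_kwh / 1000
print(f"{saved_kwh} kWh saved -> ~{avoided_tonnes:.1f} t CO2 avoided per year")
```

The same arithmetic scales to whole sectors, which is why efficiency gains in industry and transport weigh so heavily in mitigation planning.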
Renewable energy
Renewable energy is energy that can be replenished naturally. Sources such as sunlight, wind, water and wood have long been used to produce energy. Technologies in use include wind power, hydropower, solar energy, geothermal energy, and biomass/bioenergy. Renewable energy refers to any form of energy that naturally regenerates over time and does not run out, and it is characterized by a low carbon footprint. Some of the most common types of renewable energy sources include solar power, wind power, hydroelectric power, and bioenergy, which is generated by burning organic matter.
Examples
Energy saving modules
Heat pump
Hydrogen fuel cell
Hydroelectricity
Ocean thermal energy conversion
Photovoltaic
Solar power
Wave energy
Wind power
Wind turbine
Renewable Energy Innovations
The intersection of technology and sustainability has led to innovative solutions aimed at enhancing the efficiency of renewable energy systems. One such innovation is the integration of wind and solar power to maximize energy production. Companies like Unéole are pioneering technologies that combine solar panels with wind turbines on the same platform, which is particularly advantageous for urban environments with limited space. This hybrid system not only conserves space but also increases the energy yield by leveraging the complementary nature of solar and wind energy availability.
Furthermore, advancements in offshore wind technology have significantly increased the viability and efficiency of wind energy. Modern offshore wind turbines feature improvements in structural design and aerodynamics, which enhance their energy capture and reduce costs. These turbines are now more adaptable to various marine environments, allowing for greater flexibility in location and potentially reducing visual pollution. Floating wind turbines, for example, use tension leg platforms and spar buoys that can be deployed in deeper waters, significantly expanding the potential areas for wind energy generation.
Such innovations not only advance the capabilities of individual renewable technologies but also contribute to a more resilient and sustainable energy grid. By optimizing the integration and efficiency of renewable resources, these technologies play a crucial role in the transition towards a sustainable energy future.
Energy conservation
Energy conservation is the utilization of devices that require smaller amounts of energy in order to reduce the consumption of electricity. Reducing the use of electricity causes fewer fossil fuels to be burned to provide that electricity. It also refers to the practice of using less energy through changes in individual behaviors and habits. The main emphasis of energy conservation is preventing the wasteful use of energy in order to enhance its availability. One of the main approaches is refraining from using devices that consume more energy, where possible.
eGain forecasting
Egain forecasting is a method that uses forecasting technology to predict the future weather's impact on a building. By adjusting the heating based on the weather forecast, the system eliminates redundant use of heat, thus reducing energy consumption and the emission of greenhouse gases. The technology was introduced by eGain International, a Swedish company that intelligently balances building power consumption. It involves forecasting the amount of heating energy required by a building within a specific period, which results in energy efficiency and sustainability. eGain lowers building energy consumption and emissions while determining the time for maintenance where inefficiencies are observed.
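A minimal sketch of the forecast-driven idea follows; the balance temperature, heat-loss coefficient and forecast values are hypothetical placeholders, since eGain's actual models are not public:

```python
# Forecast-driven heating: size heat delivery to the coming hours' weather
# rather than reacting to the current outdoor temperature.
balance_temp_c = 17.0          # hypothetical temp above which no heat is needed
heat_loss_kw_per_degree = 2.5  # hypothetical building heat-loss coefficient
forecast_c = [4.0, 6.5, 9.0, 12.5, 16.0, 18.0]  # hypothetical hourly forecast

for hour, outdoor_c in enumerate(forecast_c):
    demand_kw = max(0.0, (balance_temp_c - outdoor_c) * heat_loss_kw_per_degree)
    print(f"hour {hour}: outdoor {outdoor_c:5.1f} C -> deliver {demand_kw:5.1f} kW")
```

Because the schedule anticipates the warm afternoon, heating tapers off before the building would otherwise be overheated, which is where the energy saving comes from.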
Solar Power
Computational sustainability
Sustainable Agriculture
Sustainable agriculture is an approach to farming that utilizes technology in a way that secures food production, while ensuring the long-term health and productivity of agricultural systems, ecosystems, and communities. Historically, technological advancements have significantly contributed to increasing agricultural productivity and reducing physical labor.
The National Institute of Food and Agriculture (NIFA) advances sustainable agriculture through funded programs aimed at fulfilling human food and fiber needs, improving environmental quality, preserving natural resources vital to the agricultural economy, optimizing the use of both nonrenewable and on-farm resources while integrating natural biological cycles and controls as appropriate, maintaining the economic viability of farm operations, and fostering an improved quality of life for farmers and society at large. Among its initiatives, the NIFA works to improve farm and ranch practices, integrated pest management, rotational grazing, soil conservation, water quality/wetlands, cover crops, crop/landscape diversity, nutrient management, agroforestry, and alternative marketing.
Education
Courses aimed at developing graduates with specific skills in environmental systems or environmental technology are becoming more common and fall into three broad classes:
Environmental Engineering or Environmental Systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment;
Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects (good and bad) of chemicals in the environment. Such awards can focus on mining processes, pollutants and commonly also cover biochemical processes;
Environmental technology courses oriented towards producing electronic, electrical or electrotechnology graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources, and developing novel energy generation technologies.
See also
Appropriate technology
Bright green environmentalism
Eco-innovation
Ecological modernization
Ecosia
Ecotechnology
Environmentally friendly
Green development
Groasis Waterboxx
Ice house (building)
Information and communication technologies for environmental sustainability
Pulser Pump
Smog tower
Sustainable design
Sustainable energy
Sustainable engineering
Sustainable living
Sustainable technologies
Technology for sustainable development
The All-Earth Ecobot Challenge
Windcatcher
WIPO GREEN
References
Further reading
External links
Bright green environmentalism
Doughnut (economic model)
The Doughnut, or Doughnut economics, is a visual framework for sustainable development – shaped like a doughnut or lifebelt – combining the concept of planetary boundaries with the complementary concept of social boundaries. The name derives from the shape of the diagram, i.e. a disc with a hole in the middle. The centre hole of the model depicts the proportion of people that lack access to life's essentials (healthcare, education, equity and so on) while the crust represents the ecological ceilings (planetary boundaries) that life depends on and must not be overshot. The diagram was developed by University of Oxford economist Kate Raworth in her 2012 Oxfam paper A Safe and Just Space for Humanity and elaborated upon in her 2017 book Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist and paper.
The framework was proposed to regard the performance of an economy by the extent to which the needs of people are met without overshooting Earth's ecological ceiling. The main goal of the new model is to re-frame economic problems and set new goals. In this context, the model is also referred to as a "wake-up call to transform our capitalist worldview". In this model, an economy is considered prosperous when all twelve social foundations are met without overshooting any of the nine ecological ceilings. This situation is represented by the area between the two rings, considered by its creator as a safe and just space for humanity.
Kate Raworth noted the planetary boundaries concept does not take human wellbeing into account (although, if Earth's ecosystem dies then all wellbeing is moot). She suggested social boundaries should be combined with the planetary boundaries structure. Adding measures such as jobs, education, food, access to water, health services and energy helps to accommodate an environmentally safe space compatible with poverty eradication and "rights for all". Within planetary limits and an equitable social foundation lies a doughnut-shaped area which is the area where there is a "safe and just space for humanity to thrive in".
Indicators
Social foundations
The social foundations are inspired by the social aims of the Sustainable Development Goals of the United Nations. These are:
Food security
Health
Education
Income and work (the latter is not limited to compensated employment but also includes things such as housekeeping)
Peace and justice
Political voice
Social equity
Gender equality
Housing
Networks (the latter includes both networks of communities, but also networks of information like the internet)
Energy
Water
Ecological ceilings
The nine ecological ceilings are from the planetary boundaries put forward by a group of Earth-system scientists led by Johan Rockström and Will Steffen. These are:
Climate change — the human-caused emissions of greenhouse gases such as carbon dioxide and methane trap heat in the atmosphere, changing the Earth's climate.
Ocean acidification — when human-emitted carbon dioxide is absorbed into the oceans, it makes the water more acidic. For example, this lowers the ability of marine life to grow skeletons and shells.
Chemical pollution — releasing toxic materials into nature decreases biodiversity and lowers the fertility of animals (including humans).
Nitrogen and phosphorus loading — inefficient or excessive use of fertilizer leads to fertilizer running off into water bodies, where it causes algal blooms that kill underwater life.
Freshwater withdrawals — using too much freshwater dries up the source which may damage the ecosystem and be unusable after.
Land conversion — converting land for economic activity (such as creating roads and farmland) damages or removes the habitat for wildlife, removes carbon sinks and disrupts natural cycles.
Biodiversity loss — economic activity may cause a reduction in the number and variety of species. This makes ecosystems more vulnerable and may lower their capacity of sustaining life and providing ecosystem services.
Air pollution — the emission of aerosols (small particles) has a negative impact on the health of species. It can also affect precipitation and cloud formation.
Ozone layer depletion — some economic activity emits gases that damage the Earth's ozone layer. Because the ozone layer shields Earth from harmful radiation, its depletion results for example in skin cancer in animals.
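The doughnut's qualitative test, every social foundation met and no ecological ceiling overshot, can be sketched as a simple check. The dimensions and normalised scores below are illustrative stand-ins, not Raworth's quantified boundaries:

```python
# Illustrative doughnut check with normalised scores, where 1.0 marks the
# boundary: social scores below 1.0 are shortfalls, ecological loads above
# 1.0 are overshoots.
social = {"food security": 1.1, "education": 0.8, "energy": 1.0}
ecological = {"climate change": 1.4, "freshwater withdrawals": 0.6,
              "land conversion": 0.9}

shortfalls = [name for name, score in social.items() if score < 1.0]
overshoots = [name for name, load in ecological.items() if load > 1.0]

if not shortfalls and not overshoots:
    print("Inside the doughnut: a safe and just space.")
else:
    print("Shortfalls:", shortfalls or "none")
    print("Overshoots:", overshoots or "none")
```

With these made-up scores the economy falls short on education and overshoots the climate boundary, so it sits outside the doughnut on both sides.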
Critique of mainstream economic theory
The doughnut model is still a collection of goals that may be pursued through different actions by different actors and does not include specific models related to markets or human behavior. The book Doughnut Economics consists of critiques and perspectives of what should be sought after by society as a whole. The critiques found in the book are targeted at certain economic models and their common base.
The mainstream economic models of the 20th century, defined here as those taught the most in Economics introductory courses around the world, are neoclassical. The Circular Flow published by Paul Samuelson in 1944 and the supply and demand curves published by William S. Jevons in 1862 are canonical examples of neoclassical economic models. Focused on the observable money flows in a given administrative unit and describing preferences mathematically, these models ignore the environments in which these objects are embedded: human minds, society, culture, and the natural environment. This omission was viable while the human population did not collectively overwhelm the Earth's systems, which is no longer the case. Furthermore, these models were created before statistical testing and research were possible. They were based, then, on assumptions about human behavior converted into "stylized facts". The origins of these assumptions are philosophical and pragmatic, simplifying and distorting the reflections of thinkers such as Adam Smith into Newtonian-resembling curves on a graph so that they could be of presumed practical use in predicting, for example, consumer choice.
The body of neoclassical economic theory grew and became more sophisticated over time, and competed with other theories to be the mainstream economic paradigm of the North Atlantic. In the 1930s, Keynesian theory held that position, and after the 1960s, monetarism gained prominence. One element remained as the policy prescriptions shifted: the "rational economic man" persona on which theories were based. Raworth, the creator of Doughnut Economics, denounces this literary invention as a perverse one, for its effects on its learners' assumptions about human behavior and, consequently, their own real behavior. Examples of this phenomenon in action have been documented, as have the effects of the erosion of trust and community on human well-being.
Real-world economies in the Doughnut perspective
Kate Raworth explains the doughnut economy is based on the premise that "Humanity's 21st century challenge is to meet the needs of all within the means of the planet. In other words, to ensure that no one falls short on life's essentials (from food and housing to healthcare and political voice), while ensuring that collectively we do not overshoot our pressure on Earth's life-supporting systems, on which we fundamentally depend – such as a stable climate, fertile soils, and a protective ozone layer. The Doughnut of social and planetary boundaries is a new framing of that challenge, and it acts as a compass for human progress this century."
Raworth states that "significant GDP growth is very much needed" for low- and middle-income countries to be able to meet the goals of the social foundation for their citizens.
Leaning on Earth studies and economics, Raworth maps out the current shortfalls and overshoots, as illustrated in Figure 2.
The Doughnut framework has been used to map localized socio-environmental performance in the Erhai lake catchment (China), Scotland, Wales, the UK, South Africa, the Netherlands, India, globally, and in many other places.
In April 2020, Kate Raworth was invited to join the City of Amsterdam's post-pandemic economic planning efforts.
An empirical application of the doughnut model showed in 2018 that so far across 150 countries not a single country satisfies its citizens' basic needs while maintaining a globally sustainable level of resource use.
Criticism
Branko Milanovic, at CUNY's Stone Center on Socio-Economic Inequality, said that for the doughnut theory to become popular, people would have to "magically" become "indifferent to how well we do compared to others, and not really care about wealth and income."
See also
Ecological economics
Critique of political economy
Prosperity Without Growth
The Closing Circle
References
Economics models
Ecological economics
Climate action
Climate action (or climate change action) refers to a range of activities, mechanisms, policy instruments, and so forth that aim at reducing the severity of human-induced climate change and its impacts. "More climate action" is a central demand of the climate movement. Climate inaction is the absence of climate action.
Examples of climate action
Some examples of climate action include:
Business action on climate change
Climate change adaptation
Climate change mitigation
Climate finance
Climate movement – actions by non-governmental organizations
Individual action on climate change
Politics of climate change
Obstacles to achieving climate action
Human behaviour
Barriers to pro-environmental behaviour
Climate change denial
Media coverage of climate change
Psychology of climate change denial
See also
Causes of climate change
Effects of climate change
Sustainable Development Goal 13 on climate action
References
External links
Climate Activism: Start Here (The Commons Library)
Climate change
Global environmental issues
Human impact on the environment
Environmental health
Environmental health is the branch of public health concerned with all aspects of the natural and built environment affecting human health. To effectively control factors that may affect health, the requirements that must be met to create a healthy environment must be determined. The major sub-disciplines of environmental health are environmental science, toxicology, environmental epidemiology, and environmental and occupational medicine.
Definitions
WHO definitions
Environmental health was defined in a 1989 document by the World Health Organization (WHO) as:
Those aspects of human health and disease that are determined by factors in the environment. It is also referred to as the theory and practice of assessing and controlling factors in the environment that can potentially affect health.
A 1990 WHO document states that environmental health, as used by the WHO Regional Office for Europe, "includes both the direct pathological effects of chemicals, radiation and some biological agents, and the effects (often indirect) on health and well being of the broad physical, psychological, social and cultural environment, which includes housing, urban development, land use and transport."
The WHO website on environmental health states that "Environmental health addresses all the physical, chemical, and biological factors external to a person, and all the related factors impacting behaviours. It encompasses the assessment and control of those environmental factors that can potentially affect health. It is targeted towards preventing disease and creating health-supportive environments. This definition excludes behaviour not related to environment, as well as behaviour related to the social and cultural environment, as well as genetics."
The WHO has also defined environmental health services as "those services which implement environmental health policies through monitoring and control activities. They also carry out that role by promoting the improvement of environmental parameters and by encouraging the use of environmentally friendly and healthy technologies and behaviors. They also have a leading role in developing and suggesting new policy areas."
Other considerations
The term environmental medicine may be seen as a medical specialty, or branch of the broader field of environmental health. Terminology is not fully established, and in many European countries they are used interchangeably.
Children's environmental health is the academic discipline that studies how environmental exposures in early life—chemical, biological, nutritional, and social—influence health and development in childhood and across the entire human life span.
Other terms referring to or concerning environmental health include environmental public health and health protection.
Disciplines
Five basic disciplines generally contribute to the field of environmental health: environmental epidemiology, toxicology, exposure science, environmental engineering, and environmental law. Each of these five disciplines contributes different information to describe problems and solutions in environmental health. However, there is some overlap among them.
Environmental epidemiology studies the relationship between environmental exposures (including exposure to chemicals, radiation, microbiological agents, etc.) and human health. Observational studies, which simply observe exposures that people have already experienced, are common in environmental epidemiology because humans cannot ethically be exposed to agents that are known or suspected to cause disease. While the inability to use experimental study designs is a limitation of environmental epidemiology, this discipline directly observes effects on human health rather than estimating effects from animal studies. By examining specific populations or communities exposed to different ambient environments, environmental epidemiology aims to clarify the relationships that exist between physical, biological or chemical factors and human health.
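One standard quantity such studies produce is the population attributable fraction (PAF), an estimate of the share of disease burden that would disappear if an exposure were removed. A minimal sketch with hypothetical inputs:

```python
# Population attributable fraction: PAF = p*(RR - 1) / (p*(RR - 1) + 1),
# where p is the exposure prevalence and RR the relative risk in the exposed.
def paf(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# Hypothetical example: 40% of a population exposed to polluted air,
# with a relative risk of 1.5 for a respiratory disease.
print(f"~{paf(0.40, 1.5):.1%} of cases attributable")  # ~16.7%
```

Burden estimates like the attributable deaths and DALYs cited later in this article are built from this kind of calculation, applied exposure by exposure.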
Toxicology studies how environmental exposures lead to specific health outcomes, generally in animals, as a means to understand possible health outcomes in humans. Toxicology has the advantage of being able to conduct randomized controlled trials and other experimental studies because they can use animal subjects. However, there are many differences in animal and human biology, and there can be a lot of uncertainty when interpreting the results of animal studies for their implications for human health.
Exposure science studies human exposure to environmental contaminants by both identifying and quantifying exposures. Exposure science can be used to support environmental epidemiology by better describing environmental exposures that may lead to a particular health outcome, identify common exposures whose health outcomes may be better understood through a toxicology study, or can be used in a risk assessment to determine whether current levels of exposure might exceed recommended levels. Exposure science has the advantage of being able to very accurately quantify exposures to specific chemicals, but it does not generate any information about health outcomes like environmental epidemiology or toxicology.
Environmental engineering applies scientific and engineering principles for protection of human populations from the effects of adverse environmental factors; protection of environments from potentially deleterious effects of natural and human activities; and general improvement of environmental quality.
Environmental law includes the network of treaties, statutes, regulations, common and customary laws addressing the effects of human activity on the natural environment.
Information from epidemiology, toxicology, and exposure science can be combined to conduct a risk assessment for specific chemicals, mixtures of chemicals or other risk factors to determine whether an exposure poses significant risk to human health (exposure would likely result in the development of pollution-related diseases). This can in turn be used to develop and implement environmental health policy that, for example, regulates chemical emissions, or imposes standards for proper sanitation. Actions of engineering and law can be combined to provide risk management to minimize, monitor, and otherwise manage the impact of exposure to protect human health to achieve the objectives of environmental health policy.
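For chemical exposures, a common screening step in such risk assessments is the hazard quotient: the estimated daily dose divided by a reference dose, with values above 1 flagging a potential non-cancer risk. The exposure parameters below are hypothetical:

```python
# Hazard quotient screening: HQ = average daily dose / reference dose.
water_conc_mg_per_l = 0.010   # hypothetical contaminant level in drinking water
intake_l_per_day = 2.0        # assumed daily water intake
body_weight_kg = 70.0         # assumed adult body weight
reference_dose = 0.0003       # hypothetical RfD, mg per kg body weight per day

daily_dose = water_conc_mg_per_l * intake_l_per_day / body_weight_kg
hq = daily_dose / reference_dose
print(f"dose {daily_dose:.6f} mg/kg/day -> HQ = {hq:.2f}")  # HQ ~ 0.95, below 1
```

Regulators typically pair such screening with safety factors and, for carcinogens, separate slope-factor calculations.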
Concerns
Environmental health addresses all human-health-related aspects of the natural environment and the built environment. Environmental health concerns include:
Biosafety.
Disaster preparedness and response.
Food safety, including in agriculture, transportation, food processing, wholesale and retail distribution and sale.
Housing, including substandard housing abatement and the inspection of jails and prisons.
Childhood lead poisoning prevention.
Land use planning, including smart growth.
Liquid waste disposal, including city waste water treatment plants and on-site waste water disposal systems, such as septic tank systems and chemical toilets.
Medical waste management and disposal.
Occupational health and industrial hygiene.
Radiological health, including exposure to ionizing radiation from X-rays or radioactive isotopes.
Recreational water illness prevention, including from swimming pools, spas and ocean and freshwater bathing places.
Solid waste management, including landfills, recycling facilities, composting and solid waste transfer stations.
Toxic chemical exposure whether in consumer products, housing, workplaces, air, water or soil.
Toxins from molds and algal blooms.
Vector control, including the control of mosquitoes, rodents, flies, cockroaches and other animals that may transmit pathogens.
According to recent estimates, about 5 to 10% of disability-adjusted life years (DALYs) lost are due to environmental causes in Europe. By far the most important factor is fine particulate matter pollution in urban air. Similarly, environmental exposures have been estimated to contribute to 4.9 million (8.7%) deaths and 86 million (5.7%) DALYs globally. In the United States, Superfund sites created by various companies have been found to be hazardous to human and environmental health in nearby communities. It was this perceived threat, raising the specter of miscarriages, mutations, birth defects, and cancers, that most frightened the public.
Air quality
Air quality includes ambient outdoor air quality and indoor air quality. Major air quality concerns include environmental tobacco smoke and air pollution from various forms of chemical waste.
Outdoor air quality
Air pollution is globally responsible for over 6.5 million deaths each year. Air pollution is the contamination of an atmosphere due to the presence of substances that are harmful to the health of living organisms, the environment or climate. These substances concern environmental health officials, since air pollution is often a risk factor for diseases related to pollution, like lung cancer, respiratory infections, asthma, heart disease, and other forms of respiratory-related illness. Reducing air pollution, and thus improving air quality, has been found to decrease adult mortality.
Common sources of emissions include road traffic, energy production, household combustion, aviation, and motor vehicles. These sources burn fuel, which can release harmful particles into the air that humans and other living organisms can inhale or ingest.
Air pollution is associated with adverse health effects like respiratory and cardiovascular diseases, cancer, related illnesses, and even death. The risk of air pollution is determined by the pollutant's hazard and the amount of exposure that affects a person. For example, a child who plays outdoor sports will have a higher likelihood of outdoor air pollution exposure than an adult who tends to spend more time indoors, whether at work or elsewhere. Environmental health officials work to detect individuals who are at higher risks of consuming air pollution, work to decrease their exposure, and detect risk factors present in communities.
However, research by Ernesto Sánchez-Triana on Pakistan shows how such risks can be addressed. The main sources of air pollution were first identified: mobile sources, such as heavy-duty vehicles and motorized two- and three-wheelers; stationary sources, such as power plants and the burning of waste; and natural dust. The country then implemented a clean air policy targeting the road transport sector, which is responsible for 85% of total emissions of particulate matter smaller than 2.5 microns (PM2.5) and 72% of emissions of particulate matter smaller than 10 microns (PM10). The most successful policies were:
Improving fuel quality by reducing the sulfur content in diesel
Converting diesel minibuses and city delivery vans to compressed natural gas (CNG)
Installing diesel oxidation catalysts (DOCs) on existing large buses and trucks
Converting existing two-stroke rickshaws to four-stroke CNG engines
Introducing low-sulfur fuel oil (1% sulfur) to major users located in Karachi
Indoor air quality
Household air pollution contributes to diseases that kill almost 4.3 million people every year. Indoor air pollution is a risk factor for diseases like heart disease, pulmonary disease, stroke, pneumonia, and other associated illnesses. For vulnerable populations, such as children and the elderly, who spend large amounts of their time indoors, poor indoor air quality can be dangerous.
Burning fuels like coal or kerosene inside homes can release dangerous chemicals into the air. Dampness and mold in houses can cause diseases, but few studies have been performed on mold in schools and workplaces. Environmental tobacco smoke is considered a leading contributor to indoor air pollution, since exposure to second- and third-hand smoke is a common risk factor. Tobacco smoke contains over 60 carcinogens, of which 18% are known human carcinogens. Exposure to these chemicals can exacerbate asthma, lead to the development of cardiovascular and cardiopulmonary diseases, and increase the likelihood of cancer.
Climate change and its effects on health
Climate change makes extreme weather events more likely, including ozone smog events, dust storms, and elevated aerosol levels, all due to extreme heat, drought, winds, and rainfall. These extreme weather events can increase the likelihood of undernutrition, mortality, food insecurity, and climate-sensitive infectious diseases in vulnerable populations. The effects of climate change are felt by the whole world, but disproportionately affect disadvantaged populations who are subject to climate change vulnerability.
Climate impacts can affect exposure to water-borne pathogens through increased rates of runoff, frequent heavy rains, and the effects of severe storms. Extreme weather events and storm surges can also exceed the capacity of water infrastructure, increasing the likelihood that populations will be exposed to these contaminants. Exposure is more likely in low-income communities, which often have inadequate infrastructure to respond to climate disasters and recover more slowly from infrastructure damage.
After a climate disaster occurs, people often face the loss of homes, loved ones, and previous ways of life. These events can lead to vulnerability in the form of housing affordability stress, lower household income, lack of community attachment, grief, and anxiety about another disaster occurring.
Environmental racism
Certain groups of people can be put at a higher risk for environmental hazards like air, soil and water pollution. This often happens due to marginalization, economic and political processes, and racism. Environmental racism uniquely affects different groups globally, but in general the most marginalized groups of any region are affected. These marginalized groups are frequently placed next to pollution sources like major roadways, toxic waste sites, landfills, and chemical plants. A 2021 study found that racial and ethnic minority groups in the United States are exposed to disproportionately high levels of particulate air pollution. Racial housing policies in the United States continue to exacerbate racial minorities' exposure to air pollution at a disproportionate rate, even as overall pollution levels have declined. Likewise, a 2022 study showed that policy changes favoring wealth redistribution could double as climate change mitigation measures, meaning more money would flow into under-resourced communities while climate effects are mitigated.
Noise pollution
Noise pollution is usually environmental, machine-created sound that can disrupt activities or communication between humans and other forms of life. Exposure to persistent noise pollution can cause numerous ailments, including hearing impairment, sleep disturbances, cardiovascular problems, annoyance, problems with communication, and other conditions. American minorities living in neighborhoods of low socioeconomic status often experience higher levels of noise pollution than their higher-socioeconomic-status counterparts.
Noise pollution can cause or exacerbate cardiovascular diseases, which can in turn contribute to a larger range of diseases, increase stress levels, and cause sleep disturbances. Noise pollution is also responsible for many reported cases of hearing loss, tinnitus, and other forms of hypersensitivity or desensitization to sound, whether perceived consciously or developed subconsciously through continuous exposure. These conditions can be dangerous to children and young adults who consistently experience noise pollution, as many of them can develop into long-term physical and mental health problems.
According to a Barcelona study, children who attend school in noisy traffic zones have been shown to have 15% lower memory development than students who attend schools in quiet traffic zones. This is consistent with research suggesting that children who are exposed to regular aircraft noise "have inadequate performance on standardised achievement tests."
Exposure to persistent noise pollution can lead to hearing impairments, such as tinnitus or impaired speech discrimination. One of the largest factors in worsened mental health due to noise pollution is annoyance. Annoyance from environmental factors has been found to increase stress reactions and overall feelings of stress among adults. The level of annoyance felt by an individual varies, but it contributes significantly to worsened mental health.
Noise exposure also contributes to sleep disturbances, which can cause daytime sleepiness and an overall lack of sleep that worsens health. Daytime sleepiness has been linked to reports of declining mental health and other health issues, job insecurity, and the deterioration of further social and environmental factors.
Safe drinking water
Access to safe drinking water is considered a "basic human need for health and well-being" by the United Nations. According to their reports, over 2 billion people worldwide live without access to safe drinking water. In 2017, almost 22 million Americans drank from water systems that were in violation of public health standards. Globally, over 2 billion people drink feces-contaminated water, which poses the greatest threat to drinking water safety. Contaminated drinking water could transmit diseases like cholera, dysentery, typhoid, diarrhea and polio.
Harmful chemicals in drinking water can negatively affect health. Unsafe water management practices can increase the prevalence of water-borne diseases and sanitation-related illnesses. Inadequate disinfection of wastewater in industrial and agricultural centers can also expose hundreds of millions of people to contaminated water. Chemicals like fluoride and arsenic can benefit humans when their levels are controlled, but other, more dangerous chemicals like lead and other heavy metals can be harmful to humans.
In the United States, communities of color can be subject to poor-quality water. In American communities with large Hispanic and Black populations, there is a correlated rise in violations of Safe Drinking Water Act (SDWA) health standards. Populations that have experienced a lack of safe drinking water, like those in Flint, Michigan, are more likely to distrust tap water in their communities; the populations that experience this are commonly low-income communities of color.
Hazardous materials management
Hazardous materials management includes hazardous waste management, contaminated site remediation, the prevention of leaks from underground storage tanks, the prevention of hazardous materials releases to the environment, and responses to emergencies resulting from such releases. When hazardous materials are not managed properly, waste can pollute nearby water sources and reduce air quality.
According to a study done in Austria, people who live near industrial sites are "more often unemployed, have lower education levels, and are twice as likely to be immigrants." With the interest of environmental health in mind, the Resource Conservation and Recovery Act was passed in the United States in 1976 to cover the proper management of hazardous waste.
A variety of occupations involve working with hazardous materials and helping to manage them so that everything is disposed of correctly. These professionals work in various sectors, including government agencies, private industry, consulting firms, and non-profit organizations, all with the common goal of ensuring the safe handling of hazardous materials and waste. These positions include, but are not limited to, environmental health and safety specialists, waste collectors, medical professionals, and emergency responders. Handling waste, especially hazardous materials, is considered one of the most dangerous occupations in the world. These workers often lack complete information about the specific hazardous materials they encounter, making their jobs even more dangerous: sudden exposure to materials they are not properly prepared to handle can lead to severe consequences. This emphasizes the importance of training, safety protocols, and the use of personal protective equipment for those working with hazardous waste.
Microplastic pollution
Soil pollution
Information and mapping
The Toxicology and Environmental Health Information Program (TEHIP) is a comprehensive toxicology and environmental health website that includes open access to resources produced by US government agencies and organizations, maintained under the umbrella of the Specialized Information Services division of the United States National Library of Medicine. TEHIP includes links to technical databases, bibliographies, tutorials, and consumer-oriented resources. TEHIP was responsible for the Toxicology Data Network (TOXNET), an integrated system of open-access (free of charge) toxicology and environmental health databases, including the Hazardous Substances Data Bank. TOXNET was retired in 2019.
There are many environmental health mapping tools. TOXMAP is a geographic information system (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP is a resource funded by the US federal government. TOXMAP's chemical and environmental health information is taken from the NLM's Toxicology Data Network (TOXNET) and PubMed, and from other authoritative sources.
Environmental health profession
Environmental health professionals may be known as environmental health officers, public health inspectors, environmental health specialists or environmental health practitioners. Researchers and policy-makers also play important roles in how environmental health is practiced in the field. In many European countries, physicians and veterinarians are involved in environmental health. In the United Kingdom, practitioners must have a graduate degree in environmental health and be certified and registered with the Chartered Institute of Environmental Health or the Royal Environmental Health Institute of Scotland. In Canada, practitioners in environmental health are required to obtain an approved bachelor's degree in environmental health along with the national professional certificate, the Certificate in Public Health Inspection (Canada), CPHI(C). Many states in the United States also require that individuals have a bachelor's degree and professional licenses to practice environmental health. California state law defines the scope of practice of environmental health as follows:
"Scope of practice in environmental health" means the practice of environmental health by registered environmental health specialists in the public and private sector within the meaning of this article and includes, but is not limited to, organization, management, education, enforcement, consultation, and emergency response for the purpose of prevention of environmental health hazards and the promotion and protection of the public health and the environment in the following areas: food protection; housing; institutional environmental health; land use; community noise control; recreational swimming areas and waters; electromagnetic radiation control; solid, liquid, and hazardous materials management; underground storage tank control; onsite septic systems; vector control; drinking water quality; water sanitation; emergency preparedness; and milk and dairy sanitation pursuant to Section 33113 of the Food and Agricultural Code.
The environmental health profession had its modern-day roots in the sanitary and public health movement of the United Kingdom. This was epitomized by Sir Edwin Chadwick, who was instrumental in the repeal of the poor laws, and in 1884 was the founding president of the Association of Public Sanitary Inspectors, now called the Chartered Institute of Environmental Health.
See also
EcoHealth
Environmental disease
Environmental medicine
Environmental toxicology
Epigenetics
Exposure science
Healing environments
Health effects from noise
Heavy metals
Indoor air quality
Industrial and organizational psychology
NIEHS
Nightingale's environmental theory
One Health
Pollution
Volatile organic compound
Journals:
List of environmental health journals
References
Further reading
External links
NIEHS
Environmental social science
Environmental science
Bioprospecting
Bioprospecting (also known as biodiversity prospecting) is the exploration of natural sources for small molecules, macromolecules and biochemical and genetic information that could be developed into commercially valuable products for the agricultural, aquaculture, bioremediation, cosmetics, nanotechnology, or pharmaceutical industries. In the pharmaceutical industry, for example, almost one third of all small-molecule drugs approved by the U.S. Food and Drug Administration (FDA) between 1981 and 2014 were either natural products or compounds derived from natural products.
Terrestrial plants, fungi and actinobacteria have been the focus of many past bioprospecting programs, but interest is growing in less explored ecosystems (e.g. seas and oceans) and organisms (e.g. myxobacteria, archaea) as a means of identifying new compounds with novel biological activities. Species may be randomly screened for bioactivity or rationally selected and screened based on ecological, ethnobiological, ethnomedical, historical or genomic information.
When a region's biological resources or indigenous knowledge are unethically appropriated or commercially exploited without providing fair compensation, this is known as biopiracy. Various international treaties have been negotiated to provide countries legal recourse in the event of biopiracy and to offer commercial actors legal certainty for investment. These include the UN Convention on Biological Diversity and the Nagoya Protocol. The WIPO is currently negotiating more treaties to bridge gaps in this field.
Other risks associated with bioprospecting are the overharvesting of individual species and environmental damage, but legislation has been developed to combat these also. Examples include national laws such as the US Marine Mammal Protection Act and US Endangered Species Act, and international treaties such as the UN Convention on Biological Diversity, the UN Convention on the Law of the Sea, the Biodiversity Beyond National Jurisdictions Treaty, and the Antarctic Treaty.
Bioprospecting-derived resources and products
Agriculture
Bioprospecting-derived resources and products used in agriculture include biofertilizers, biopesticides and veterinary antibiotics. Rhizobium is a genus of soil bacteria used as biofertilizers, Bacillus thuringiensis (also called Bt) and the annonins (obtained from seeds of the plant Annona squamosa) are examples of biopesticides, and valnemulin and tiamulin (discovered and developed from the basidiomycete fungi Omphalina mutila and Clitopilus passeckerianus) are examples of veterinary antibiotics.
Bioremediation
Examples of bioprospecting products used in bioremediation include Coriolopsis gallica- and Phanerochaete chrysosporium-derived laccase enzymes, used for treating beer factory wastewater and for dechlorinating and decolorizing paper mill effluent.
Cosmetics and personal care
Cosmetics and personal care products obtained from bioprospecting include Porphyridium cruentum-derived oligosaccharide and oligoelement blends used to treat erythema (rosacea, flushing and dark circles), Xanthobacter autotrophicus-derived zeaxanthin used for skin hydration and UV protection, Clostridium histolyticum-derived collagenases used for skin regeneration, and Microsporum-derived keratinases used for hair removal.
Nanotechnology and biosensors
Because microbial laccases have a broad substrate range, they can be used in biosensor technology to detect a wide range of organic compounds. For example, laccase-containing electrodes are used to detect polyphenolic compounds in wine, and lignins and phenols in wastewater.
Pharmaceuticals
Many of the antibacterial drugs in current clinical use were discovered through bioprospecting, including the aminoglycosides, tetracyclines, amphenicols, polymyxins, cephalosporins and other β-lactam antibiotics, macrolides, pleuromutilins, glycopeptides, rifamycins, lincosamides, streptogramins, and phosphonic acid antibiotics. The aminoglycoside antibiotic streptomycin, for example, was discovered from the soil bacterium Streptomyces griseus, the fusidane antibiotic fusidic acid was discovered from the soil fungus Acremonium fusidioides, and the pleuromutilin antibiotics (e.g. lefamulin) were discovered and developed from the basidiomycete fungi Omphalina mutila and Clitopilus passeckerianus.
Other examples of bioprospecting-derived anti-infective drugs include the antifungal drug griseofulvin (discovered from the soil fungus Penicillium griseofulvum), the antifungal and antileishmanial drug amphotericin B (discovered from the soil bacterium Streptomyces nodosus), the antimalarial drug artemisinin (discovered from the plant Artemisia annua), and the antihelminthic drug ivermectin (developed from the soil bacterium Streptomyces avermitilis).
Bioprospecting-derived pharmaceuticals have been developed for the treatment of non-communicable diseases and conditions too. These include the anticancer drug bleomycin (obtained from the soil bacterium Streptomyces verticillus), the immunosuppressant drug ciclosporin used to treat autoimmune diseases such as rheumatoid arthritis and psoriasis (obtained from the soil fungus Tolypocladium inflatum), the anti-inflammatory drug colchicine used to treat and prevent gout flares (obtained from the plant Colchicum autumnale), the analgesic drug ziconotide (developed from the cone snail Conus magus), and the acetylcholinesterase inhibitor galantamine used to treat Alzheimer's disease (obtained from plants in the Galanthus genus).
Bioprospecting as a discovery strategy
Bioprospecting has both strengths and weaknesses as a strategy for discovering new genes, molecules, and organisms suitable for development and commercialization.
Strengths
Bioprospecting-derived small molecules (also known as natural products) are more structurally complex than synthetic chemicals, and therefore show greater specificity towards biological targets. This is a major advantage in drug discovery and development, where off-target effects can cause adverse drug reactions.
Natural products are also more amenable to membrane transport than synthetic compounds. This is advantageous when developing antibacterial drugs, which may need to traverse both an outer membrane and plasma membrane to reach their target.
For some biotechnological innovations to work, it is important to have enzymes that function at unusually high or low temperatures. An example of this is the polymerase chain reaction (PCR), which is dependent on a DNA polymerase that can operate at 60°C and above. In other situations, for example dephosphorylation, it can be desirable to run the reaction at low temperature. Extremophile bioprospecting is an important source of such enzymes, yielding thermostable enzymes such as Taq polymerase (from Thermus aquaticus), and cold-adapted enzymes such as shrimp alkaline phosphatase (from Pandalus borealis).
With the Convention on Biological Diversity (CBD) now ratified by most countries, bioprospecting has the potential to bring biodiversity-rich and technologically advanced nations together, and benefit them both educationally and economically (e.g. information sharing, technology transfer, new product development, royalty payment).
For useful molecules identified through microbial bioprospecting, scale up of production is feasible at reasonable cost because the producing microorganism can be cultured in a bioreactor.
Weaknesses
Although some potentially very useful microorganisms are known to exist in nature (e.g. lignocellulose-metabolizing microbes), difficulties have been encountered cultivating these in a laboratory setting. This problem may be resolvable by genetically manipulating easier-to-culture organisms such as Escherichia coli or Streptomyces coelicolor to express the gene cluster responsible for the desired activity.
Isolating and identifying the compound(s) responsible for a biological extract's activity can be difficult. Also, subsequent elucidation of the mechanism of action of the isolated compound can be time-consuming. Technological advancements in liquid chromatography, mass spectrometry and other techniques are helping to overcome these challenges.
Implementing and enforcing bioprospecting-related treaties and legislation is not always easy. Drug development is an inherently expensive and time-consuming process with low success rates, and this makes it difficult to quantify the value of potential products when drafting bioprospecting agreements. Intellectual property rights may be difficult to award too. For example, legal rights to a medicinal plant may be disputable if it has been discovered by different people in different parts of the world at different times.
Whilst the structural complexity of natural products is generally advantageous in drug discovery, it can make the subsequent manufacture of drug candidates difficult. This problem is sometimes resolvable by identifying the part of the natural product structure responsible for activity and developing a simplified synthetic analogue. This was necessary with the natural product halichondrin B, whose simplified analogue eribulin is now approved and marketed as an anticancer drug.
Bioprospecting pitfalls
Errors and oversights can occur at different steps in the bioprospecting process including collection of source material, screening source material for bioactivity, testing isolated compounds for toxicity, and identification of mechanism of action.
Collection of source material
Prior to collecting biological material or traditional knowledge, the correct permissions must be obtained from the source country, land owner etc. Failure to do so can result in criminal proceedings and rejection of any subsequent patent applications. It is also important to collect biological material in adequate quantities, to have biological material formally identified, and to deposit a voucher specimen with a repository for long-term preservation and storage. This helps ensure any important discoveries are reproducible.
Bioactivity and toxicity testing
When testing extracts and isolated compounds for bioactivity and toxicity, the use of standard protocols (e.g. CLSI, ISO, NIH, EURL ECVAM, OECD) is desirable because this improves test result accuracy and reproducibility. Also, if the source material is likely to contain known (previously discovered) active compounds (e.g. streptomycin in the case of actinomycetes), then dereplication is necessary to exclude these extracts and compounds from the discovery pipeline as early as possible. In addition, it is important to consider solvent effects on the cells or cell lines being tested, to include reference compounds (i.e. pure chemical compounds for which accurate bioactivity and toxicity data are available), to set limits on cell line passage number (e.g. 10–20 passages), to include all the necessary positive and negative controls, and to be aware of assay limitations. These steps help ensure assay results are accurate, reproducible and interpreted correctly.
Identification of mechanism of action
When attempting to elucidate the mechanism of action of an extract or isolated compound, it is important to use multiple orthogonal assays. Using just a single assay, especially a single in vitro assay, gives a very incomplete picture of an extract or compound's effect on the human body. In the case of Valeriana officinalis root extract, for example, the sleep-inducing effects of this extract are due to multiple compounds and mechanisms including interaction with GABA receptors and relaxation of smooth muscle. The mechanism of action of an isolated compound can also be misidentified if a single assay is used because some compounds interfere with assays. For example, the sulfhydryl-scavenging assay used to detect histone acetyltransferase inhibition can give a false positive result if the test compound reacts covalently with cysteines.
Biopiracy
The term biopiracy was coined by Pat Mooney to describe a practice in which indigenous knowledge of nature, originating with indigenous peoples, is used by others for profit, without authorization or compensation to the indigenous people themselves. For example, when bioprospectors draw on indigenous knowledge of medicinal plants that is later patented by medical companies, without recognizing that the knowledge is not new or invented by the patenter, this deprives the indigenous community of their potential rights to the commercial product derived from the technology that they themselves had developed. Critics of this practice, such as Greenpeace, claim these practices contribute to inequality between developing countries rich in biodiversity and developed countries hosting the biotech firms.
In the 1990s many large pharmaceutical and drug discovery companies responded to charges of biopiracy by ceasing work on natural products, turning to combinatorial chemistry to develop novel compounds.
Famous cases of biopiracy
The rosy periwinkle
The rosy periwinkle case dates from the 1950s. The rosy periwinkle, while native to Madagascar, had been widely introduced into other tropical countries around the world well before the discovery of vincristine. Different countries are reported as having acquired different beliefs about the medical properties of the plant. This meant that researchers could obtain local knowledge from one country and plant samples from another. The use of the plant for diabetes was the original stimulus for research; effectiveness in the treatment of both Hodgkin lymphoma and leukemia was discovered instead. The Hodgkin lymphoma chemotherapeutic drug vinblastine is derivable from the rosy periwinkle.
The Maya ICBG controversy
The Maya ICBG bioprospecting controversy took place in 1999–2000, when the International Cooperative Biodiversity Group led by ethnobiologist Brent Berlin was accused of being engaged in unethical forms of bioprospecting by several NGOs and indigenous organizations. The ICBG aimed to document the biodiversity of Chiapas, Mexico, and the ethnobotanical knowledge of the indigenous Maya people – in order to ascertain whether there were possibilities of developing medical products based on any of the plants used by the indigenous groups.
The Maya ICBG case was among the first to draw attention to the problems of distinguishing between benign forms of bioprospecting and unethical biopiracy, and to the difficulties of securing community participation and prior informed consent for would-be bioprospectors.
The neem tree
In 1994, the U.S. Department of Agriculture and W. R. Grace and Company received a European patent on methods of controlling fungal infections in plants using a composition that included extracts from the neem tree (Azadirachta indica), which grows throughout India and Nepal. In 2000 the patent was successfully opposed by several groups from the EU and India including the EU Green Party, Vandana Shiva, and the International Federation of Organic Agriculture Movements (IFOAM) on the basis that the fungicidal activity of neem extract had long been known in Indian traditional medicine. WR Grace appealed and lost in 2005.
Basmati rice
In 1997, the US corporation RiceTec (a subsidiary of RiceTec AG of Liechtenstein) attempted to patent certain hybrids of basmati rice and semidwarf long-grain rice. The Indian government challenged this patent and, in 2002, fifteen of the patent's twenty claims were invalidated.
The Enola bean
The Enola bean is a variety of Mexican yellow bean, so called after the wife of the man who patented it in 1999. The allegedly distinguishing feature of the variety is seeds of a specific shade of yellow. The patent-holder subsequently sued a large number of importers of Mexican yellow beans with the following result: "...export sales immediately dropped over 90% among importers that had been selling these beans for years, causing economic damage to more than 22,000 farmers in northern Mexico who depended on sales of this bean." A lawsuit was filed on behalf of the farmers and, in 2005, the USPTO ruled in favor of the farmers. In 2008, the patent was revoked.
Hoodia gordonii
Hoodia gordonii, a succulent plant, originates from the Kalahari Desert of South Africa. For generations it has been known to the traditionally living San people as an appetite suppressant. In 1996 South Africa's Council for Scientific and Industrial Research began working with companies, including Unilever, to develop dietary supplements based on Hoodia. Originally the San people were not scheduled to receive any benefits from the commercialization of their traditional knowledge, but in 2003 the South African San Council made an agreement with CSIR in which they would receive from 6 to 8% of the revenue from the sale of Hoodia products.
In 2008 after having invested €20 million in R&D on Hoodia as a potential ingredient in dietary supplements for weight loss, Unilever terminated the project because their clinical studies did not show that Hoodia was safe and effective enough to bring to market.
Further cases
The following is a selection of further recent cases of biopiracy. Most of them do not relate to traditional medicines.
Thirty-six cases of biopiracy in Africa.
The case of the Maya people's pozol drink.
The case of the Maya and other people's use of Mimosa tenuiflora and many other cases.
The case of the Andean maca radish.
The cases of turmeric (India), karela (India), quinoa (Bolivia), oubli berries (Gabon), and others.
The case of captopril (developed from a Brazilian tribe's arrowhead poison).
Legal and political aspects
Patent law
One common misunderstanding is that pharmaceutical companies patent the plants they collect. While obtaining a patent on a naturally occurring organism as previously known or used is not possible, patents may be taken out on specific chemicals isolated or developed from plants. Often these patents are obtained with a stated and researched use of those chemicals. Generally the existence, structure and synthesis of those compounds is not a part of the indigenous medical knowledge that led researchers to analyze the plant in the first place. As a result, even if the indigenous medical knowledge is taken as prior art, that knowledge does not by itself make the active chemical compound "obvious," which is the standard applied under patent law.
In the United States, patent law can be used to protect "isolated and purified" compounds – even, in one instance, a new chemical element (see USP 3,156,523). In 1873, Louis Pasteur patented a "yeast" which was "free from disease" (patent #141072). Patents covering biological inventions have been treated similarly. In the 1980 case of Diamond v. Chakrabarty, the Supreme Court upheld a patent on a bacterium that had been genetically modified to consume petroleum, reasoning that U.S. law permits patents on "anything under the sun that is made by man." The United States Patent and Trademark Office (USPTO) has observed that "a patent on a gene covers the isolated and purified gene but does not cover the gene as it occurs in nature".
Also possible under US law is patenting a cultivar, a new variety of an existing organism. The patent on the Enola bean (now revoked) was an example of this sort of patent. The intellectual property laws of the US also recognize plant breeders' rights under the Plant Variety Protection Act, 7 U.S.C. §§ 2321–2582.
Convention on Biological Diversity
The Convention on Biological Diversity (CBD) came into force in 1993. It secured rights to control access to genetic resources for the countries in which those resources are located. One objective of the CBD is to enable lesser-developed countries to better benefit from their resources and traditional knowledge. Under the rules of the CBD, bioprospectors are required to obtain informed consent to access such resources, and must share any benefits with the biodiversity-rich country. However, some critics believe that the CBD has failed to establish appropriate regulations to prevent biopiracy. Others claim that the main problem is the failure of national governments to pass appropriate laws implementing the provisions of the CBD. The Nagoya Protocol to the CBD, which came into force in 2014, provides further regulations. The CBD has been ratified, acceded or accepted by 196 countries and jurisdictions globally, with exceptions including the Holy See and United States.
Bioprospecting contracts
The requirements for bioprospecting set by the CBD have created a new branch of international patent and trade law: bioprospecting contracts. Bioprospecting contracts lay down the rules of benefit sharing between researchers and countries, and can bring royalties to lesser-developed countries. However, although these contracts are based on prior informed consent and compensation (unlike biopiracy), not every owner or carrier of indigenous knowledge and resources is consulted or compensated, as it would be difficult to ensure that every individual is included. Because of this, some have proposed that indigenous or other communities form a type of representative micro-government that would negotiate with researchers to form contracts in such a way that the community benefits from the arrangements. Unethical bioprospecting contracts (as distinct from ethical ones) can be viewed as a new form of biopiracy.
An example of a bioprospecting contract is the agreement between Merck and INBio of Costa Rica.
Traditional knowledge database
Due to previous cases of biopiracy, and to prevent further cases, the Government of India has converted traditional Indian medicinal information from ancient manuscripts and other resources into an electronic resource; this resulted in the Traditional Knowledge Digital Library (TKDL) in 2001. The texts are recorded from Tamil, Sanskrit, Urdu, Persian and Arabic, and made available to patent offices in English, German, French, Japanese and Spanish. The aim is to protect India's heritage from being exploited by foreign companies. Hundreds of yoga poses are also kept in the collection. The library has signed agreements with leading international patent offices, such as the European Patent Office (EPO), the United Kingdom Trademark & Patent Office (UKTPO), and the United States Patent and Trademark Office, to protect traditional knowledge from biopiracy by allowing patent examiners at international patent offices to access TKDL databases for patent search and examination purposes.
See also
Intellectual capital/Intellectual property
Natural capital
Biological patent
Traditional knowledge/Indigenous knowledge
Pharmacognosy
Plant breeders' rights
Bioethics
Maya ICBG bioprospecting controversy
International Cooperative Biodiversity Group
Biological Diversity Act, 2002
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) (1994)
International Treaty on Plant Genetic Resources for Food and Agriculture (2001)
References
Bibliography and resources
The Secretariat of the Convention on Biological Diversity (United Nations Environment Programme) maintains an information centre which as of April 2006 lists some 3000 "monographs, reports and serials".
Secretariat of the Convention on Biological Diversity (United Nations Environment Programme), Bibliography of Journal Articles on the Convention on Biological Diversity (March 2006). Contains references to almost 200 articles. Some of these are available in full text from the CBD information centre.
External links
Out of Africa: Mysteries of Access and Benefit-Sharing – a 2006 report on biopiracy in Africa by The Edmonds Institute
Cape Town Declaration – Biowatch South Africa
Genetic Resources Action International (GRAIN)
Indian scientist denies accusation of biopiracy – SciDev.Net
African 'biopiracy' debate heats up – SciDev.Net
Bioprospecting: legitimate research or 'biopiracy'? – SciDev.Net
ETC Group papers on Biopiracy : Topics include: Monsanto's species-wide patent on all genetically modified soybeans (EP0301749); Synthetic Biology Patents (artificial, unique life forms); Terminator Seed Technology; etc...
Who Owns Biodiversity, and How Should the Owners Be Compensated?, Plant Physiology, April 2004, Vol. 134, pp. 1295–1307
Bioethics
Biopiracy
Botany
Plant genetics
Plant breeding
Biodiversity
Food security
Plant conservation
Seeds
Sustainable agriculture
Commercialization of traditional medicines
Ecotourism
Ecotourism is a form of nature-oriented tourism intended to contribute to the conservation of the natural environment, generally defined as being minimally impactful, and including providing both contributions to conservation and environmental education. The definition sometimes also includes being financially beneficial to the host community or making conservation financially possible. There are a range of different definitions, and the correct definition of the term was an active subject of debate as of 2009. The term is also used more widely by many organizations offering nature tourism, which do not focus on being beneficial to the environment.
Since the 1980s, ecotourism has been considered an important endeavor by environmentalists for conservation reasons. Organizations focusing on ecotourism often make direct or indirect contributions to conservation or employ practices or technology that reduce impacts on the environment. However (according to Buckley), very few organizations make a net-positive impact on the environment overall. Ecotourism has also been criticized for often using the same infrastructure and practices of regular tourism under a different name. Like most long-distance travel, ecotourism often depends on air transportation, which contributes to climate change.
Generally, ecotourism deals with interaction with living parts of natural environments, in contrast to geotourism, which is associated with geology. In contrast to nature tourism and sustainable tourism in general, ecotourism is also usually intended to foster a greater appreciation in tourists of natural habitats and the threats they face, as well as of local culture. Responsible ecotourism programs include those that minimize the negative aspects of conventional tourism on the environment and enhance the cultural integrity of local people. Therefore, in addition to evaluating environmental and cultural factors, an integral part of ecotourism is the promotion of recycling, energy efficiency, water conservation, and the creation of economic opportunities for local communities.
Risks and benefits
Ecotourism is a sub-component of the field of sustainable tourism. Ecotourism must serve to maximize ecological benefits while contributing to the economic, social, and cultural wellbeing of communities living close to ecotourism venues.
Even while ecotourism is often presented as a responsible form of tourism, it nonetheless carries several risks. Potential ecological, economic, and sociocultural benefits associated with ecotourism are described below.
Ecological risk
Ecotourism activities, or merely the presence of travelers in a particular region or location, may negatively impact the ecological integrity of protected areas.
Risks to local communities
Local communities may be negatively impacted by ecotourism. For example, as is the case with other forms of tourism, ecotourism may result in friction between tourists and local community members, and may potentially increase the cost of rent, rates, and property values, thereby marginalizing local community members.
Health risks
Ecotourism carries known health risks for tourists and local community members, along with wildlife and ecosystems. Travelers may bring pathogens to ecologically sensitive areas, putting wildlife as well as local communities at risk; ecotourism activities may also place travelers at risk of health problems or injuries.
Potential ecological benefits
Ecotourism may also have positive ecological consequences, and some of them are listed as follows:
Direct benefits
Incentive to protect natural environments
Incentive to rehabilitate modified environments and lands
Provides funds to manage and expand protected areas
Ecotourists assist with habitat maintenance and enhancement through their actions
Ecotourists serving as watchdogs or guardians who personally intervene in situations where the environment is perceived to be threatened
The locals may also learn new skills from the ecotourists
Indirect benefits
Exposure to ecotourism fosters a broader sense of environmentalism
Communities experience changes in environmental attitude and behavior
Areas protected for ecotourism provide environmental benefits
Improves the future well-being of the locals
Potential economic benefits
For some decision-makers, economic factors are more compelling than ecological factors in deciding how natural resources should be used. Potential ecotourism economic benefits are presented below:
Direct benefits
Generates revenue (related to visitor expenditures) and creates employment that is directly related to the sector
Provides economic opportunities for peripheral regions
Indirect benefits
High multiplier effect, generating indirect revenue and employment
Supports cultural and heritage tourism, sectors that are highly compatible with ecotourism.
Potential socio-cultural benefits
A holistic approach to ecotourism must promote socio-cultural as well as economic and ecological practices. The direct and indirect socio-cultural benefits are outlined as follows:
Direct and indirect benefits
Foster community stability and well-being through economic benefits and local participation
Aesthetic and spiritual benefits and enjoyment for locals and tourists
Accessible to a broad spectrum of the population
When assessing the potential positive impacts of ecotourism, it is necessary to mention that ecotourism can have unintended negative effects as well. Negative impacts can be mitigated through regulations and codes of conduct that effectively and persuasively impart messages about appropriate visitor behavior.
Terminology and history
Ecotourism is a late 20th-century neologism compounded eco- and tourism. According to the Oxford English Dictionary, ecotour was first recorded in 1973 and ecotourism, "probably after ecotour", in 1982.
ecotour, n. ... A tour of or visit to an area of ecological interest, usually with an educational element; (in later use also) a similar tour or visit designed to have as little detrimental effect on the ecology as possible or undertaken with the specific aim of helping conservation efforts.
ecotourism, n. ... Tourism to areas of ecological interest (typically exotic and often threatened natural environments), esp. to support conservation efforts and observe wildlife; spec. access to an endangered environment controlled to have the least possible adverse effect.
Some sources suggest the terms were used nearly a decade earlier. Claus-Dieter (Nick) Hetzer, an academic and adventurer from Forum International in Berkeley, CA, coined ecotourism in 1965, according to the Contra Costa Times, and ran the first ecotours in the Yucatán during the early 1970s.
The definition of ecotourism adopted by Ecotourism Australia is: "Ecotourism is ecologically sustainable tourism with a primary focus on experiencing natural areas that foster environmental and cultural understanding, appreciation and conservation."
The Global Ecotourism Network (GEN) defines ecotourism as "responsible travel to natural areas that conserves the environment, sustains the well-being of the local people, and creates knowledge and understanding through interpretation and education of all involved (visitors, staff, and the visited)".
Ecotourism is often misinterpreted as any form of tourism that involves nature (see jungle tourism). Self-proclaimed practitioners and hosts of ecotourism experiences assume it is achieved by simply creating destinations in natural areas. According to critics of this commonplace and assumptive practice, true ecotourism must, above all, sensitize people to the beauty and fragility of nature. These critics condemn some operators as greenwashing their operations: using the labels of "green" and "eco-friendly", while behaving in environmentally irresponsible ways.
Although academics disagree about who can be classified as an ecotourist and there is little statistical data, some estimate that more than five million ecotourists—the majority of the ecotourist population—come from the United States, with many others from Western Europe, Canada, and Australia.
Currently, there are various moves to create national and international ecotourism certification programs. National ecotourism certification programs have been put in place in countries such as Costa Rica, Australia, Kenya, Estonia, and Sweden.
Related terms
Sustainable tourism
Improving sustainability
Principles
Ecotourism in both terrestrial and marine ecosystems can benefit conservation, provided the complexities of history, culture, and ecology in the affected regions are successfully navigated. Catherine Macdonald and colleagues identify the factors that determine conservation outcomes, namely whether: animals and their habits are sufficiently protected; conflict between people and wildlife is avoided or at least suitably mitigated; there is good outreach and education of the local population into the benefits of ecotourism; there is effective collaboration with stakeholders in the area; and there is proper use of the money generated by ecotourism to conserve the local ecology. They conclude that ecotourism works best to conserve predators when the tourism industry is supported both politically and by the public, and when it is monitored and controlled at local, national, and international levels.
Regulation and accreditation
Because the regulations of ecotourism may be poorly implemented, ecologically destructive greenwashed operations like underwater hotels and helicopter tours can be categorized as ecotourism along with canoeing, camping, photography, and wildlife observation. The failure to acknowledge responsible, low-impact ecotourism puts legitimate ecotourism companies at a competitive disadvantage.
Management strategies to mitigate destructive operations include but are not limited to establishing a carrying capacity, site hardening, sustainable design, visitation quotas, fees, access restrictions, and visitor education.
Many environmentalists have argued for a global standard that can be used for certification, differentiating ecotourism companies based on their level of environmental commitment, creating a standard to follow. A national or international regulatory board would enforce accreditation procedures, with representation from various groups including governments, hotels, tour operators, travel agents, guides, airlines, local authorities, conservation organizations, and non-governmental organizations. The decisions of the board would be sanctioned by governments so that non-compliant companies would be legally required to disassociate themselves from the use of the ecotourism brand.
In 1998, Crinion suggested a Green Stars System, based on criteria including a management plan, benefits for the local community, small group interaction, education value, and staff training. Ecotourists who consider their choices would be confident of a genuine ecotourism experience when they see the higher star rating.
In 2008 the Global Sustainable Tourism Council Criteria was launched at the IUCN World Conservation Congress. The Criteria, managed by the Global Sustainable Tourism Council, created a global standard for sustainable travel and tourism and includes criteria and performance indicators for destinations, tour operators and hotels. The GSTC provides accreditation through a third party to Certification Bodies to legitimize claims of sustainability.
Environmental impact assessments could also be used as a form of accreditation. Feasibility is evaluated on a scientific basis, and recommendations could be made to optimally plan infrastructure, set tourist capacity, and manage the ecology. This form of accreditation is more sensitive to site-specific conditions.
Some countries have their own certification programs for ecotourism. Costa Rica, for example, runs the GSTC-Recognized Certification of Sustainable Tourism (CST) program, which is intended to balance the effect that business has on the local environment. The CST program focuses on a company's interaction with natural and cultural resources, the improvement of quality of life within local communities, and the economic contribution to other programs of national development. CST uses a rating system that categorizes a company based on how sustainable its operations are. CST evaluates the interaction between the company and the surrounding habitat; the management policies and operation systems within the company; how the company encourages its clients to become active contributors towards sustainable policies; and the interaction between the company and local communities and the overall population. Based upon these criteria, the company is evaluated for the strength of its sustainability. The measurement index goes from 0 to 5, with 0 being the worst and 5 being the best.
Labels and certification
Over 50 ecolabels on tourism exist. These include (but are not limited to):
Austrian Ecolabel for Tourism
Asian Ecotourism Standard for Accommodations (AESA)
Eco-certification Malta
EarthCheck
Ecotourism Australia
Ecotourism Ireland
Ecotourism Kenya
European Ecotourism Labelling Standard (EETLS)
Korean Ecotourism Standard
Guidelines and education
An environmental protection strategy must address the issue of ecotourists being removed from the causes and effects of their actions on the environment. More initiatives should be carried out to improve their awareness, sensitize them to environmental issues, and encourage them to care about the places they visit.
Tour guides are an obvious and direct medium for communicating awareness. With the confidence of ecotourists and intimate knowledge of the environment, tour guides can actively discuss conservation issues and inform ecotourists about how their actions on the trip can negatively impact the environment and local people. A tour guide training program in Costa Rica's Tortuguero National Park has helped mitigate negative environmental impacts by providing information and regulating tourists on the park's beaches, which are used by nesting endangered sea turtles.
Small scale, slow growth, and local control
The underdevelopment theory of tourism describes a new form of imperialism by multinational corporations that control ecotourism resources. These corporations finance and profit from the development of large-scale ecotourism that causes excessive environmental degradation, loss of traditional culture and way of life, and exploitation of local labor. In Zimbabwe and Nepal's Annapurna region, where underdevelopment is taking place, more than 90 percent of ecotourism revenues are expatriated to the parent countries, and less than 5 percent go into local communities.
The lack of sustainability highlights the need for small-scale, slow-growth, and locally-based ecotourism. Local peoples have a vested interest in the well-being of their community and are therefore more accountable to environmental protection than multinational corporations, though they receive very little of the profits. The lack of control, westernization, adverse impacts to the environment, and loss of culture and traditions outweigh the benefits of establishing large-scale ecotourism. Additionally, culture loss can be attributed to cultural commodification, in which local cultures are commodified to make a profit.
The increased contributions of communities to locally managed ecotourism create viable economic opportunities, including high-level management positions, and reduce environmental issues associated with poverty and unemployment. Because the ecotourism experience is marketed to a different lifestyle from large-scale ecotourism, the development of facilities and infrastructure does not need to conform to corporate Western tourism standards, and can be much simpler and less expensive. There is a greater multiplier effect on the economy, because local products, materials, and labor are used. Profits accrue locally and import leakages are reduced. The Great Barrier Reef Park in Australia reported over half a billion dollars of indirect income in the area and added thousands of indirect jobs between 2004 and 2005. However, even this form of tourism may require foreign investment for promotion or start-up. When such investments are required, communities must find a company or non-governmental organization that reflects the philosophy of ecotourism, is sensitive to their concerns, and is willing to cooperate at the expense of profit.
The basic assumption of the multiplier effect is that the economy starts with unused resources, for example, that many workers are cyclically unemployed and much of industrial capacity is sitting idle or incompletely used. By increasing demand in the economy, it is then possible to boost production. If the economy were already at full employment, with only structural, frictional, or other supply-side types of unemployment, any attempt to boost demand would only lead to inflation. For the various laissez-faire schools of economics which embrace Say's Law and deny the possibility of Keynesian inefficiency and under-employment of resources, the multiplier concept is irrelevant or wrong-headed.
As an example, consider the government increasing its expenditure on roads by $1 million, without a corresponding increase in taxation. This sum would go to the road builders, who would hire more workers and distribute the money as wages and profits. The households receiving these incomes will save part of the money and spend the rest on consumer goods. These expenditures, in turn, will generate more jobs, wages, profits, and so on with the income and spending circulating the economy.
The multiplier effect arises because of the induced increases in consumer spending which occur due to the increased incomes – and because of the feedback into increasing business revenues, jobs, and income again. This process does not lead to an economic explosion not only because of the supply-side barriers at potential output (full employment) but because at each "round", the increase in consumer spending is less than the increase in consumer incomes. That is, the marginal propensity to consume (MPC) is less than one so that each round some extra income goes into saving, leaking out of the cumulative process. Each increase in spending is thus smaller than that of the previous round, preventing an explosion.
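A minimal sketch of the round-by-round logic described above, using the $1 million road-spending example: the MPC value of 0.8 is purely illustrative, while the closed-form multiplier 1/(1 - MPC) is the standard textbook result for this geometric series.

```python
# Simulate the spending rounds: each round, households re-spend MPC of the
# income received, so round sizes form a geometric series that converges
# to injection / (1 - MPC).

MPC = 0.8              # illustrative marginal propensity to consume
injection = 1_000_000  # the $1 million road-building example above

total = 0.0
round_income = float(injection)
while round_income > 1:      # stop once a round adds less than one dollar
    total += round_income
    round_income *= MPC      # saving leaks out; only MPC is re-spent

print(f"Simulated total income generated: ${total:,.0f}")
print(f"Closed-form total, injection / (1 - MPC): ${injection / (1 - MPC):,.0f}")
```

With an MPC of 0.8, the $1 million injection ultimately generates about $5 million of income; a lower MPC (more saving per round) shrinks the multiplier, which is exactly the leakage described above.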
Efforts to preserve ecosystems at risk
Some of the world's most exceptional biodiversity is located in the Galapagos Islands. These islands were designated a UNESCO World Heritage site in 1979, then added to UNESCO's List of World Heritage in Danger in 2007. The International Galapagos Tour Operators Association (IGTOA) is a non-profit dedicated to preserving this unique living laboratory against the challenges of invasive species, human impact, and tourism. For travelers who want to be mindful of the environment and the impact of tourism, it is recommended to use an operator endorsed by a reputable ecotourism organization. In the case of the Galapagos, IGTOA maintains a list of the world's premier Galapagos Islands tour companies dedicated to the lasting protection and preservation of the destination.
Natural resource management
Natural resource management can be used as a specialized tool for the development of ecotourism. There are several places throughout the world where natural resources are abundant but human encroachment on habitats is depleting them. Without sustainable use, these resources are destroyed, and plant and animal species are going extinct. Ecotourism programs can be introduced for the conservation of these resources, and several plans and proper management programs can be introduced so that the resources remain untouched. Many organizations, including nonprofits, and many scientists are working in this field.
Hill areas such as Kurseong in West Bengal are rich in natural resources, with varied flora and fauna, but tourism for business purposes has imperiled the situation. Researchers from Jadavpur University are currently working in this area on the development of ecotourism as a tool for natural resource management.
In Southeast Asia, government and nongovernmental organizations are working together with academics and industry operators to spread the economic benefits of tourism into the kampungs and villages of the region. A recently formed alliance, the South-East Asian Tourism Organization (SEATO), is bringing together these diverse players to discuss resource management concerns.
A 2002 summit held in Quebec led to the 2008 Global Sustainable Tourism Criteria, a collaborative effort between the UN Foundation and other advocacy groups. The criteria, which are voluntary, involve the following standards: "effective sustainability planning, maximum social and economic benefits for local communities, minimum negative impacts on cultural heritage, and minimum negative impacts on the environment." There is, however, no enforcing agency or system of punishment for non-compliance.
Impact on indigenous people and indigenous land
Valorization of Indigenous territories can be important for designation as a protected area, which can deter threats such as deforestation. Ecotourism can also help bring in revenue for Indigenous peoples.
However, there needs to be a proper business plan and organizational structure to ensure that the money generated by ecotourism actually flows towards the Indigenous peoples themselves and towards the protection of the Indigenous territory. Debates around ecotourism focus on how profits from Indigenous lands are enjoyed by international tourist companies that do not share them with the people to whom those lands belong. Ecotourism offers a tourist-appealing experience of the landscape and environment, one that is different from the experience of the residents; it commodifies the lives of Indigenous people and their land, which is unfair to its inhabitants.
Indigenous territories are managed by governmental services (e.g., FUNAI in Brazil), and these governmental services can thus decide whether or not to implement ecotourism in these territories.
Ecotourism can also bring employment to local people (who may be Indigenous people). Protected areas, for instance, require park rangers and staff to maintain and operate the ecolodges and accommodation used by tourists. Also, traditional culture can act as a tourist attraction and create a source of revenue by charging for performances (e.g., traditional dance). Ecotourism can also help mitigate the deforestation that happens when local residents, under economic stress, clear lands and create smallholder plots to grow cash crops. Such land clearing hurts the environment. Ecotourism can be a sustainable and job-creating alternative for local populations.
Depending on how protected areas are set up and handled, they can lead to local people losing their homes, usually with no compensation. Pushing people onto marginal lands with harsh climates, poor soils, lack of water, and infested with livestock and disease does little to enhance livelihoods, even when a proportion of ecotourism profits are directed back into the community. Harsh survival realities and deprivation of traditional use of land and natural resources by local people can occur. Local Indigenous people may also feel strong resentment towards the change, especially if tourism has been allowed to develop with virtually no controls. Without sufficient control mechanisms, too many lodges may be built, tourist vehicles may drive off-track and harass the wildlife, and vehicle use may erode and degrade the land.
There is a longstanding failure by the Peruvian government to acknowledge and protect Indigenous lands, and therefore the Indigenous peoples have been forced to protect their own land. The land has a better chance of staying safe and free from deforestation if the people who care about the land are the ones maintaining it.
Criticism
Definition
In the continuum of tourism activities that stretches from conventional tourism to ecotourism, there has been much contention over the point at which biodiversity preservation, local socio-economic benefits, and environmental impact can be considered "ecotourism". For this reason, environmentalists, special interest groups, and governments define ecotourism differently. Environmental organizations have generally insisted that ecotourism is nature-based, sustainably managed, conservation-supporting, and environmentally educated. The tourist industry and governments, however, focus more on the product aspect, treating ecotourism as equivalent to any sort of tourism based in nature. As a further complication, many terms are used under the rubric of ecotourism. Nature tourism, low-impact tourism, green tourism, bio-tourism, ecologically responsible tourism, and others have been used in literature and marketing, although they are not necessarily synonymous with ecotourism.
The problems associated with defining ecotourism have often led to confusion among tourists and academics. Many of these problems are also the subject of considerable public controversy and concern because of greenwashing, a trend towards the commercialization of tourism schemes disguised as sustainable, nature-based, and environmentally friendly ecotourism. According to McLaren, such schemes are, at their worst, environmentally destructive, economically exploitative, and culturally insensitive. They are also morally disconcerting because they mislead tourists and manipulate their concerns for the environment. The development and success of such large-scale, energy-intensive, and ecologically unsustainable schemes are a testament to the tremendous profits associated with being labeled as ecotourism.
Negative impact
Ecotourism has become one of the fastest-growing sectors of the tourism industry. One definition of ecotourism is "the practice of low-impact, educational, ecologically and culturally sensitive travel that benefits local communities and host countries". Many ecotourism projects are not meeting these standards, and even where some of the guidelines are being followed, local communities still face many of the negative impacts.

Another negative side of ecotourism is that it transforms nature and the environment into commodities that people are interested in paying for and visiting. When the environment becomes a product with economic value, people try to advertise and sell it. Some ecotourism sites have been turned over to the private sector after governments cut off their funding; hence, they are obligated to make money on their own. Private natural parks and sites look to their own advantage by advertising the soundness of natural parks or coastal marine areas in the Caribbean. They try to show they are protecting nature in order to attract people interested in ecotourism. However, when they prioritize their profits, they focus on the phenomena that might be more interesting for tourists and neglect other aspects of nature. Consequently, this policy results in abandoning rich ecological sites or destroying those valuable sites. For example, in Montego Bay, hotel staff cut the seagrass that appeared to drive back tourists, even though seagrasses are crucial for local nutrient cycles.
The other problem is that companies try to hide the truth behind ecotourism to maintain their profit. They do not disclose the fact that traveling from other countries to the natural sites burns extensive amounts of aircraft fuel. In Montego Bay and Negril, a considerable amount of run-off produced directly or indirectly by ecotourists is released into the coastal water. Hotels in Jamaica release much more wastewater than a city, and the tourists generate a lot of waste that ends up in the coastal water. An indirect effect of ecotourism in Jamaica is that many people migrated to the towns near the natural sites because of the job opportunities created by increased construction, resulting in environmental destruction. South Africa is one of the countries reaping significant economic benefits from ecotourism, but the negative effects, including forcing people to leave their homes, gross violations of fundamental rights, and environmental hazards, far outweigh the medium-term economic benefits. A tremendous amount of money and human resources continue to be used for ecotourism despite unsuccessful outcomes, and even more money is put into public relations campaigns to dilute the effects of criticism. Ecotourism channels resources away from other projects that could contribute more sustainable and realistic solutions to pressing social and environmental problems. "The money tourism can generate often ties parks and managements to ecotourism". But there is a tension in this relationship, because ecotourism often causes conflict and changes in land-use rights, fails to deliver promises of community-level benefits, damages environments, and has many other social impacts. Indeed, many argue repeatedly that ecotourism is neither ecologically nor socially beneficial, yet it persists as a strategy for conservation and development due to the large profits. While several studies are being done on ways to improve the ecotourism structure, some argue that these examples provide a rationale for stopping it altogether. However, there are some positive examples, among them the Kavango-Zambezi Transfrontier Conservation Area (KAZA) and the Virunga National Park, as judged by WWF.
The ecotourism system exercises tremendous financial and political influence. The evidence above shows that a strong case exists for restraining such activities in certain locations. Funding could instead be used for field studies aimed at finding alternative solutions to tourism and to the diverse problems Africa faces as a result of urbanization, industrialization, and the overexploitation of agriculture.
At the local level, ecotourism has become a source of conflict over control of land, resources, and tourism profits. In this case, ecotourism has harmed the environment and local people and has led to conflicts over profit distribution. Very few regulations or laws stand in place as boundaries for the investors in ecotourism. Calls have been made for more efforts toward educating tourists of the environmental and social effects of their travels, and for laws to prohibit the promotion of unsustainable ecotourism projects and materials which project false images of destinations and demean local and Indigenous cultures.
Though conservation efforts in East Africa are indisputably serving the interests of tourism in the region, it is important to make the distinction between conservation acts and the tourism industry. Eastern African communities are not the only developing regions to experience economic and social harm from conservation efforts. Conservation in the Southwest Yunnan Region of China has similarly brought drastic changes to traditional land use in the region. Prior to logging restrictions imposed by the Chinese government, the industry made up 80 percent of the region's revenue. Following a complete ban on commercial logging, the Indigenous people of the Yunnan region now see little opportunity for economic development. Ecotourism may provide solutions to the economic hardships suffered from the loss of industry to conservation in the Yunnan, in the same way that it may serve to remedy the difficulties faced by the Maasai. As stated, the ecotourism structure must be improved to direct more money into host communities by reducing leakages if the industry is to be successful in alleviating poverty in developing regions, but it provides a promising opportunity.
Drumm and Moore (2002) discuss price increases and economic leakage in their paper, noting that prices may rise because visitors are better able than locals to pay higher rates for goods and services. They mention two solutions to this issue: (1) a two-tier pricing system, represented as two separate price lists (the first for locals and the second for tourists, scaled to locals' purchasing power); or (2) designing unique goods and services intended only for tourists' consumption. Leakage appears when international investors import foreign products instead of using local resources; the tourists then consume international products and in turn contribute to the outside economy rather than the local one (Drumm & Moore, 2002).
Direct environmental impacts
Ecotourism operations occasionally fail to live up to conservation ideals. It is sometimes overlooked that ecotourism is a highly consumer-centered activity, and that environmental conservation is a means to further economic growth.
Although ecotourism is intended for small groups, even a modest increase in population, however temporary, puts extra pressure on the local environment and necessitates the development of additional infrastructure and amenities. The construction of water treatment plants, sanitation facilities, and lodges comes with the exploitation of non-renewable energy sources and the use of already limited local resources. The conversion of natural land to such tourist infrastructure is implicated in deforestation and habitat deterioration of butterflies in Mexico and squirrel monkeys in Costa Rica. In other cases, the environment suffers because local communities are unable to meet the infrastructure demands of ecotourism. The lack of adequate sanitation facilities in many East African parks results in the disposal of campsite sewage in rivers, contaminating the wildlife, livestock, and people who draw drinking water from them.
Aside from the environmental degradation that comes with tourist infrastructure, population pressures from ecotourism also leave behind garbage and pollution associated with the Western lifestyle. An example of this is seen with ecotourism in Antarctica. Since it is such a remote location, it takes a lot of fuel to get there, and ships produce substantial pollution through waste disposal and greenhouse gas emissions. Additionally, there is a potential for oil spills from ships damaged while traversing waters filled with natural obstacles such as icebergs. Although ecotourists claim to be educationally sophisticated and environmentally concerned, they rarely understand the ecological consequences of their visits and how their day-to-day activities add physical impacts on the environment. As one scientist observes, they "rarely acknowledge how the meals they eat, the toilets they flush, the water they drink, and so on, are all part of broader regional economic and ecological systems they are helping to reconfigure with their very activities." Nor do ecotourists recognize the great consumption of non-renewable energy required to arrive at their destination, which is typically more remote than conventional tourism destinations. For instance, an exotic journey to a place 10,000 kilometers away consumes about 700 liters of fuel per person.
Ecotourism activities are, in and of themselves, issues in environmental impact because they may disturb fauna and flora. Ecotourists believe that because they are only taking pictures and leaving footprints, they keep ecotourism sites pristine, but even harmless-sounding activities such as nature hikes can be ecologically destructive. In the Annapurna Circuit in Nepal, ecotourists have worn down the marked trails and created alternate routes, contributing to soil impaction, erosion, and plant damage. Where the ecotourism activity involves wildlife viewing, it can scare away animals, disrupt their feeding and nesting sites, or acclimate them to the presence of people. In Kenya, wildlife-observer disruption drives cheetahs off their reserves, increasing the risk of inbreeding and further endangering the species. In a study conducted from 1995 to 1997 off the northwestern coast of Australia, scientists found that whale sharks' tolerance for divers and swimmers decreased. Over the course of the study, the whale sharks showed an increase in behaviors such as diving, porpoising, banking, and eye rolling that are associated with distress and attempts to avoid divers. The average time the whale sharks spent with divers was 19.3 minutes in 1995 but only 9.5 minutes in 1997. There was also an increase in recorded behaviors, from 56% of the sharks showing any sort of diving, porpoising, eye rolling, or banking in 1995 to 70.7% in 1997. Some whale sharks were also observed to have scars consistent with being struck by a boat.
Environmental hazards
The industrialization, urbanization, and agricultural practices of human society are having a serious impact on the environment. Ecotourism is now also considered to be playing a role in environmental depletion, including deforestation, disruption of ecological life systems, and various forms of pollution, all of which contribute to environmental degradation. For example, the number of motor vehicles crossing a park increases as tour drivers search for rare species. The proliferation of roads disrupts the grass cover, which has serious consequences for plant and animal species. Such areas also have a higher rate of disturbances and invasive species because of increasing traffic off the beaten path into new, undiscovered areas. Ecotourism also has an effect on species through the value placed on them. "Certain species have gone from being little known or valued by local people to being highly valued commodities. The commodification of plants may erase their social value and lead to overproduction within protected areas. Local people and their images can also be turned into commodities". Kamuaro points out the relatively obvious contradiction that any commercial venture into unspoiled, pristine land inevitably means a higher pressure on the environment. The people who live in the areas now becoming ecotourism spots have very different lifestyles from those who come to visit. Ecotourism has created many debates about whether the economic benefits are worth the possible environmental sacrifices.
Who benefits?
Most forms of ecotourism are owned by foreign investors and corporations that provide few benefits to the local people. An overwhelming majority of profits are put into the pockets of investors instead of being reinvested in the local economy or environmental protection, leading to further environmental degradation. The limited number of local people who are employed in the economy enter at its lowest level and are unable to live in tourist areas because of meager wages and a two-market system.
In some cases, the resentment of local people results in environmental degradation. In a highly publicized case, the Maasai nomads in Kenya killed wildlife in national parks to show aversion to unfair compensation terms and displacement from traditional lands, though they now help the parks save wildlife. The lack of economic opportunities for local people also constrains them to degrade the environment as a means of sustenance. The presence of affluent ecotourists encourages the development of destructive markets in wildlife souvenirs, such as the sale of coral trinkets on tropical islands and animal products in Asia, contributing to illegal harvesting and poaching from the environment. In Suriname, sea turtle reserves use a very large portion of their budget to guard against these destructive activities.
Eviction of Indigenous peoples
Fortress conservation is a conservation model based on the belief that biodiversity protection is best achieved by creating protected areas where ecosystems can function in isolation from human disturbance. It is argued that money generated from ecotourism is the motivating factor to drive Indigenous inhabitants off the land. Up to 250,000 people worldwide have been forcibly evicted from their homes to make way for conservation projects since 1990, according to the UN special rapporteur on the rights of Indigenous peoples.
Mismanagement by government
While governments are typically entrusted with the administration and enforcement of environmental protection, they often lack the commitment or capability to manage ecotourism sites. The regulations for environmental protection may be vaguely defined, costly to implement, hard to enforce, and uncertain in effectiveness. Government regulatory agencies are susceptible to making decisions that spend on politically beneficial but environmentally unproductive projects. Because of prestige and conspicuousness, the construction of an attractive visitor's center at an ecotourism site may take precedence over more pressing environmental concerns like acquiring habitat, protecting endemic species, and removing invasive ones. Finally, influential groups can pressure and sway the interests of the government in their favor. The government and its regulators can become vested in the benefits of the ecotourism industry they are supposed to regulate, causing restrictive environmental regulations and their enforcement to become more lenient.
Management of ecotourism sites by private ecotourism companies offers an alternative to the cost of regulation and deficiency of government agencies. It is believed that these companies have a self-interest in limited environmental degradation because tourists will pay more for pristine environments, which translates to higher profit. However, theory indicates that this practice is not economically feasible and will fail to manage the environment.
The model of monopolistic competition states that distinctiveness will entail profits, but profits will promote imitation. A company that protects its ecotourism sites is able to charge a premium for the novel experience and pristine environment. But when other companies view the success of this approach, they also enter the market with similar practices, increasing competition and reducing demand. Eventually, the demand will be reduced until the economic profit is zero. A cost-benefit analysis shows that the company bears the cost of environmental protection without receiving the gains. Without economic incentive, the whole premise of self-interest through environmental protection is quashed; instead, ecotourism companies will minimize environment-related expenses and maximize tourism demand.
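Stated compactly (an illustrative gloss, not from the source): under free entry, imitation continues until price falls to average cost, so that

$$ \pi = \bigl(p - AC(q)\bigr)\,q = 0 \quad \text{once} \quad p = AC(q), $$

leaving no premium with which to fund environmental protection.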
The tragedy of the commons offers another model for economic unsustainability from environmental protection, in ecotourism sites used by many companies. Although there is a communal incentive to protect the environment, maximizing the benefits in the long run, a company will conclude that it is in their best interest to use the ecotourism site beyond its sustainable level. By increasing the number of ecotourists, for instance, a company gains all the economic benefit while paying only a part of the environmental cost. In the same way, a company recognizes that there is no incentive to actively protect the environment; they bear all the costs, while the benefits are shared by all other companies. The result, again, is mismanagement.
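The shared-cost incentive just described can be made concrete with a small numeric sketch. This is purely illustrative: the revenue figure, the quadratic cost form, the firm count, and all names are assumptions made for the example, not values from the source.

```python
# Illustrative sketch of the shared-cost logic described above.
# All numbers are hypothetical. Each tourist's revenue accrues to one
# firm, while the environmental cost of total visitation is shared
# equally by all firms using the site.

REVENUE_PER_TOURIST = 100.0  # assumed revenue a firm earns per visitor
COST_COEFF = 0.5             # assumed environmental cost, quadratic in total load
N_FIRMS = 10

def private_profit(own, others_total):
    """One firm's profit: full revenue, but only 1/N of the shared cost."""
    total = own + others_total
    shared_cost = COST_COEFF * total ** 2 / N_FIRMS
    return REVENUE_PER_TOURIST * own - shared_cost

def social_welfare(per_firm):
    """Combined profit of all firms when each admits `per_firm` tourists."""
    total = N_FIRMS * per_firm
    return REVENUE_PER_TOURIST * total - COST_COEFF * total ** 2

# Each firm best-responds assuming the other nine admit 100 tourists each.
best_private = max(range(500), key=lambda own: private_profit(own, 900))
best_social = max(range(500), key=social_welfare)

print("privately optimal tourists per firm:", best_private)  # -> 100
print("socially optimal tourists per firm: ", best_social)   # -> 10
```

Because each firm here pays only a tenth of the marginal environmental cost it creates, its privately optimal visitor count comes out ten times the socially optimal one, which is the over-use result the paragraph above describes.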
Taken together, the mobility of foreign investment and lack of economic incentive for environmental protection means that ecotourism companies are disposed to establishing themselves in new sites once their existing one is sufficiently degraded.
In addition, a systematic literature review conducted by Cabral and Dhar (2019) identified several challenges stemming from the slow progression of ecotourism initiatives, such as (a) economic leakages, (b) lack of government involvement, (c) skill deficiency among local communities, (d) absence of environmental education dissemination, (e) sporadic increases in pollution, (f) conflict between tourism-management personnel and local communities, and (g) inadequate infrastructure development.
Case studies
The purpose of ecotourism is to engage tourists in low-impact, non-consumptive, and locally oriented environments in order to maintain species and habitats, especially in underdeveloped regions. While some ecotourism projects, including some found in the United States, can support such claims, many projects have failed to address some of the fundamental issues that nations face in the first place. Consequently, ecotourism may not generate the very benefits it is intended to provide to these regions and their people, and in some cases it leaves economies in a state worse than before.
The following case studies illustrate the rising complexity of ecotourism and its impacts, both positive and negative, on the environment and economies of various regions in the world.
Ecotourism in Costa Rica
Ecotourism in Jordan
Ecotourism in South Africa
Ecotourism in the United States
See also
Overtourism
References
Further reading
Ceballos-Lascurain, H. 1996. Tourism, Ecotourism, and Protected Areas. IUCN (The International Union for the Conservation of Nature). 301 pp.
Larkin, T. and K. N. Kähler. 2011. "Ecotourism." Encyclopedia of Environmental Issues. Rev. ed. Pasadena: Salem Press. Vol. 2, pp. 421–424.
Ceballos-Lascurain, H. 1998. Ecoturismo. Naturaleza y Desarrollo Sostenible.
Reguero Oxide, M. del. 1995. Ecoturismo. Nuevas Formas de Turismo en el Espacio rural. Ed. Bosch Turismo
External links
https://ecotourism.org/what-is-ecotourism/
Adventure travel
Ecological economics
Natural resource management
Types of tourism
Sustainability science
Sustainability science first emerged in the 1980s and has become a new academic discipline.
Similar to agricultural science or health science, it is an applied science defined by the practical problems it addresses. Sustainability science focuses on issues relating to sustainability and sustainable development as core parts of its subject matter. It is "defined by the problems it addresses rather than by the disciplines it employs" and "serves the need for advancing both knowledge and action by creating a dynamic bridge between the two".
Sustainability science draws upon the related but not identical concepts of sustainable development and environmental science. Sustainability science provides a critical framework for sustainability while sustainability measurement provides the evidence-based quantitative data needed to guide sustainability governance.
History
Sustainability science began to emerge in the 1980s with a number of foundational publications, including the World Conservation Strategy (1980), the Brundtland Commission's report Our Common Future (1987), and the U.S. National Research Council's Our Common Journey (1999).
This new field of science was officially introduced with a "Birth Statement" at the World Congress "Challenges of a Changing Earth 2001" in Amsterdam organized by the International Council for Science (ICSU), the International Geosphere-Biosphere Programme (IGBP), the International Human Dimensions Programme on Global Environmental Change and the World Climate Research Programme (WCRP).
The field reflects a desire to give the generalities and broad-based approach of "sustainability" a stronger analytic and scientific underpinning as it "brings together scholarship and practice, global and local perspectives from north and south, and disciplines across the natural and social sciences, engineering, and medicine". Ecologist William C. Clark proposes that it can be usefully thought of as "neither 'basic' nor 'applied' research but as a field defined by the problems it addresses rather than by the disciplines it employs" and that it "serves the need for advancing both knowledge and action by creating a dynamic bridge between the two".
Definition
Definitions of sustainability science are as varied as the definitions of sustainability and sustainable development themselves. In an overview on its website in 2008, the Sustainability Science Program at Harvard University stressed the field's problem-driven character.
Sustainability science is problem-driven: it is defined by the problems it addresses rather than by the disciplines it employs, and it draws as much from practice as from theory. Susan W. Kieffer and colleagues, in 2003, suggested that sustainability science requires minimizing the adverse consequences of human activity so that humanity does not ultimately become a terminal threat to the Earth systems on which it depends.
According to some proponents of such 'new paradigms', definitions of the field must also confront the respects in which contemporary civilization is unsustainable.
While scholars argue over individual definitions of unsustainability, some call for attention to the unsustainability of consumption patterns in industrialized economies. In a 2012 commentary, "Sustainability Science Needs to Include Sustainable Consumption", published in Environment: Science and Policy for Sustainable Development, Halina Brown argued that the field must address the scale and structure of material consumption, not only production.
Broad objectives
Scientific research and development was identified as an important component of the sustainable development strategies embraced and promoted by the Brundtland Commission's report Our Common Future, by Agenda 21 from the United Nations Conference on Environment and Development, and by the World Summit on Sustainable Development.
The topics of the following sub-headings tick off some of the recurring themes addressed in the literature of sustainability science, as collected in the compendium Readings in Sustainability Science and Technology, edited by Robert Kates with a preface by William Clark (see Further reading). The 2012 commentary by Halina Brown extensively expands that scope. This is work in progress. The Encyclopedia of Sustainability was created as a collaboration of scholars to provide peer-reviewed entries covering sustainability policy evaluations.
Knowledge structuring of issues
Knowledge structuring is an essential foundation for the effort to acquire a comprehensive understanding of sustainability issues, which are complexly inter-connected. It is needed as a response to the demands of researchers, practitioners, and governments.
Coordination of data
The data relevant to sustainability are sourced from many fields. A major part of knowledge structuring will entail building the tools to provide an "overview" of these data. Sustainability scientists can construct and coordinate a framework within which data from diverse sources are organized and disseminated.
Inter-disciplinary approaches
The attempt by sustainability science to understand integrated "whole" systems requires cooperation between researchers across disciplines, and often beyond national boundaries. One major task of sustainability science is therefore to facilitate such integrated, cross-disciplinary coordination.
Contents
Geoscience
Geoscience is the study of the Earth. Geoscience broadly includes geology, hydrology, geological engineering, volcanology, and environmental geology, and it increasingly intersects with sustainability science.
Geology and Sustainable Development Goals
Geologists are crucial to the sustainability movement. They hold special knowledge and a deep understanding of how the Earth recycles materials and maintains its own systems, and the notable changes in geologic processes between the pre-human Earth and the present make the relationship between geology and sustainability a very old one. There is, however, a tension in this relationship: geologists have not always been central to sustainability discussions. One reason is that researchers continue to disagree about the Anthropocene Epoch and about whether humans possess the capacity to adapt to the environmental changes under way. Nevertheless, the two fields are increasingly being linked, for example through the Sustainable Development Goals. These evolving goals, however, only occasionally overlap with the day-to-day occupations of geologists outside government departments.
Geology is essential to understanding many of modern civilization's environmental challenges, and it plays a major role in determining whether humans can live sustainably on Earth. Having much to do with energy, water, climate change, and natural hazards, geology interprets and helps solve a wide variety of problems. However, relatively few geologists currently direct their work toward a sustainable future, and many work for oil, gas, or mining companies, which are typically poor avenues for sustainability. To be sustainability-minded, geologists can collaborate with other Earth and life sciences, for example ecology, zoology, physical geography, biology, and environmental science, so as to understand the impact their work has on the planet. By working with more fields of study and broadening their knowledge of the environment, geologists can make their work more environmentally conscious.
To ensure that sustainability and geoscience maintain their momentum, schools worldwide can make an effort to incorporate geoscience into their curricula, and society can incorporate the international development goals. A common misconception is that geology is merely the study of rocks; it is more complex than that, being the study of the Earth, the ways it works, and what that means for life. Understanding Earth processes opens many doors to understanding how humans affect the planet and how to protect it. As more students come to understand this field of study, it becomes easier to incorporate global development goals and continue to improve the planet.
Journals
Consilience: The Journal of Sustainable Development, semiannual journal published since 2009, now "in partnership with Columbia University Libraries".
International Journal of Sustainable Development & World Ecology, journal with six issues per year, published since 1994 by Taylor & Francis.
Surveys and Perspectives Integrating Environment & Society (S.A.P.I.EN.S.), semiannual journal published by Veolia Environment 2008–15. A notable essay on sustainability indicators by Paul-Marie Boulanger appeared in the first issue.
Sustainability Science, journal launched by Springer in June 2006.
Sustainability: Science, Practice, Policy, an open-access journal launched in March 2005 and published by Taylor & Francis.
Sustainability: The Journal of Record, bimonthly journal published by Mary Ann Liebert, Inc. beginning in December 2007.
A section dedicated to sustainability in the multi-disciplinary journal Proceedings of the National Academy of Sciences launched in 2006.
GAIA: Ecological Perspectives for Science and Society / GAIA: Ökologische Perspektiven für Wissenschaft und Gesellschaft, a quarterly inter- and trans-disciplinary journal for scientists and other interested parties concerned with the causes and analyses of environmental and sustainability problems and their solutions. Launched in 1992 and published by oekom verlag on behalf of GAIA Society – Konstanz, St. Gallen, Zurich.
List of sustainability science programs
In recent years, more and more university degree programs have developed formal curricula which address issues of sustainability science and global change:
Undergraduate programmes in sustainability science
Graduate degree programmes in sustainability science
Post Graduate Diploma in Sustainability Science, Indira Gandhi National Open University, New Delhi, India (Asia)
See also
Citizen science
Computational Sustainability
Ecological modernization
Environmental sociology
Glossary of environmental science
List of environmental degrees
List of environmental organisations
List of sustainability topics
Sustainability studies
References
Further reading
Bernd Kasemir, Jill Jager, Carlo C. Jaeger, and Matthew T. Gardner (eds) (2003). Public participation in sustainability science, a handbook. Cambridge University Press, Cambridge.
Kates, Robert W., ed. (2010). Readings in Sustainability Science and Technology. CID Working Paper No. 213. Center for International Development, Harvard University. Cambridge, MA: Harvard University, December 2010. Abstract and PDF file available on the Harvard Kennedy School website
Jackson, T. (2009). Prosperity Without Growth: Economics for a Finite Planet. London: Earthscan.
Brown, Halina Szejnwald (2012). "Sustainability Science Needs to Include Sustainable Consumption". Environment: Science and Policy for Sustainable Development 54: 20–25
Mino, Takashi and Shogo Kudo (eds.) (2019). Framing in Sustainability Science. Singapore: Springer.
Sustainability
Environmental social science
Basic research
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to innovations, basic research also provides insight into the nature around us and allows us to respect its innate value. The development of this respect is what drives conservation efforts. Through learning about the environment, conservation efforts can be strengthened using research as a basis. Technological innovations can be created unintentionally through this as well, as when kingfishers' beaks influenced the design of high-speed bullet trains in Japan.
Overview
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
History
By country
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
Basic versus applied science
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in the natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which observed: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." It conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The amount of basic research that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
See also
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
References
Further reading
Research
Holism in science
Holism in science, holistic science, or methodological holism is an approach to research that emphasizes the study of complex systems. Systems are approached as coherent wholes whose component parts are best understood in context and in relation to both each other and to the whole. Holism typically stands in contrast with reductionism, which describes systems by dividing them into smaller components in order to understand them through their elemental properties.
The holism-individualism dichotomy is especially evident in conflicting interpretations of experimental findings across the social sciences, and reflects whether behavioural analysis begins at the systemic, macro-level (i.e., derived from social relations) or the component micro-level (i.e., derived from individual agents).
Overview
David Deutsch calls holism anti-reductionist and characterizes it as the view that the only legitimate way to think about science is as a series of emergent, or higher-level, phenomena. He argues that neither approach, holism or reductionism, is purely correct.
Two aspects of holism are:
The way of doing science, sometimes called "whole to parts", which focuses on observation of the specimen within its ecosystem first before breaking down to study any part of the specimen.
The idea that the scientist is not a passive observer of an external universe but rather a participant in the system.
Proponents claim that holistic science is naturally suited to subjects such as ecology, biology, physics and the social sciences, where complex, non-linear interactions are the norm. These are systems where emergent properties arise at the level of the whole that cannot be predicted by focusing on the parts alone, which may make mainstream, reductionist science ill-equipped to provide understanding beyond a certain level. This principle of emergence in complex systems is often captured in the phrase 'the whole is greater than the sum of its parts'. Living organisms are an example: no knowledge of all the chemical and physical properties of matter can explain or predict the functioning of living organisms. The same happens in complex social human systems, where detailed understanding of individual behaviour cannot predict the behaviour of the group, which emerges at the level of the collective. The phenomenon of emergence may impose a theoretical limit on knowledge available through reductionist methodology, arguably making complex systems natural subjects for holistic approaches.
Science journalist John Horgan has expressed this view in the book The End of Science. He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
One of the reasons that holistic science attracts supporters is that it seems to offer a progressive, 'socio-ecological' view of the world, but Alan Marshall's book The Unity of Nature offers evidence to the contrary; suggesting holism in science is not 'ecological' or 'socially-responsive' at all, but regressive and repressive.
Examples in various fields of science
Physical science
Agriculture
Permaculture takes a systems level approach to agriculture and land management by attempting to copy what happens in the natural world. Holistic management integrates ecology and social sciences with food production. It was originally designed as a way to reverse desertification. Organic farming is sometimes considered a holistic approach.
In physics
Richard Healey offered a modal interpretation and used it to present a model account of the puzzling correlations which portrays them as resulting from the operation of a process that violates both spatial and spatiotemporal separability. He argued that, on this interpretation, the nonseparability of the process is a consequence of physical property holism, and that the resulting account yields genuine understanding of how the correlations come about without any violation of relativity theory or Local Action. Subsequent work by Clifton, Dickson, and Myrvold cast doubt on whether the account can be squared with relativity theory's requirement of Lorentz invariance, but it leaves no doubt of a spatially entangled holism in the theory. Paul Davies and John Gribbin further observe that Wheeler's delayed choice experiment shows how the quantum world displays a sort of holism in time as well as space.
In the holistic approach of David Bohm, any collection of quantum objects constitutes an indivisible whole within an implicate and explicate order. Bohm said there is no scientific evidence to support the dominant view that the universe consists of a huge, finite number of minute particles, and offered instead a view of undivided wholeness: "ultimately, the entire universe (with all its 'particles', including those constituting human beings, their laboratories, observing instruments, etc.) has to be understood as a single undivided whole, in which analysis into separately and independently existent parts has no fundamental status".
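As an illustrative aside (not drawn from the sources cited above), the nonseparability at issue can be stated precisely. The spin singlet state of two particles,

$$ |\psi\rangle = \tfrac{1}{\sqrt{2}}\left( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \right), $$

cannot be written as any product $|\phi\rangle_A \otimes |\chi\rangle_B$ of single-particle states: the pair has a definite joint property (total spin zero) while neither particle has a definite spin of its own. This is a formal sense in which the quantum whole is more than the sum of its parts.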
Chaos and complexity
Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that behavior of such systems might be computationally irreducible, which means it would not be possible to even approximate the system state without a full simulation of all the events occurring in the system. Key properties of the higher level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading predictions except by brute force simulation.
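As a sketch of what computational irreducibility looks like in practice (an illustrative aside, not from the source): elementary cellular automata such as Rule 110 are standard examples of systems conjectured to be computationally irreducible, meaning that in general the configuration at step t can only be obtained by simulating all t intermediate steps.

```python
# Minimal Rule 110 cellular automaton. Each step depends on the whole
# previous configuration; no known shortcut predicts step t without
# computing every step before it, which illustrates the idea of
# computational irreducibility mentioned above.

def rule110_step(cells):
    """Apply one synchronous Rule 110 update to a list of 0/1 cells."""
    table = {  # neighborhood (left, center, right) -> new state
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = rule110_step(cells)
```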
Ecology
Holistic thinking can be applied to ecology, combining biological, chemical, physical, economic, ethical, and political insights. The complexity grows with the area under study, so it is necessary to narrow the scope of the analysis in other ways, for example by restricting it to a specific duration of time.
Medicine
In primary care, the term "holistic" has been used to describe approaches that take into account social considerations and other intuitive judgements. The term holism, and so-called holistic approaches, appeared in psychosomatic medicine in the 1970s, when they were considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice versa, the aim was a systemic model in which multiple biological, psychological, and social factors were seen as interlinked.
Other, alternative approaches in the 1970s were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively. At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes.
The term systems medicine first appeared in 1992 and takes an integrative approach to all of the body and environment.
Social science
Economics
Some economists use a causal holism theory in their work. That is, they view the discipline in the manner of Ludwig Wittgenstein and claim that it cannot be defined by necessary and sufficient conditions.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which it is claimed may be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple-choice tests, a standards-based assessment uses trained scorers to score open-response items using holistic scoring methods. In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or to count numbers of points or supporting statements. The scorer is instead instructed to judge holistically whether, as a whole, the response is more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Anthropology
Anthropology is holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.) Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.
Some anthropologists disagree, and consider holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.
The term "holism" is additionally used within social and cultural anthropology to refer to a methodological analysis of a society as a whole, in which component parts are treated as functionally relative to each other. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."
Psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground. Background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer, and Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of holistic psychologists, such as the work of Kurt Goldstein, in his Phenomenology of Perception.
Teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal), must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships), to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy.
Edgar Morin, the French philosopher and sociologist, can be considered a holist based on the transdisciplinary nature of his work.
Skeptical reception
According to skeptics, the phrase "holistic science" is often misused by pseudosciences. In the book Science and Pseudoscience in Clinical Psychology it is noted that "Proponents of pseudoscientific claims, especially in organic medicine, and mental health, often resort to the 'mantra of holism' to explain away negative findings. When invoking the mantra, they typically maintain that scientific claims can be evaluated only within the context of broader claims and therefore cannot be evaluated in isolation." This is an invocation of Karl Popper's demarcation problem, and in a posting to Ask a Philosopher, Massimo Pigliucci clarifies Popper by positing, "Instead of thinking of science as making progress by inductive generalization (which doesn't work because no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary, data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong."
Victor J. Stenger states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well".
Some quantum mystics interpret the wave function of quantum mechanics as a vibration in a holistic ether that pervades the universe, and wave function collapse as the result of some cosmic consciousness. This misreads the effects of quantum entanglement as a violation of relativistic causality, and misrepresents quantum field theory.
See also
Antireductionism
Emergence
Holarchy
Holism
Holism in ecological anthropology
Holistic management
Holistic health
Holon (philosophy)
Interdisciplinarity
Organicism
Scientific reductionism
Systems thinking
References
Further reading
Article "Patterns of Wholeness: Introducing Holistic Science" by Brian Goodwin, from the journal Resurgence
Article "From Control to Participation" by Brian Goodwin, from the journal Resurgence
Complex systems theory
Holism
Systems theory
Praxis (process)
Praxis is the process by which a theory, lesson, or skill is enacted, embodied, realized, applied, or put into practice. "Praxis" may also refer to the act of engaging, applying, exercising, realizing, or practising ideas. This has been a recurrent topic in the field of philosophy, discussed in the writings of Plato, Aristotle, St. Augustine, Francis Bacon, Immanuel Kant, Søren Kierkegaard, Ludwig von Mises, Karl Marx, Antonio Gramsci, Martin Heidegger, Hannah Arendt, Jean-Paul Sartre, Paulo Freire, Murray Rothbard, and many others. It has meaning in the political, educational, spiritual and medical realms.
Origins
In Ancient Greek the word praxis (πρᾶξις) referred to activity engaged in by free people. The philosopher Aristotle held that there were three basic activities of humans: theoria (thinking), poiesis (making), and praxis (doing). Corresponding to these activities were three types of knowledge: theoretical, the end goal being truth; poietical, the end goal being production; and practical, the end goal being action. Aristotle further divided the knowledge derived from praxis into ethics, economics, and politics. He also distinguished between eupraxia (εὐπραξία, "good praxis") and dyspraxia (δυσπραξία, "bad praxis, misfortune").
Marxism
Young Hegelian August Cieszkowski was one of the earliest philosophers to use the term praxis to mean "action oriented towards changing society" in his 1838 work Prolegomena zur Historiosophie (Prolegomena to a Historiosophy). Cieszkowski argued that while absolute truth had been achieved in the speculative philosophy of Hegel, the deep divisions and contradictions in man's consciousness could only be resolved through concrete practical activity that directly influences social life. Although there is no evidence that Karl Marx himself read this book, it may have had an indirect influence on his thought through the writings of his friend Moses Hess.
Marx uses the term "praxis" to refer to the free, universal, creative and self-creative activity through which man creates and changes his historical world and himself. Praxis is an activity unique to man, which distinguishes him from all other beings. The concept appears in two of Marx's early works: the Economic and Philosophical Manuscripts of 1844 and the Theses on Feuerbach (1845). In the former work, Marx contrasts the free, conscious productive activity of human beings with the unconscious, compulsive production of animals. He also affirms the primacy of praxis over theory, claiming that theoretical contradictions can only be resolved through practical activity. In the latter work, revolutionary practice is a central theme.
Marx here criticizes the materialist philosophy of Ludwig Feuerbach for envisaging objects in a contemplative way. Marx argues that perception is itself a component of man's practical relationship to the world. To understand the world does not mean considering it from the outside, judging it morally or explaining it scientifically. Society cannot be changed by reformers who understand its needs, only by the revolutionary praxis of the mass whose interest coincides with that of society as a whole—the proletariat. This will be an act of society understanding itself, in which the subject changes the object by the very fact of understanding it.
Seemingly inspired by the Theses, the nineteenth-century socialist Antonio Labriola called Marxism the "philosophy of praxis". This description of Marxism would appear again in Antonio Gramsci's Prison Notebooks and the writings of the members of the Frankfurt School. Praxis is also an important theme for Marxist thinkers such as Georg Lukacs, Karl Korsch, Karel Kosik and Henri Lefebvre, and was seen as the central concept of Marx's thought by Yugoslavia's Praxis School, which established a journal of that name in 1964.
Jean-Paul Sartre
In the Critique of Dialectical Reason, Jean-Paul Sartre posits a view of individual praxis as the basis of human history. In his view, praxis is an attempt to negate human need. In a revision of Marxism and his earlier existentialism, Sartre argues that the fundamental relation of human history is scarcity. Conditions of scarcity generate competition for resources, exploitation of one over another and division of labor, which in its turn creates struggle between classes. Each individual experiences the other as a threat to his or her own survival and praxis; it is always a possibility that one's individual freedom limits another's. Sartre recognizes both natural and man-made constraints on freedom: he calls the non-unified practical activity of humans the "practico-inert". Sartre opposes to individual praxis a "group praxis" that fuses each individual to be accountable to each other in a common purpose. Sartre sees a mass movement in a successful revolution as the best exemplar of such a fused group.
Hannah Arendt
In The Human Condition, Hannah Arendt argues that Western philosophy too often has focused on the contemplative life (vita contemplativa) and has neglected the active life (vita activa). This has led humanity to frequently miss much of the everyday relevance of philosophical ideas to real life. For Arendt, praxis is the highest and most important level of the active life. Thus, she argues that more philosophers need to engage in everyday political action or praxis, which she sees as the true realization of human freedom. According to Arendt, our capacity to analyze ideas, wrestle with them, and engage in active praxis is what makes us uniquely human.
In Maurizio Passerin d'Entrèves's estimation, "Arendt's theory of action and her revival of the ancient notion of praxis represent one of the most original contributions to twentieth century political thought. ... Moreover, by viewing action as a mode of human togetherness, Arendt is able to develop a conception of participatory democracy which stands in direct contrast to the bureaucratized and elitist forms of politics so characteristic of the modern epoch."
Education
Praxis is used by educators to describe a recurring passage through a cyclical process of experiential learning, such as the cycle described and popularised by David A. Kolb.
Paulo Freire defines praxis in Pedagogy of the Oppressed as "reflection and action directed at the structures to be transformed." Through praxis, oppressed people can acquire a critical awareness of their own condition, and, with teacher-students and students-teachers, struggle for liberation.
In the British Channel 4 television documentary New Order: Play at Home, Factory Records owner Tony Wilson describes praxis as "doing something, and then only afterwards, finding out why you did it".
Praxis may be described as a form of critical thinking and comprises the combination of reflection and action. Praxis can be viewed as a progression of cognitive and physical actions:
Taking the action
Considering the impacts of the action
Analysing the results of the action by reflecting upon it
Altering and revising conceptions and planning following reflection
Implementing these plans in further actions
This creates a cycle which can be viewed in terms of educational settings, learners and educational facilitators.
Scott and Marshall (2009) refer to praxis as "a philosophical term referring to human action on the natural and social world". Furthermore, Gramsci (1999) emphasises the power of praxis in Selections from the Prison Notebooks by stating that "The philosophy of praxis does not tend to leave the simple in their primitive philosophy of common sense but rather to lead them to a higher conception of life".
To reveal the inadequacies of religion, folklore, intellectualism and other such 'one-sided' forms of reasoning, Gramsci appeals directly in his later work to Marx's 'philosophy of praxis', describing it as a 'concrete' mode of reasoning. This principally involves the juxtaposition of a dialectical and scientific audit of reality against all existing normative, ideological, and therefore counterfeit accounts. Essentially a 'philosophy' based on 'a practice', Marx's philosophy is correspondingly described as the only 'philosophy' that is at the same time a 'history in action', or a 'life' itself (Gramsci, Hoare and Nowell-Smith, 1972, p. 332).
Spirituality
Praxis is also key in meditation and spirituality, where emphasis is placed on gaining first-hand experience of concepts and certain areas, such as union with the Divine, which can only be explored through praxis due to the inability of the finite mind (and its tool, language) to comprehend or express the infinite. Matthew Fox has discussed this understanding of praxis in an interview for YES! Magazine.
According to Strong's Concordance, the Hebrew word ta‛am is, properly, a taste. This is, figuratively, perception and, by implication, intelligence; transitively, a mandate: advice, behaviour, decree, discretion, judgment, reason, taste, understanding.
Medicine
Praxis is the ability to perform voluntary skilled movements. The partial or complete inability to do so in the absence of primary sensory or motor impairments is known as apraxia.
See also
Apraxia
Christian theological praxis
Hexis
Lex artis
Orthopraxy
Praxeology
Praxis Discussion Series
Praxis (disambiguation)
Praxis intervention
Praxis school
Practice (social theory)
Theses on Feuerbach
References
Further reading
Paulo Freire (1970), Pedagogy of the Oppressed, Continuum International Publishing Group.
External links
Entry for "praxis" at the Encyclopaedia of Informal Education
Der Begriff Praxis
Concepts in the philosophy of mind
Marxism
Sustainable food system
A sustainable food system is a type of food system that provides healthy food to people and creates sustainable environmental, economic, and social systems that surround food. Sustainable food systems start with the development of sustainable agricultural practices, development of more sustainable food distribution systems, creation of sustainable diets, and reduction of food waste throughout the system. Sustainable food systems have been argued to be central to many or all 17 Sustainable Development Goals.
Moving to sustainable food systems, including via shifting consumption to sustainable diets, is an important component of addressing the causes of climate change and adapting to it. A 2020 review conducted for the European Union found that up to 37% of global greenhouse gas emissions could be attributed to the food system, including crop and livestock production, transportation, changing land use (including deforestation), and food loss and waste. Reduction of meat production, which accounts for ~60% of food-related greenhouse gas emissions and ~75% of agriculturally used land, is one major component of this change.
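To make the scale of these figures concrete, the two shares can be combined in a rough back-of-the-envelope calculation. The only added assumption is that the ~60% meat share applies to food-system emissions, as stated above:

```python
# Rough combination of the cited shares (illustrative only).
food_system_share = 0.37    # up to 37% of global GHG emissions (cited)
meat_share_of_food = 0.60   # ~60% of food-related emissions (cited)

meat_share_of_global = food_system_share * meat_share_of_food
print(f"Implied share of global emissions from meat: {meat_share_of_global:.0%}")
# -> about 22% under these assumptions
```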
The global food system is facing major interconnected challenges, including mitigating food insecurity, effects from climate change, biodiversity loss, malnutrition, inequity, soil degradation, pest outbreaks, water and energy scarcity, economic and political crises, natural resource depletion, and preventable ill-health.
The concept of sustainable food systems is frequently at the center of sustainability-focused policy programs, such as proposed Green New Deal programs.
Definition
There are many different definitions of a sustainable food system.
From a global perspective, the Food and Agriculture Organization of the United Nations has published a definition of a sustainable food system, as have the American Public Health Association (APHA) and the European Union's Scientific Advice Mechanism.
Problems with conventional food systems
Industrial agriculture causes environmental impacts, as well as health problems associated with both obesity and hunger. This has generated a strong interest in healthy, sustainable eating as a major component of the overall movement toward sustainability and climate change mitigation.
Conventional food systems are largely based on the availability of inexpensive fossil fuels, which are necessary for mechanized agriculture, the manufacture or collection of chemical fertilizers, the processing of food products, and the packaging of foods. Industrial food processing expanded as the number of consumers grew rapidly; the demand for cheap and efficient calories climbed, and nutritional quality declined. Industrialized agriculture, due to its reliance on economies of scale to reduce production costs, often leads to the compromising of local, regional, or even global ecosystems through fertilizer runoff, nonpoint source pollution, deforestation, suboptimal mechanisms affecting consumer product choice, and greenhouse gas emissions.
Food and power
In the contemporary world, transnational corporations exercise a high level of control over the food system. In this system, both farmers and consumers are disadvantaged and have little control; power is concentrated in the center of the supply chain, where corporations control how food moves from producers to consumers.
Disempowerment of consumers
People living in different areas face substantial inequality in their access to healthy food. Areas where affordable, healthy food, particularly fresh fruits and vegetables, is difficult to access are sometimes called food deserts; the term has been applied particularly in the USA. In addition, conventional channels do not distribute food through emergency assistance or charity. Urban residents tend to receive food from healthier, safer, and more sustainably produced sources than low-income communities do, and conventional channels are generally more sustainable than charitable or welfare food sources. Even so, while the conventional food system provides easier access and lower prices, its food may not be the best for the environment or for consumer health.
Both obesity and undernutrition are associated with poverty and marginalization. This has been referred to as the "double burden of malnutrition." In low-income areas, there may be abundant access to fast-food or small convenience stores and "corner" stores, but no supermarkets that sell a variety of healthy foods.
Disempowerment of producers
Small farms tend to be more sustainable than large farming operations because of differences in their management and methods. Industrial agriculture replaces human labor with increased use of fossil fuels, fertilizers, pesticides, and machinery, and is heavily reliant on monoculture. If current trends continue, the number of operating farms is expected to halve by 2100, as smallholders' farms are consolidated into larger operations. The percentage of people who work as farmers worldwide dropped from 44% to 26% between 1991 and 2020.
Small farmers worldwide are often trapped in poverty and have little agency in the global food system. Smallholder farms produce a greater diversity of crops as well as harboring more non-crop biodiversity, but in wealthy, industrialized countries, small farms have declined severely. For example, in the USA, 4% of the total number of farms operate 26% of all agricultural land.
Complications from globalization
The need to reduce production costs in an increasingly global market can cause production of foods to be moved to areas where economic costs (labor, taxes, etc.) are lower or environmental regulations are more lax, which are usually further from consumer markets. For example, the majority of salmon sold in the United States is raised off the coast of Chile, due in large part to less stringent Chilean standards regarding fish feed and despite the fact that salmon are not indigenous to Chilean coastal waters. The globalization of food production can result in the loss of traditional food systems in less developed countries and have negative impacts on population health, ecosystems, and cultures in those countries.
The globalization of food systems has coincided with the proliferation of private standards in the agri-food sector, where big food retailers have formed multi-stakeholder initiatives (MSIs) with governance over the standard-setting organizations (SSOs) that maintain the standards. One such MSI is the Consumer Goods Forum (CGF), whose members openly use lobbying dollars to influence trade agreements for food systems, which creates barriers to competition. Concerns about corporate governance within food systems acting as a substitute for regulation were raised by the Institute for Multi-Stakeholder Initiative Integrity. The proliferation of private standards led to standard harmonization by organizations including the Global Food Safety Initiative and the ISEAL Alliance. An unintended consequence of harmonization was a perverse incentive: companies owning private standards generate revenue from the fees that other companies must pay to implement them, which has enticed ever more private standards to enter the marketplace.
Systemic structures
Moreover, the existing conventional food system lacks the inherent framework necessary to foster sustainable models of food production and consumption. Within the decision-making processes associated with this system, the burden of responsibility falls primarily on consumers and private enterprises. This expectation places the onus on individuals to voluntarily, and often without external incentives, expend effort to educate themselves about sustainable behaviours and specific product choices. This educational endeavour is reliant on the availability of public information. Consumers are then urged to alter their decision-making patterns concerning production and consumption, driven by prioritised ethical values and sometimes health benefits, even when significant drawbacks are prevalent. These drawbacks include elevated costs of organic foods, imbalanced monetary price differentials between animal-intensive diets and plant-based alternatives, and an absence of comprehensive consumer guidance aligned with contemporary valuations. In 2020, an analysis of the external climate costs of foods indicated that external greenhouse gas costs are typically highest for animal-based products (conventional and organic to about the same extent), followed by conventional dairy products, and lowest for organic plant-based foods. It finds contemporary monetary evaluations to be "inadequate" and policy-making that leads to reductions of these costs to be possible, appropriate and urgent.
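The notion of an external climate cost can be sketched numerically: the hidden cost per kilogram of a food is its emission intensity multiplied by a monetary carbon price. The intensities and the carbon price below are illustrative assumptions, not figures from the cited analysis:

```python
# Minimal sketch of external climate costs per kg of food.
# All numbers are illustrative assumptions, not the study's values.
CARBON_PRICE = 0.18  # assumed damage cost, EUR per kg CO2-eq

emission_intensity = {  # assumed kg CO2-eq per kg of product
    "beef (conventional)":  30.0,
    "dairy (conventional)":  3.0,
    "vegetables (organic)":  0.5,
}

for product, intensity in emission_intensity.items():
    external_cost = intensity * CARBON_PRICE  # EUR per kg of product
    print(f"{product}: ~{external_cost:.2f} EUR/kg of hidden climate cost")
```

Under such assumptions the ordering matches the analysis cited above: animal-based products carry by far the largest unpriced climate cost.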
Agricultural pollution
Sourcing sustainable food
At the global level the environmental impact of agribusiness is being addressed through sustainable agriculture, cellular agriculture and organic farming.
Various alternatives to meat and novel classes of foods can substantially increase sustainability. There are large potential benefits of marine algae-based aquaculture for the development of a future healthy and sustainable food system. Fungiculture, another sector of a growing bioeconomy besides algaculture, may also become a larger component of a sustainable food system. Consumption shares of various other ingredients for meat analogues, such as protein from pulses, may also rise substantially. Single-cell protein, which can be produced from captured CO2, could likewise be integrated. Optimized dietary scenarios would also see changes in various other types of foods, such as nuts and pulses like beans, which have favorable environmental and health profiles.
Complementary approaches under development include vertical farming of various types of food and a range of agricultural technologies, often drawing on digital agriculture.
Sustainable seafood
Sustainable seafood is seafood from either fished or farmed sources that can maintain or increase production in the future without jeopardizing the ecosystems from which it was acquired. The sustainable seafood movement has gained momentum as more people become aware of both overfishing and environmentally destructive fishing methods. The goal of sustainable seafood practices is to ensure that fish populations are able to continue to thrive, that marine habitats are protected, and that fishing and aquaculture practices do not have negative impacts on local communities or economies.
There are several factors that go into determining whether a seafood product is sustainable or not. These include the method of fishing or farming, the health of the fish population, the impact on the surrounding environment, and the social and economic implications of the seafood production. Some sustainable seafood practices include using methods that minimize bycatch, implementing seasonal or area closures to allow fish populations to recover, and using aquaculture methods that minimize the use of antibiotics or other chemicals. Organizations such as the Marine Stewardship Council (MSC) and the Aquaculture Stewardship Council (ASC) work to promote sustainable seafood practices and provide certification for products that meet their sustainability standards.
In addition, many retailers and restaurants now offer sustainable seafood options, often labeled with a sustainability certification logo to make it easier for consumers to make informed choices. Consumers can also play a role by making conscious choices about the seafood they purchase and consume. This can include choosing seafood that is labeled as sustainably harvested or farmed, asking questions about the source and production methods of the seafood they buy, and supporting restaurants and retailers that prioritize sustainability in their seafood offerings. Promoting sustainable seafood practices in these ways helps to ensure the health and sustainability of the oceans and the communities that depend on them.
Sustainable animal feed
A study suggests there would be large environmental benefits to using insects for animal feed. When substituted for mixed grain, currently the main animal feed, insect feed lowers water and land requirements and emits less greenhouse gas and ammonia.
Sustainable pet food
Recent studies suggest that vegan diets, which are more sustainable, need not have a negative impact on the health of pet dogs and cats if implemented appropriately. Sustainable pet food aims to minimize the ecological footprint of pet food production while still providing the necessary nutrition for pets, and recent studies have explored the potential benefits of vegan diets for pets in terms of sustainability.
One example is the growing body of research indicating that properly formulated and balanced vegan diets can meet the nutritional needs of dogs and cats without compromising their health. These studies suggest that with appropriate planning and supplementation, pets can thrive on plant-based diets. This is significant from a sustainability perspective as traditional pet food production heavily relies on animal-based ingredients, which contribute to deforestation, greenhouse gas emissions, and overfishing.
By opting for sustainable pet food options, such as plant-based or eco-friendly alternatives, pet owners can reduce their pets' carbon footprint and support more ethical and sustainable practices in the pet food industry. Additionally, sustainable pet food may also prioritize the use of responsibly sourced ingredients, organic farming practices, and minimal packaging waste. It is important to note that when considering a vegan or alternative diet for pets, consultation with a veterinarian is crucial. Each pet has unique nutritional requirements, and a professional can help determine the most suitable diet plan to ensure all necessary nutrients are provided.
Substitution of meat and sustainable meat and dairy
Meat reduction strategies
Effects and combination of measures
"Policy sequencing" to gradually extend regulations once established to other forest risk commodities (e.g. other than beef) and regions while coordinating with other importing countries could prevent ineffectiveness.
Meat and dairy
Although meat from livestock such as beef and lamb is considered unsustainable, some regenerative agriculture proponents suggest rearing livestock within a mixed farming system to restore organic matter in grasslands. Organizations such as the Canadian Roundtable for Sustainable Beef (CRSB) are looking for solutions to reduce the impact of meat production on the environment. In October 2021, 17% of beef sold in Canada was certified as sustainable beef by the CRSB. However, "sustainable meat" has drawn criticism, as environmentalists point out that the meat industry excludes most of its emissions from such accounting.
Important mitigation options for reducing the greenhouse gas emissions from livestock include genetic selection, introduction of methanotrophic bacteria into the rumen, vaccines, feeds, toilet-training, diet modification and grazing management. Other options include shifting to ruminant-free alternatives, such as milk substitutes and meat analogues or poultry, which generates far fewer emissions.
Plant-based meat has been proposed as a sustainable alternative to meat consumption. Plant-based meat emits 30%–90% less greenhouse gas than conventional meat (in kg CO2-eq per kg of meat) and uses 72%–99% less water. The public company Beyond Meat and the privately held company Impossible Foods are examples of plant-based food producers. However, the consulting firm Sustainalytics has asserted that these companies are not more sustainable than meat-processing competitors such as JBS, and that they do not disclose all of the emissions in their supply chains.
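Turning the cited percentage ranges into absolute figures requires assuming a footprint for the conventional product; the beef value below is an illustrative assumption, not a number from the cited comparisons:

```python
# Translate the cited relative reductions into absolute terms.
# Assumption: conventional beef at ~27 kg CO2-eq per kg (illustrative).
conventional_beef = 27.0                     # kg CO2-eq per kg of meat
low_reduction, high_reduction = 0.30, 0.90   # cited 30%-90% range

best_case = conventional_beef * (1 - high_reduction)
worst_case = conventional_beef * (1 - low_reduction)
print(f"Plant-based equivalent: {best_case:.1f}-{worst_case:.1f} kg CO2-eq/kg")
```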
Beyond reducing negative impacts of meat production, facilitating shifts towards more sustainable meat, and facilitating reduced meat consumption (including via plant-based meat substitutes), cultured meat may offer a potentially sustainable way to produce real meat without the associated negative environmental impacts.
Phase-outs, co-optimization and environmental standards
With regard to deforestation, a study proposed kinds of "climate clubs" of "as many other states as possible taking similar measures and establishing uniform environmental standards". It suggested that "otherwise, global problems remain unsolvable, and shifting effects will occur" and that "border adjustments [...] have to be introduced to target those states that do not participate—again, to avoid shifting effects with ecologically and economically detrimental consequences", with such "border adjustments or eco-tariffs" incentivizing other countries to adjust their standards and domestic production in order to join the climate club. Identified potential barriers to sustainability initiatives include contemporary trade-policy goals and competition law. Greenhouse gas emissions of countries are often measured according to production; for goods produced in a different country from where they are consumed, "embedded emissions" refers to the emissions of the product. Where such products are and remain imported, eco-tariffs could over time adjust prices for specific categories of products, or for specific non-collaborative polluting origin countries, such as deforestation-associated meat, foods with intransparent supply-chain origin, or foods with high embedded emissions.
Agricultural productivity and environmental efficiency
Agricultural productivity (including e.g. reliability of yields) is an important component of food security and increasing it sustainably (e.g. with high efficiency in terms of environmental impacts) could be a major way to decrease negative environmental impacts, such as by decreasing the amount of land needed for farming or reducing environmental degradation like deforestation.
Genetically engineered crops
There is research and development to engineer genetically modified crops with increased heat/drought/stress resistance, increased yields, lower water requirements, and overall lower environmental impacts, among other things.
Novel agricultural technologies
Organic food
Local food systems
In local and regional food systems, food is produced, distributed, and consumed locally. This type of system can be beneficial both to the consumer (by providing fresher and more sustainably grown product) and to the farmer (by fetching higher prices and giving more direct access to consumer feedback). Local and regional food systems can face challenges arising from inadequate institutions or programs, geographic limitations of producing certain crops, and seasonal fluctuations which can affect product demand within regions. In addition, direct marketing also faces challenges of accessibility, coordination, and awareness.
Farmers' markets, which have increased in number over the past two decades, are designed to support local farmers in selling their fresh products to consumers who are willing to buy them. Food hubs are similar locations where farmers deliver products and consumers come to pick them up. Consumers who wish to have weekly produce delivered can buy shares through a system called Community-Supported Agriculture (CSA). However, farmers' markets also face challenges with marketing needs such as starting up, advertising, payments, processing, and regulations.
There are various movements working towards local food production, more productive use of urban wastelands and domestic gardens including permaculture, guerilla gardening, urban horticulture, local food, slow food, sustainable gardening, and organic gardening.
Debates over the efficiency and sustainability of local food systems have arisen because these systems reduce transportation, a strategy for shrinking environmental footprints and combating climate change. A popular argument is that food products from local markets leave a smaller footprint on communities and the environment. Main drivers of climate change include land-use practices and greenhouse gas emissions, and global food systems produce approximately 33% of these emissions. Compared with transportation in a local food system, a conventional system uses more fuel and emits more pollution, such as carbon dioxide. This transportation also includes the miles traveled by agricultural inputs, and depends on factors such as transportation sizes, modes, and fuel types. Some airplane imports have been shown to be more efficient than local food systems in certain cases. Overall, however, local food systems can often support better environmental practices.
Environmental impact of food miles
Studies have found that food miles are a relatively minor factor in carbon emissions, although increased food localization may enable additional, more significant environmental benefits such as the recycling of energy, water, and nutrients. For specific foods, regional differences in harvest seasons may make it more environmentally friendly to import from distant regions than to produce and store locally, or to produce locally in greenhouses. This may vary depending on the environmental standards in the respective countries, the distance between them, and on a case-by-case basis for different foods.
However, a 2022 study suggests global food miles' emissions are 3.5–7.5 times higher than previously estimated, with transport accounting for about 19% of total food-system emissions, though shifting towards plant-based diets remains substantially more important. The study concludes that "a shift towards plant-based foods must be coupled with more locally produced items, mainly in affluent countries".
Food distribution
In food distribution, increasing the food supply is partly a production problem: products take time to reach market, and food goes to waste while it awaits distribution. An estimated 20–30% of food is wasted across all stages of production, and campaigns have been conducted to promote limiting food waste. Nevertheless, because of insufficient facilities and practices, and because huge amounts of food go unmarketed or unharvested due to price or quality, food is wasted at every phase of distribution. Other sources of inefficiency in food distribution include transportation combined with inadequate methods of food handling throughout the packing process, poor or prolonged storage conditions, and consumer waste. In 2019, although global production of calories kept pace with population growth, more than 820 million people still had insufficient food, and many more consumed low-quality diets that lead to micronutrient deficiencies.
Some modern tendencies in food distribution also create problems that demand solutions. These include:
Growth of large-scale producing and selling units that sell in bulk to chain stores, reflecting the merchandising power of large-scale market organizations and their mergers with manufacturers.
Large-scale distribution and buying units among manufacturers, which affect producers, distributors, and consumers alike.
Protection of the public interest, meaning better adaptation of products and services, which drives rapid development in food distribution.
Price maintenance, which creates pressure for lower prices and thus a stronger drive to cut costs throughout the whole food distribution process.
Newly invented technical processes, such as the freezing of food, discovered through experiment, which help with distribution efficiency.
New technical developments in distribution machinery, made to meet consumer demands and economic factors.
Government relations with business, and petitions against it under anti-trust laws, prompted by large-scale business organizations and the fear of monopoly, which contribute to changing public attitudes.
Food security, nutrition and diet
The environmental effects of different dietary patterns depend on many factors, including the proportion of animal and plant foods consumed and the method of food production. At the same time, current and future food systems need to be provided with sufficient nutrition for not only the current population, but future population growth in light of a world affected by changing climate in the face of global warming.
Nearly one in four households in the United States experienced food insecurity in 2020–21. Even before the pandemic hit, some 13.7 million households, or 10.5% of all U.S. households, experienced food insecurity at some point during 2019, according to data from the U.S. Department of Agriculture. That works out to more than 35 million Americans who were either unable to acquire enough food to meet their needs or uncertain of where their next meal might come from.
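As a consistency check, the cited household count and percentage imply a total number of U.S. households, and the 35 million figure implies an average size for food-insecure households. The snippet below simply takes the reported numbers at face value:

```python
# Sanity-check the cited USDA food-insecurity figures.
insecure_households = 13.7e6   # food-insecure households, 2019 (cited)
insecure_share = 0.105         # 10.5% of all U.S. households (cited)
affected_people = 35e6         # Americans affected, cited lower bound

total_households = insecure_households / insecure_share
people_per_household = affected_people / insecure_households
print(f"Implied total U.S. households: {total_households / 1e6:.0f} million")
print(f"Implied people per affected household: {people_per_household:.1f}")
```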
The "global land squeeze" for agricultural land also has impacts on food security. Likewise, effects of climate change on agriculture can result in lower crop yields and nutritional quality due to for example drought, heat waves and flooding as well as increases in water scarcity, pests and plant diseases. Soil conservation may be important for food security as well. For sustainability and food security, the food system would need to adapt to such current and future problems.
According to one estimate, "just four corporations control 90% of the global grain trade", and researchers have argued that the food system is too fragile due to various issues, such as "massive food producers" (i.e. market mechanisms) having too much power and nations "polarising into super-importers and super-exporters". However, the impact of market power on the food system is contested, with others claiming more complex, context-dependent outcomes.
Production decision-making
In the food industry, especially in agriculture, production problems have been increasing for some food products. For instance, growing vegetables and fruits has become more expensive, and some agricultural crops are difficult to grow outside the climate conditions they prefer. Food shortages have also increased as production has decreased. Although the world still produces enough food for the population, not everyone receives good-quality food, because access depends on location and income. In addition, the number of overweight people has increased, while about 2 billion people worldwide remain underfed. This shows how the global food system falls short in both quantity and quality relative to food consumption patterns.
A study estimated that "relocating current croplands to [environmentally] optimal locations, whilst allowing ecosystems in then-abandoned areas to regenerate, could simultaneously decrease the current carbon, biodiversity, and irrigation water footprint of global crop production by 71%, 87%, and 100%", with relocation only within national borders also having substantial potential.
Policies, including ones that affect consumption, may affect production decisions, such as which foods are produced, to various degrees and in various direct and indirect ways. Individual studies have proposed several such options, and the website Project Drawdown has aggregated and preliminarily evaluated some of these measures.
Climate change adaptation
Food waste
According to the Food and Agriculture Organization (FAO), food waste is responsible for 8 percent of global human-made greenhouse gas emissions. The FAO concludes that nearly 30 percent of all available agricultural land in the world – 1.4 billion hectares – is used to grow food that is produced but never eaten. The global blue-water footprint of food waste is 250 km3, the amount of water that flows annually through the Volga River, or three times the volume of Lake Geneva.
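The 250 km3 figure is easier to grasp in per-person terms. The conversion below assumes a world population of about 7.9 billion, which is an added assumption rather than a figure from the FAO source:

```python
# Convert the blue-water footprint of food waste to per-capita terms.
blue_water_km3 = 250              # cubic kilometres per year (cited)
litres = blue_water_km3 * 1e12    # 1 km^3 = 10^12 litres
world_population = 7.9e9          # assumed, not from the source

per_person_per_day = litres / world_population / 365
print(f"~{per_person_per_day:.0f} litres of wasted blue water per person per day")
```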
Several factors explain how food waste has increased globally in food systems. The main factor is population: as the population increases, more food is produced, and more of it goes to waste. During COVID-19 in particular, food waste grew sharply due to the boom in food delivery services, according to a 2022 study. In addition, not all countries have the same resources to provide high-quality food. According to a study done in 2010, private households produce the largest amounts of food waste across the globe. Another major factor is overproduction: the rate of food production is significantly higher than the rate of consumption, leading to a surplus of food waste.
Throughout the world, food is processed in different ways; with different priorities, different choices are made to meet the most important needs. Money is another major factor, determining how long processing takes and who does the work, and it is handled differently in the food systems of low-income countries.
However, the food systems of high-income countries may still face other issues, such as food insecurity, which demonstrates that all food systems have their weaknesses and strengths. Climate change also increases food waste, because warmer temperatures dry crops faster and create a higher risk of fires. Food waste can occur at any point in production. According to the World Wildlife Fund, most wasted food ends up in landfills, where it rots and produces methane. The disposal of food thus has a large impact on the environment and health.
Academic opportunities
The study of sustainable food systems applies systems theory and methods of sustainable design to food systems. As an interdisciplinary field, the study of sustainable food systems has been growing in the last several decades. University programs focused on sustainable food systems include:
University of Colorado Boulder
Harvard Extension
University of Delaware
Mesa Community College
University of California, Davis
University of Vermont
Sterling College (Vermont)
University of Michigan
Portland State University
University of Sheffield's Institute for Sustainable Food
University of Georgia's Sustainable Food Systems Initiative
The Culinary Institute of America's Master's in Sustainable Food Systems
University of Edinburgh's Global Academy of Agriculture and Food Systems
There is a debate about "establishing a body akin to the Intergovernmental Panel on Climate Change (IPCC) for food systems" which "would respond to questions from policymakers and produce advice based on a synthesis of the available evidence" while identifying "gaps in the science that need addressing".
Public policy
European Union
Global
Asia
See also
Standardization#Environmental protection
References
Cited sources
Further reading
Monbiot, George (2022). "Regenesis: Feeding the World without Devouring the Planet". London: Penguin Books.
Pimbert, Michel, Rachel Shindelar, and Hanna Schösler (eds.), "Think Global, Eat Local: Exploring Foodways," RCC Perspectives 2015, no. 1. doi.org/10.5282/rcc/6920.
Food politics
Sustainability
Forestry
Forestry is the science and craft of creating, managing, planting, using, conserving and repairing forests and woodlands for associated resources for human and environmental benefits. Forestry is practiced in plantations and natural stands. The science of forestry has elements that belong to the biological, physical, social, political and managerial sciences. Forest management plays an essential role in the creation and modification of habitats and affects ecosystem services provisioning.
Modern forestry generally embraces a broad range of concerns, in what is known as multiple-use management, including: the provision of timber, fuel wood, wildlife habitat, natural water quality management, recreation, landscape and community protection, employment, aesthetically appealing landscapes, biodiversity management, watershed management, erosion control, and preserving forests as "sinks" for atmospheric carbon dioxide.
Forest ecosystems have come to be seen as the most important component of the biosphere, and forestry has emerged as a vital applied science, craft, and technology. A practitioner of forestry is known as a forester. Another common term is silviculturist. Silviculture is narrower than forestry, being concerned only with forest plants, but is often used synonymously with forestry.
All people depend upon forests and their biodiversity, some more than others. Forestry is an important economic segment in various industrial countries, as forests provide more than 86 million green jobs and support the livelihoods of many more people. For example, in Germany, forests cover nearly a third of the land area, wood is the most important renewable resource, and forestry supports more than a million jobs and about €181 billion of value to the German economy each year.
Worldwide, an estimated 880 million people spend part of their time collecting fuelwood or producing charcoal, many of them women. Human populations tend to be low in areas of low-income countries with high forest cover and high forest biodiversity, but poverty rates in these areas tend to be high. Some 252 million people living in forests and savannahs have incomes of less than US$1.25 per day.
Science
Forestry as a science
Over the past centuries, forestry was regarded as a separate science. With the rise of ecology and environmental science, there has been a reordering in the applied sciences. In line with this view, forestry is a primary land-use science comparable with agriculture. Under these headings, the fundamentals behind the management of natural forests comes by way of natural ecology. Forests or tree plantations, those whose primary purpose is the extraction of forest products, are planned and managed to utilize a mix of ecological and agroecological principles. In many regions of the world there is considerable conflict between forest practices and other societal priorities such as water quality, watershed preservation, sustainable fishing, conservation, and species preservation.
Silvology
Silvology (Latin: silva or sylva, "forests and woods"; Ancient Greek: -λογία, -logia, "science of" or "study of") is the biological science of studying forests and woodlands, incorporating the understanding of natural forest ecosystems and the effects and development of silvicultural practices. The term complements silviculture, which deals with the art and practice of forest management.
Silvology is seen as a single science for forestry and was first used by Professor Roelof A.A. Oldeman at Wageningen University. It integrates the study of forests and forest ecology, dealing with single tree autecology and natural forest ecology.
Dendrology
Genetic diversity in forestry
The provenance of the forest reproductive material used to plant forests has a great influence on how the trees develop, which is why it is important to use forest reproductive material of good quality and high genetic diversity. More generally, all forest management practices, including natural regeneration systems, may affect the genetic diversity of trees.
The term describes the differences in DNA sequence between individuals as distinct from variation caused by environmental influences. The unique genetic composition of an individual (its genotype) will determine its performance (its phenotype) at a particular site.
Genetic diversity is needed to maintain the vitality of forests and to provide resilience to pests and diseases. Genetic diversity also ensures that forest trees can survive, adapt and evolve under changing environmental conditions. Furthermore, genetic diversity is the foundation of biological diversity at species and ecosystem levels. Forest genetic resources are therefore important to consider in forest management.
Genetic diversity in forests is threatened by forest fires, pests and diseases, habitat fragmentation, poor silvicultural practices and inappropriate use of forest reproductive material.
About 98 million hectares of forest were affected by fire in 2015; this was mainly in the tropical domain, where fire burned about 4 percent of the total forest area in that year. More than two-thirds of the total forest area affected was in Africa and South America. Insects, diseases and severe weather events damaged about 40 million hectares of forests in 2015, mainly in the temperate and boreal domains.
Furthermore, the marginal populations of many tree species are facing new threats due to the effects of climate change.
Most countries in Europe have recommendations or guidelines for selecting species and provenances that can be used in a given site or zone.
Forest management
Urban forestry
Forestry education
History of forestry education
The first dedicated forestry school was established by Georg Ludwig Hartig at Hungen in the Wetterau, Hesse, in 1787, though forestry had been taught earlier in central Europe, including at the University of Giessen, in Hesse-Darmstadt.
In Spain, the first forestry school was the Forest Engineering School of Madrid (Escuela Técnica Superior de Ingenieros de Montes), founded in 1844.
The first in North America, the Biltmore Forest School was established near Asheville, North Carolina, by Carl A. Schenck on September 1, 1898, on the grounds of George W. Vanderbilt's Biltmore Estate. Another early school was the New York State College of Forestry, established at Cornell University just a few weeks later, in September 1898.
Early 19th century North American foresters went to Germany to study forestry. Some early German foresters also emigrated to North America.
In South America the first forestry school was established in Brazil, in Viçosa, Minas Gerais, in 1962, and moved the next year to become a faculty at the Federal University of Paraná, in Curitiba.
Forestry education today
Today, forestry education typically includes training in general biology, ecology, botany, genetics, soil science, climatology, hydrology, economics and forest management. Education in the basics of sociology and political science is often considered an advantage. Professional skills in conflict resolution and communication are also important in training programs.
In India, forestry education is imparted in the agricultural universities and in Forest Research Institutes (deemed universities). Four year degree programmes are conducted in these universities at the undergraduate level. Masters and Doctorate degrees are also available in these universities.
In the United States, postsecondary forestry education leading to a Bachelor's degree or Master's degree is accredited by the Society of American Foresters.
In Canada the Canadian Institute of Forestry awards silver rings to graduates from accredited university BSc programs, as well as college and technical programs.
In many European countries, training in forestry is made in accordance with requirements of the Bologna Process and the European Higher Education Area.
The International Union of Forest Research Organizations is the only international organization that coordinates forest science efforts worldwide.
Continuing education
In order to keep up with changing demands and environmental factors, forestry education does not stop at graduation. Increasingly, forestry professionals engage in regular training to maintain and improve their management practices. An increasingly popular tool is the marteloscope: a one-hectare, rectangular forest site in which all trees are numbered, mapped and recorded.
These sites can be used to do virtual thinnings and to test one's estimates of wood quality and volume, as well as tree microhabitats. This system is mainly suited to regions with small-scale, multi-functional forest management systems.
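A marteloscope inventory is essentially a per-tree data table, and the volume estimates practiced on such sites are often approximated as form factor × basal area × height. The sketch below assumes that standard approximation; the tree records and the 0.5 form factor are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Tree:
    number: int       # tree ID painted on the stem
    species: str
    dbh_cm: float     # diameter at breast height, in centimetres
    height_m: float

def stem_volume(tree: Tree, form_factor: float = 0.5) -> float:
    """Approximate stem volume (m^3) as form factor x basal area x height."""
    basal_area = math.pi * (tree.dbh_cm / 200) ** 2  # m^2; dbh/200 = radius in m
    return form_factor * basal_area * tree.height_m

# Illustrative records from a one-hectare plot (values are made up).
plot = [Tree(1, "beech", 42, 28), Tree(2, "oak", 55, 28), Tree(3, "beech", 31, 24)]
standing_volume = sum(stem_volume(t) for t in plot)
print(f"Standing volume of recorded trees: {standing_volume:.2f} m^3")
```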
History
Society and culture
Literature
Forestry literature is the books, journals and other publications about forestry.
The first major works about forestry in the English language included Roger Taverner's Booke of Survey (1565), John Manwood's A Brefe Collection of the Lawes of the Forrest (1592) and John Evelyn's Sylva (1662).
Noted silvologists
Gabriel Hemery
Carl Ditters von Dittersdorf
See also
Agroforestry
Close to nature forestry
Community forestry
Deforestation
International Year of Forests
List of forest research institutes
List of forestry journals
List of forestry technical schools
List of forestry universities and colleges
List of historic journals of forestry
List of national forests of the United States
Non-timber forest product
Nonindustrial private forests
Silviculture
References
Sources
External links
www.silvology.com
Dendrology
Sustainable engineering
Sustainable engineering is the process of designing or operating systems such that they use energy and resources sustainably, in other words, at a rate that does not compromise the natural environment or the ability of future generations to meet their own needs.
Common engineering focuses
Sustainable engineering focuses on the following:
Water supply
Food production
Housing and shelter
Sanitation and waste management
Energy development
Transportation
Industrial processing
Development of natural resources
Cleaning up polluted waste sites
Planning projects to reduce environmental and social impacts
Restoring natural environments such as forests, lakes, streams, and wetlands
Providing medical care to those in need
Minimizing and responsibly disposing of waste to benefit all
Improving industrial processes to eliminate waste and reduce consumption
Recommending the appropriate and innovative use of technology
Aspects of engineering disciplines
Every engineering discipline is engaged in sustainable design, employing numerous initiatives, especially life cycle analysis (LCA), pollution prevention, Design for the Environment (DfE), Design for Disassembly (DfD), and Design for Recycling (DfR). These are replacing, or at least changing, pollution control paradigms. For example, the concept of "cap and trade" has been tested and works well for some pollutants: companies are allowed to place a "bubble" over a whole manufacturing complex, or to trade pollution credits with other companies in their industry, instead of the "stack-by-stack" and "pipe-by-pipe" approach known as "command and control". Such policy and regulatory innovations call for improved technology-based approaches as well as better quality-based approaches, such as leveling out pollutant loadings and using less expensive technologies to remove the first large bulk of pollutants, followed by higher operation and maintenance (O&M) technologies for the stacks and pipes that are more difficult to treat. The net effect can be a greater reduction of pollutant emissions and effluents than treating each stack or pipe as an independent entity; a toy numerical sketch of this "bubble" logic appears after the list below.

This is a foundation for most sustainable design approaches: conducting a life-cycle analysis, prioritizing the most important problems, and matching the technologies and operations to address them. Problems vary by size (e.g. pollutant loading), difficulty of treatment, and feasibility. The most intractable problems are often those that are small but very expensive and difficult to treat, i.e. less feasible. As with all paradigm shifts, expectations must be managed from both a technical and an operational perspective.

Historically, engineers have approached sustainability considerations as constraints on their designs. For example, hazardous substances generated by a manufacturing process were dealt with as a waste stream that had to be contained and treated. Hazardous waste production was constrained by selecting certain manufacturing types, increasing waste-handling facilities and, if these did not entirely do the job, limiting rates of production. Green engineering recognizes that these processes are often inefficient economically and environmentally, calling for a comprehensive, systematic life cycle approach. Green engineering attempts to achieve four goals:
Waste reduction
Materials management
Pollution prevention
Product enhancement
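To illustrate the "bubble" logic flagged above: under a facility-wide cap, reductions can be concentrated where abatement is cheapest, instead of forcing an identical cut at every stack. The sketch below is a minimal model with invented stack data and costs; it is not drawn from any regulatory example.

```python
# Toy comparison: a facility-wide "bubble" cap vs uniform per-stack cuts.
# Emissions and abatement costs are invented for illustration only.
stacks = [  # (name, emissions in tonnes/yr, abatement cost in $/tonne)
    ("stack A", 100, 20),
    ("stack B", 100, 80),
    ("stack C", 100, 300),
]
required_cut = 120  # tonnes to remove facility-wide (40% of the total)

# Bubble approach: abate at the cheapest stacks first.
remaining, bubble_cost = required_cut, 0.0
for name, emissions, cost in sorted(stacks, key=lambda s: s[2]):
    cut = min(emissions, remaining)
    bubble_cost += cut * cost
    remaining -= cut

# Command-and-control approach: the same 40% cut at every stack.
uniform_cost = sum(emissions * 0.40 * cost for _, emissions, cost in stacks)

print(f"Bubble cap cost:        ${bubble_cost:,.0f}")   # -> $3,600
print(f"Uniform per-stack cost: ${uniform_cost:,.0f}")  # -> $16,000
```

Both approaches remove the same 120 tonnes; the bubble simply lets cheap reductions at stack A substitute for expensive ones at stack C.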
Green engineering encompasses numerous ways to improve processes and products to make them more efficient from an environmental and sustainable standpoint. Every one of these approaches depends on viewing possible impacts in space and time. Architects consider the sense of place. Engineers view the site map as a set of fluxes across the boundary. The design must consider short and long-term impacts. Those impacts beyond the near-term are the province of sustainable design.
The effects may not manifest themselves for decades. In the mid-twentieth century, designers specified the use of what are now known to be hazardous building materials, such as asbestos flooring, pipe wrap and shingles, lead paint and pipes, and even structural and mechanical systems that may have increased the exposure to molds and radon. Those decisions have led to health risks to the inhabitants. It is easy in retrospect to criticize these decisions, but many were made for noble reasons, such as fire prevention and durability of materials. However, it does illustrate that seemingly small impacts when viewed through the prism of time can be amplified exponentially in their effects.
Sustainable design requires a complete assessment of a design in place and time. Some impacts may not occur until centuries in the future. For example, the extent to which we decide to use nuclear power to generate electricity is a sustainable design decision. The radioactive wastes may have half-lives of hundreds of thousands of years: a half-life is the time it takes for half of a given quantity of a radioactive isotope to decay. Radioactive decay is the spontaneous transformation of one element into another, which occurs by irreversibly changing the number of protons in the nucleus. Thus, sustainable designs of such enterprises must consider highly uncertain futures. For example, even if we properly place warning signs about these hazardous wastes, we do not know whether the English language will still be understood.
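The half-life arithmetic in this passage follows the standard exponential decay law N(t)/N0 = (1/2)^(t/t_half). A minimal sketch, using the roughly 24,100-year half-life of plutonium-239 as the example isotope:

```python
# Fraction of a radioactive inventory remaining after t years.
half_life = 24_100  # years; approximate half-life of plutonium-239

for years in (1_000, 24_100, 100_000, 241_000):
    remaining = 0.5 ** (years / half_life)
    print(f"after {years:>7,} years: {remaining:.1%} of the isotope remains")
```

Even after ten half-lives (about a quarter of a million years here), roughly a tenth of a percent of the original inventory is still present, which is why such design decisions reach so far into the future.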
All four goals of green engineering mentioned above are supported by a long-term, life cycle point of view. A life cycle analysis is a holistic approach that considers the entirety of a product, process or activity, encompassing raw materials, manufacturing, transportation, distribution, use, maintenance, recycling, and final disposal. In other words, assessing its life cycle should yield a complete picture of the product.
The first step in a life-cycle assessment is to gather data on the flow of a material through an identifiable society. Once the quantities of the various components of such a flow are known, the important functions and impacts of each step in production, manufacture, use, and recovery/disposal are estimated. Thus, in sustainable design, engineers must optimize across the variables that determine performance over the entire temporal frame of the design's life.
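The data-gathering and estimation steps amount to summing flows per life-cycle stage and ranking the stages by impact, so the largest problems can be prioritized. A minimal sketch with hypothetical inventory numbers:

# Minimal life-cycle inventory sketch: each stage of a product's life
# contributes material and energy flows; summing them gives the totals
# used to prioritize the biggest problems. All numbers are hypothetical.
stages = {
    "raw materials":  {"energy_MJ": 120.0, "co2_kg": 9.5},
    "manufacturing":  {"energy_MJ": 310.0, "co2_kg": 22.0},
    "transportation": {"energy_MJ":  45.0, "co2_kg":  3.1},
    "use":            {"energy_MJ": 800.0, "co2_kg": 48.0},
    "disposal":       {"energy_MJ":  20.0, "co2_kg":  1.4},
}

totals = {k: sum(s[k] for s in stages.values()) for k in ("energy_MJ", "co2_kg")}
worst = max(stages, key=lambda name: stages[name]["co2_kg"])
print(totals)                              # life-cycle totals across all stages
print("largest CO2 contributor:", worst)   # where to focus design effort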
Accomplishments from 1992 to 2002
The World Engineering Partnership for Sustainable Development (WEPSD) was formed, with responsibility for the following areas: redesigning engineering responsibilities and ethics toward sustainable development; analyzing and developing long-term plans; finding solutions by exchanging information with partners and using new technologies; and solving critical global environmental problems, such as fresh water supply and climate change.
CASI Global was formed mainly as a platform for corporations and governments to share best practices, with a mission to promote the cause and knowledge of CSR and sustainability. Thousands of corporations and colleges across the world are now part of CASI Global in support of this mission. CASI also offers Global Fellow programs in finance, operations, manufacturing, supply chain, and other fields, with a dual specialization in sustainability, the idea being that every professional should inculcate sustainability within their core function and industry.
Developed environmental policies, codes of ethics, and sustainable development guidelines
Earth Charter was restarted as a civil society initiative
The World Bank, the United Nations Environment Programme, and the Global Environment Facility joined programs for sustainable development
Launched programs for engineering students and practicing engineers on how to apply sustainable development concepts in their work
Developed new approaches in industrial processes
Sustainable housing
In 2013, the average annual electricity consumption for a U.S. residential utility customer was 10,908 kilowatt hours (kWh), an average of 909 kWh per month. Louisiana had the highest annual consumption at 15,270 kWh, and Hawaii had the lowest at 6,176 kWh. The residential sector itself uses 18% of the total energy generated; therefore, incorporating sustainable construction practices can significantly reduce this number. Basic sustainable construction practices include:
Sustainable Site and Location: One important element of building that is often overlooked is finding an appropriate location to build. Avoiding inappropriate sites such as farmland, and locating the site near existing infrastructure, like roads, sewers, stormwater systems and transit, allows builders to lessen a home's negative impact on its surroundings.
Water Conservation: Conserving water can be done economically by installing low-flow fixtures that often cost the same as less efficient models. Water can also be saved in landscaping applications by choosing the proper plants.
Materials: Green materials include many different options. People commonly assume that "green" means recycled materials. Although recycled materials represent one option, green materials also include reused materials, renewable materials like bamboo and cork, and materials local to one's region. A green material does not have to cost more or be of lesser or higher quality; most green products are comparable to their non-green counterparts.
Energy Conservation: Probably the most important part of building green is energy conservation. By implementing passive design, structural insulated panels (SIPs), efficient lighting, and renewable energy like solar energy and geothermal energy, a home can benefit from reduced energy consumption or qualify as a net zero energy home.
Indoor Environmental Quality: The quality of the indoor environment plays a pivotal role in a person's health. In many cases, a much healthier environment can be created by avoiding hazardous materials found in paint, carpet, and other finishes. It is also important to have proper ventilation and ample daylighting.
Savings
Water Conservation: A newly constructed home can implement products with the WaterSense label at no additional cost and achieve water savings of 20%, counting both the water itself and the associated water-heater energy savings.
Energy Conservation: Energy conservation can carry significant cost premiums for implementation, but it also has large potential for savings. Minimum savings can be achieved at no additional cost by pursuing passive design strategies. The next step up from passive design in the level of green (and ultimately the level of savings) is implementing advanced building envelope materials, like structural insulated panels (SIPs). SIPs can be installed for approximately $2 per linear foot of exterior wall. That equals a total premium of less than $500 for a typical one-story home, which will bring an energy savings of 50%. According to the DOE, the average annual energy expense for a single family home is $2,200, so SIPs can save up to $1,100 per year. To reach the savings associated with a net-zero energy home, renewable energy would have to be implemented on top of the other features. A geothermal energy system could achieve this goal with a cost premium of approximately $7 per square foot, while a photovoltaic system (solar) would require up to a $25,000 total premium.
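These figures imply very different simple payback periods. A small sketch using the numbers quoted above; the 2,000-square-foot floor area, and the assumption that a geothermal system offsets the entire energy bill, are hypothetical:

# Simple-payback arithmetic for the figures quoted above. The house
# size is a hypothetical assumption; other numbers come from the text.
annual_energy_bill = 2200.0               # DOE average, $/year
sip_premium = 500.0                       # SIP upgrade, typical one-story home
sip_savings = 0.50 * annual_energy_bill   # 50% energy savings -> $1,100/yr
print("SIP payback: %.1f years" % (sip_premium / sip_savings))   # ~0.5 years

floor_area_sqft = 2000                    # hypothetical house size
geo_premium = 7.0 * floor_area_sqft       # $7/sq ft geothermal premium
# assuming geothermal offsets the full energy bill (the net-zero goal):
print("geothermal payback: %.1f years" % (geo_premium / annual_energy_bill))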
See also
Civil engineering
Ecotechnology
Environmental engineering
Environmental engineering science
Environmental technology
Green building
Green engineering
Sustainability
Sustainable design
References
External links
Vanegas, Jorge (2004). "Sustainable Engineering Practice – An Introduction". ASCE Publishing.
"XI World Forestry Congress" (1997), Antalya, Turkey (Volume 3, topic 2)
Sustainable Engineering & Design, Civil Engineering Company
CASI Global, The Global Certification Body for CSR & Sustainability
Sustainability of Products, Processes and Supply Chains: Theory and Applications (2015). Elsevier.
Research, Creating sustainable systems that can exist in harmony with the natural world, Purdue Environmental and Ecological Engineering
Sustainability Issues: Notes, The Centre for Sustainable Development, The University of Cambridge
The Role of Engineers in Sustainable Development
Engineering disciplines
Environmental engineering
Environmental stewardship

Environmental stewardship (or planetary stewardship) refers to the responsible use and protection of the natural environment through active participation in conservation efforts and sustainable practices by individuals, small groups, nonprofit organizations, federal agencies, and other collective networks. Aldo Leopold (1887–1949) championed environmental stewardship in land ethics, exploring the ethical implications of "dealing with man's relation to land and to the animals and plants which grow upon it."
Resilience-based ecosystem stewardship
Resilience-based ecosystem stewardship emphasizes resilience as an integral feature of responding to and interacting with the environment in a constantly changing world. Resilience refers to the ability of a system to recover from disturbance and return to its basic function and structure. Under this view, ecosystems do not serve as singular resources but rather provide an array of interdependent ecosystem services. Additionally, this type of stewardship recognizes resource managers and management systems as influential and informed participants in the natural systems on which humans depend.
Social science implications
Studies have explored the benefits of environmental stewardship in various contexts, such as its evaluation, modeling, and integration into policy, systems management, and urban planning. One study examined how the social attributes of environmental stewardship can be used to reconfigure local conservation efforts. Social ties to environmental stewardship are emphasized by the National Recreation and Park Association's efforts to place environmental stewardship at the forefront of childhood development and youths' consciousness of the outdoors. Practicing environmental stewardship has also been suggested as an effective mental health treatment and natural therapy.
Roles of environmental stewards
Based on pro-organizational stewardship theory principles, environmental stewards can be categorized into three roles: doers, donors, and practitioners.
Doers actively engage in environmental aid, such as volunteering for hands-on work like cleaning up oil spills. Donors support causes financially or through gifts in kind, including fundraising or personal donations. Practitioners work daily in environmental stewardship, acting as advocates in collaboration with various environmental agencies and groups. All three roles contribute to promoting environmental literacy and encouraging participation in conservation efforts.
From a biocultural conservation perspective, Ricardo Rozzi and collaborators propose participatory intercultural approaches to earth stewardship. This perspective emphasizes the role of long-term socio-ecological research (LTSER) sites in coordinating local initiatives with global networking and implementing culturally diverse earth stewardship forms.
Examples
Many programs, partnerships, and funding initiatives have tried to implement environmental stewardship into the workings of society. The Pesticide Environmental Stewardship Program (PESP), a partnership program overseen by the US Environmental Protection Agency, provides pesticide-user consultation to reduce the use of hazardous chemicals and to identify the detrimental impacts these chemicals can have on social and environmental health.
In 2006, England placed environmental stewardship at the center of an agricultural incentives mechanism, encouraging cattle farmers to better manage their land, crops, animals, and material use. The Environmental Stewardship Award was created as part of this initiative to highlight members whose actions exemplify alignment with environmental stewardship.
See also
References
Environmental conservation
Stewardship
Sustainability and environmental management
Environmental protection
Natural resources
Bioeconomy

Biobased economy, bioeconomy or biotechonomy is economic activity involving the use of biotechnology and biomass in the production of goods, services, or energy. The terms are widely used by regional development agencies, national and international organizations, and biotechnology companies. They are closely linked to the evolution of the biotechnology industry and the capacity to study, understand, and manipulate genetic material that has been possible due to scientific research and technological development. This includes the application of scientific and technological developments to agriculture, health, chemical, and energy industries. The terms bioeconomy (BE) and bio-based economy (BBE) are sometimes used interchangeably. However, it is worth distinguishing them: the bio-based economy covers the production of non-food goods, whilst the bioeconomy covers both the bio-based economy and the production and use of food and feed. More than 60 countries and regions have bioeconomy or bioscience-related strategies, of which 20 have published dedicated bioeconomy strategies in Africa, Asia, Europe, Oceania, and the Americas.
Definitions
The bioeconomy has a large variety of definitions. It comprises those parts of the economy that use renewable biological resources from land and sea – such as crops, forests, fish, animals and micro-organisms – to produce food, health products, materials, textiles and energy. Definitions and usage do, however, vary between different areas of the world.
An important aspect of the bioeconomy is understanding mechanisms and processes at the genetic, molecular, and genomic levels, and applying this understanding to creating or improving industrial processes, developing new products and services, and producing new energy. Bioeconomy aims to reduce our dependence on fossil natural resources, to prevent biodiversity loss and to create new economic growth and jobs that are in line with the principles of sustainable development.
Earlier definitions
The term 'biotechonomy' was used by Juan Enríquez and Rodrigo Martinez at the Genomics Seminar in the 1997 AAAS meeting. An excerpt of this paper was published in Science.
In 2010 it was defined in the report "The Knowledge Based Bio-Economy (KBBE) in Europe: Achievements and Challenges" by Albrecht et al. as follows: "The bio-economy is the sustainable production and conversion of biomass, for a range of food, health, fibre and industrial products and energy, where renewable biomass encompasses any biological material to be used as raw material."
According to a 2013 study, "the bioeconomy can be defined as an economy where the basic building blocks for materials, chemicals and energy are derived from renewable biological resources".
The First Global Bioeconomy Summit in Berlin in November 2015 defines bioeconomy as "knowledge-based production and utilization of biological resources, biological processes and principles to sustainably provide goods and services across all economic sectors". According to the summit, bioeconomy involves three elements: renewable biomass, enabling and converging technologies, and integration across applications concerning primary production (i.e. all living natural resources), health (i.e. pharmaceuticals and medical devices), and industry (i.e. chemicals, plastics, enzymes, pulp and paper, bioenergy).
History
Enríquez and Martinez' 2002 Harvard Business School working paper, "Biotechonomy 1.0: A Rough Map of Biodata Flow", showed the global flow of genetic material into and out of the three largest public genetic databases: GenBank, EMBL and DDBJ. The authors then hypothesized about the economic impact that such data flows might have on patent creation, evolution of biotech startups and licensing fees. An adaptation of this paper was published in Wired magazine in 2003.
The term 'bioeconomy' became popular from the mid-2000s with its adoption by the European Union and Organisation for Economic Co-operation and Development as a policy agenda and framework to promote the use of biotechnology to develop new products, markets, and uses of biomass. Since then, both the EU (2012) and OECD (2006) have created dedicated bioeconomy strategies, as have an increasing number of countries around the world. Often these strategies conflate the bioeconomy with the term 'bio-based economy'. For example, since 2005 the Netherlands has sought to promote the creation of a biobased economy. Pilot plants have been started, e.g. in Lelystad (Zeafuels), and a centralised organisation exists (Interdepartementaal programma biobased economy), with supporting research (Food & Biobased Research) being conducted. Other European countries have also developed and implemented bioeconomy or bio-based economy policy strategies and frameworks.
In 2012, President Barack Obama of the United States announced intentions to encourage biological manufacturing methods, with a National Bioeconomy Blueprint.
Aims
Global population growth and the overconsumption of many resources are causing increasing environmental pressure and climate change. The bioeconomy tackles these challenges. It aims to ensure food security and to promote more sustainable natural resource use, as well as to reduce dependence on non-renewable resources, e.g. fossil natural resources and minerals. To some extent, the bioeconomy also helps the economy to reduce greenhouse gas emissions and assists in mitigating and adapting to climate change.
Genetic modification
Organisms ranging from bacteria through yeasts to plants are used as hosts for enzymatic catalysis. Genetically modified bacteria have been used to produce insulin, and artemisinic acid has been made in engineered yeast. Some bioplastics (based on polyhydroxybutyrate or polyhydroxyalkanoates) are produced from sugar using genetically modified microbes.
Genetically modified organisms are also used for the production of biofuels, which are often described as carbon-neutral fuels.
Research is also being done towards CO2 fixation using a synthetic metabolic pathway. By genetically modifying E. coli bacteria so as to allow them to consume CO2, the bacteria may provide the infrastructure for the future renewable production of food and green fuels.
One of the organisms (Ideonella sakaiensis) that is able to break down PET (a plastic) into other substances has been genetically modified to break down PET even faster and also to break down PEF. Once plastics (which are normally non-biodegradable) are broken down and recycled into other substances (e.g. biomatter in the case of Tenebrio molitor larvae), they can be used as an input for other animals.
Genetically modified crops are also used. Genetically modified energy crops, for instance, may provide some additional advantages such as reduced associated costs (i.e. costs during the manufacturing process) and less water use. One example is trees that have been genetically modified either to have less lignin or to express lignin with chemically labile bonds.
With genetically modified crops, however, some challenges remain (hurdles to regulatory approvals, market adoption and public acceptance).
Fields
According to the European Union Bioeconomy Strategy, updated in 2018, the bioeconomy covers all sectors and systems that rely on biological resources (animals, plants, micro-organisms and derived biomass, including organic waste), their functions and principles. It covers all primary production and the economic and industrial sectors that are based on the use, production or processing of biological resources from agriculture, forestry, fisheries and aquaculture. The products of the bioeconomy are typically food, feed and other bio-based products, bioenergy, and services based on biological resources. The bioeconomy aims to drive towards sustainability and circularity as well as the protection of the environment, and to enhance biodiversity.
In some definitions, the bioeconomy also comprises ecosystem services, i.e. services offered by the environment, including the binding of carbon dioxide and opportunities for recreation. Another key aspect of the bioeconomy is not wasting natural resources but using and recycling them efficiently.
According to EU Bioeconomy Report 2016, the bioeconomy brings together various sectors of the economy that produce, process and reuse renewable biological resources (agriculture, forestry, fisheries, food, bio-based chemicals and materials and bioenergy).
Agriculture
Not all synthetic nutrition products are animal food products such as meat and dairy – for instance, as of 2021 there are also products of synthetic coffee that are reported to be close to commercialization. Similar fields of research and production based on bioeconomy agriculture are:
Microbial food cultures and genetically engineered microbial production (e.g. of spider silk or solar-energy-based protein powder)
Controlled self-assembly of plant proteins (e.g. of spider-silk-like plant-protein-based plastics alternatives)
Cell-free artificial synthesis (e.g. of starch)
Bioproduced imitation foods (e.g. meat analogues and milk substitutes)
Many of the foods produced with tools and methods of the bioeconomy may not be intended for human consumption but for non-human animals such as for livestock feed, insect-based pet food or sustainable aquacultural feed. There are various startups and research teams around the world who use synthetic biology to create animal feed.
Moreover, crops could be genetically engineered in ways that e.g. safely increase yields, reduce the need for pesticides or ease indoor production.
One example of a product highly specific to the bioeconomy that is widely available is algae oil, a dietary supplement that could substitute for fish oil supplements, which have a larger market share and are possibly less sustainable.
Vertical farming
Fungiculture
For example, there is ongoing research and development for indoor high-yield mechanisms.
Mycoprotein
Algaculture
Waste management, recycling and biomining
Biobased applications, research and development of waste management may form a part of the bioeconomy. Bio-based recycling (e-waste, plastics recycling, etc.) is linked to waste management and relevant standards and requirements of production and products. Some of the recycling of waste may be biomining and some biomining could be applied beyond recycling.
For example, in 2020, biotechnologists reported the genetically engineered refinement and mechanical description of synergistic enzymes – PETase, first discovered in 2016, and MHETase of Ideonella sakaiensis – for faster depolymerization of PET and also of PEF, which may be useful for depollution, recycling and upcycling of mixed plastics along with other approaches. Such approaches may be more environmentally-friendly as well as cost-effective than mechanical and chemical PET-recycling, enabling circular plastic bio-economy solutions via systems based on engineered strains. Moreover, microorganisms could be employed to mine useful elements from basalt rocks via bioleaching.
Medicine, nutritional science and the health economy
In 2020, the global industry for dietary supplements was valued at $140.3 billion by a "Grand View Research" analysis. Certain parts of the health economy may overlap with the bioeconomy, including anti-aging- and life extension-related products and activities, hygiene/beauty products, functional food, sports performance related products and bio-based tests (such as of one's microbiota) and banks (such as stool banks including oral "super stool" capsules) and databases (mainly DNA databases), all of which can in turn be used for individualized interventions, monitoring as well as for the development of new products. The pharmaceutical sector, including the research and development of new antibiotics, can also be considered to be a bioeconomy sector.
Forest bioeconomy
The forest bioeconomy is based on forests and their natural resources, and covers a variety of different industry and production processes. Forest bioeconomy includes, for example, the processing of forest biomass to provide products relating to energy, chemistry, or the food industry. Thus, the forest bioeconomy covers a variety of different manufacturing processes that are based on wood material, and the range of end products is wide.
Besides different wood-based products, recreation, nature tourism and game are a crucial part of forest bioeconomy. Carbon sequestration and ecosystem services are also included in the concept of forest bioeconomy.
Pulp, paper, packaging materials and sawn timber are the traditional products of the forest industry. Wood is also traditionally used in furniture and construction industries. But in addition to these, as a renewable natural resource, ingredients from wood can be valorised into innovative bioproducts alongside a range of conventional forest industry products. Thus, traditional mill sites of large forest industry companies, for example in Finland, are in the process of becoming biorefineries. In different processes, forest biomass is used to produce textiles, chemicals, cosmetics, fuels, medicine, intelligent packaging, coatings, glues, plastics, food and feed.
Blue bioeconomy
The blue bioeconomy covers businesses that are based on the sustainable use of renewable aquatic resources, as well as water-related areas of expertise. It covers the development and marketing of blue bioeconomy products and services. In that respect, the key sectors include business activities based on water expertise and technology, water-based tourism, making use of aquatic biomass, and the value chain of fisheries. Furthermore, the immaterial value of aquatic natural resources is also very high: water areas have other values beyond being platforms of economic activity, providing human well-being, recreation and health.
According to the European Union the blue bioeconomy has the focus on aquatic or marine environments, especially, on novel aquaculture applications, including non-food, food and feed.
In the European report on the Blue Growth Strategy – Towards more sustainable growth and jobs in the blue economy (2017), the blue bioeconomy is defined differently from the blue economy. The blue economy means the industries that are related to marine environment activities, e.g. shipbuilding, transport, coastal tourism, renewable energies (such as offshore wind farms), and living and non-living resources.
Energy
The bioeconomy also includes bioenergy, biohydrogen, biofuel and algae fuel.
According to the World Bioenergy Association, 17.8% of gross final energy consumption was covered by renewable energy. Among renewable energy sources, bioenergy (energy from bio-based sources) is the largest. In 2017, bioenergy accounted for 70% of renewable energy consumption.
The role of bioenergy varies between countries and continents. In Africa it is the most important energy source, with a share of 96%. Bioenergy also has significant shares in energy production in the Americas (59%), Asia (65%) and Europe (59%). Bioenergy is produced from a large variety of biomass from forestry, agriculture, and the waste and side streams of industries, yielding useful end products (pellets, wood chips, bioethanol, biogas and biodiesel) for electricity, heat and transportation fuel around the world.
Biomass is a renewable natural resource but it is still a limited resource. Globally there are huge resources, but environmental, social and economic aspects limit their use. Biomass can play an important role for low-carbon solutions in the fields of customer supplies, energy, food and feed. In practice, there are many competing uses.
The biobased economy uses first-generation biomass (crops), second-generation biomass (crop refuse), and third-generation biomass (seaweed, algae). Several methods of processing are then used (in biorefineries) to get the most out of the biomass. These include techniques such as
Anaerobic digestion
Pyrolysis
Torrefaction
Fermentation
Anaerobic digestion is generally used to produce biogas, fermentation of sugars produces ethanol, pyrolysis is used to produce pyrolysis oil (a liquid bio-oil), and torrefaction is used to create biomass coal. Biomass coal and biogas are then burnt for energy production, while ethanol can be used as a (vehicle) fuel as well as for other purposes, such as skincare products.
Biobased energy can be used to manage intermittency of variable renewable energy like solar and wind.
Woodchips and pellets
Getting the most out of the biomass
For economic reasons, the processing of the biomass is done according to a specific pattern (a process called cascading). This pattern depends on the types of biomass used, and the task of finding the most suitable pattern is known as biorefining. A general list runs from the products with the highest added value and lowest volume of biomass to the products with the lowest added value and highest volume of biomass:
fine chemicals/medicines
food
chemicals/bioplastics
transport fuels
electricity and heat
Recent studies have highlighted the potential of traditionally used plants, in providing value-added products in remote areas of the world. A study conducted on tobacco plants proposed a non-exhaustive list of compounds with potential economic interest that can be sourced from these plants.
Other fields and applications
Bioproducts or bio-based products are products that are made from biomass. The term “bioproduct” refers to a wide array of industrial and commercial products that are characterized by a variety of properties, compositions and processes, as well as different benefits and risks.
Bio-based products are developed in order to reduce dependency on fossil fuels and non-renewable resources. To achieve this, the key is to develop new bio-refining technologies to sustainably transform renewable natural resources into bio-based products, materials and fuels, e.g.
Transplantable organs and induced regeneration
Microtechnology (medicine and energy)
Climate change adaptation and mitigation
Activities and technologies for bio-based climate change adaptation could be considered as part of the bioeconomy. Examples may include:
reforestation (alongside forest protection)
algaculture carbon sequestration
artificial assistance to make coral reefs more resilient against climate change
restoration of seagrass, mangroves and salt marshes
Materials
There is potential for biobased production of building materials (insulation, surface materials, etc.) as well as new materials in general (polymers, plastics, composites, etc.). Photosynthetic microbial cells have been used as a step toward the synthetic production of spider silk.
Bioplastics
Bioplastics are not just one single material. They comprise a whole family of materials with different properties and applications. According to European Bioplastics, a plastic material is defined as a bioplastic if it is either bio-based plastic, biodegradable plastic, or is a material with both properties. Bioplastics have the same properties as conventional plastics and offer additional advantages, such as a reduced carbon footprint or additional waste management options, such as composting.
Bioplastics are divided into three main groups:
Bio-based or partially bio-based non-biodegradable plastics such as bio-based PE, PP, or PET (so-called drop-ins) and bio-based technical performance polymers such as PTT or TPC-ET
Plastics that are both bio-based and biodegradable, such as PLA and PHA or PBS
Plastics that are based on fossil resources and are biodegradable, such as PBAT
Additionally, new materials such as PLA, PHA, cellulose or starch-based materials offer solutions with completely new functionalities such as biodegradability and compostability, and in some cases optimized barrier properties. Along with the growth in variety of bioplastic materials, properties such as flexibility, durability, printability, transparency, barrier, heat resistance, gloss and many more have been significantly enhanced.
Bioplastics have been made from sugar beet by bacteria.
Examples of bioplastics
Paptic: There are packaging materials which combine the qualities of paper and plastic. For example, Paptic is produced from wood-based fibre that contains more than 70% wood. The material is formed with foam-forming technology that saves raw material and improves the qualities of the material. The material can be produced as reels, which enables it to be delivered using existing mills. The material is spatter-proof but decomposes when submerged in water. It is more durable than paper and maintains its shape better than plastic. The material is recycled with cardboard.
Examples of bio-composites
Sulapac tins are made from wood chips and a biodegradable natural binder, and they have features similar to plastic. These packaging products tolerate water and fats, and they do not allow oxygen to pass. Sulapac products combine ecology and luxury and are not subject to design limitations. Sulapac can compete with traditional plastic tins on cost and is suitable for the same packing devices.
Woodio produces wood composite sinks and other bathroom furniture. The composite is produced by moulding a mixture of wood chips and crystal-clear binder. Woodio has developed a solid wood composite that is entirely waterproof. The material has features similar to ceramic, but at the end of its lifespan it can be used for producing energy, unlike ceramic waste. Solid wood composite is hard and can be worked with woodworking tools.
Woodcast is a renewable and biodegradable casting material. It is produced from wood chips and biodegradable plastic. It is hard and durable at room temperature, but when heated it is flexible and self-adhesive. Woodcast can be applied to all plastering and supporting elements. The material is breathable and X-ray transparent. It is used in plastering and in occupational therapy and can be moulded to any anatomical shape. Excess pieces can be reused: used casts can be disposed of either as energy waste or biowaste. The composite differs from traditional lime cast in that it doesn’t need water and is non-toxic; therefore gas masks, gauntlets or suction fans are not required when handling the cast.
For sustainable packaging
Textiles
The textile industry, or certain activities and elements of it, could be considered to be a strong global bioeconomy sector. Textiles are produced from natural fibres, regenerated fibres and synthetic fibres (Sinclair 2014). The natural fibre textile industry is based on cotton, linen, bamboo, hemp, wool, silk, angora, mohair and cashmere.
Activities related to textile production and processing that more clearly fall under the domain of the bioeconomy are developments such as the biofabrication of leather-like material using fungi, fungal cotton substitutes, and renewable fibers from fungal cell walls.
Textile fibres can be formed in chemical processes from bio-based materials. These fibres are called bio-based regenerated fibres. The oldest regenerated fibres are viscose and rayon, produced in the 19th century. The first industrial processes used a large amount of wood as raw material, as well as harmful chemicals and water. Later the process of regenerating fibres developed to reduce the use of raw materials, chemicals, water and energy.
In the 1990s the first more sustainable regenerated fibres, e.g. Lyocell, entered the market with the commercial name of Tencel. The production process uses wood cellulose and it processes the fibre without harmful chemicals.
The next generation of regenerated fibres are under development. The production processes use less or no chemicals, and the water consumption is also diminished.
Issues
Degrowth, green growth and circular economy
The bioeconomy has largely been associated with visions of "green growth". A study found that a "circular bioeconomy" may be "necessary to build a carbon neutral future in line with the climate objectives of the Paris Agreement". However, some are concerned that, with a focus or reliance on technological progress, a fundamentally unsustainable socioeconomic model might be maintained rather than changed. Some are concerned that it may not lead to an ecologization of the economy but to an economization of the biological, "the living", and caution that the potential of non-bio-based techniques to achieve greater sustainability needs to be considered. A study found that the EU interpretation of the bioeconomy, as of 2019, is "diametrically opposite to the original narrative of Baranoff and Georgescu-Roegen that told us that expanding the share of activities based on renewable resources in the economy would slow down economic growth and set strict limits on the overall expansion of the economy". Furthermore, some caution that "Silicon Valley and food corporations" could use bioeconomy technologies for greenwashing and monopoly concentration. The bioeconomy, its potentials, disruptive new modes of production and innovations may distract from the need for systemic structural socioeconomic changes, providing a false illusion of technocapitalist optimism which suggests that technological fixes can sustain contemporary patterns and structures, pre-empting structural change.
Unemployment and work reallocation
Many farmers depend on conventional methods of producing crops and many of them live in developing economies. Cellular agriculture for products such as synthetic coffee could, if the contemporary socioeconomic context (the socioeconomic system's mechanisms such as incentives and resource distribution mechanisms like markets) remains unaltered (e.g. in nature, purposes, scopes, limits and degrees), threaten their employment and livelihoods as well as the respective nation's economy and social stability. A study concluded that "given the expertise required and the high investment costs of the innovation, it seems unlikely that cultured meat immediately benefits the poor in developing countries" and emphasized that animal agriculture is often essential for the subsistence for farmers in poor countries. However, not only developing countries may be affected.
Patents, intellectual property and monopolies
Observers worry that the bioeconomy will become as opaque and free of accountability as the industry it attempts to replace, that is the current food system. The fear is that its core products will be mass-produced, nutritionally dubious meat sold at the homogeneous fast-food joints of the future.
The medical community has warned that gene patents can inhibit the practice of medicine and the progress of science. This can also apply to other areas where patents and private intellectual property licenses are being used, sometimes entirely preventing the use and continued development of knowledge and techniques for years or decades. On the other hand, some worry that without intellectual property protection as an R&D incentive, particularly at current degrees and extents, companies would no longer have the resources or incentives to perform competitive, viable biotech research, as they might not be able to generate sufficient returns from the initial R&D investment, or lower returns than from other possible expenditures. "Biopiracy" refers to "the use of intellectual property systems to legitimize the exclusive ownership and control over biological resources and biological products that have been used over centuries in non-industrialized cultures".
Rather than leading to sustainable, healthy, inexpensive, safe, accessible food being produced with little labor locally – after knowledge- and technology transfer and timely, efficient innovation – the bioeconomy may lead to aggressive monopoly-formation and exacerbated inequality. For instance, while production costs may be minimal, costs – including of medicine – may be high.
Innovation management, public spending and governance
It has been argued that public investment would be a tool governments should use to regulate and license cellular agriculture. Private firms and venture capital would likely seek to maximise investor value rather than social welfare. Moreover, radical innovation is considered to be more risky, "and likely involves more information asymmetry, so that private financial markets may imperfectly manage these frictions". Governments may also help to coordinate "since several innovators may be needed to push the knowledge frontier and make the market profitable, but no single company wants to make the early necessary investments". And investments in the relevant sectors seem to be a bottleneck hindering the transition toward a bioeconomy.
Governments could also help innovators that lack the network "to naturally obtain the visibility and political influence necessary to obtain public funds" and could help determine relevant laws.
By establishing supporting infrastructure for entrepreneurial ecosystems, they can help create a beneficial environment for innovative bioeconomy startups. Enabling such startups to act on the opportunities provided by the bioeconomy transformation further contributes to its success.
In popular media
Biopunk – so called due to similarity with cyberpunk – is a genre of science fiction that often thematizes the bioeconomy as well as its potential issues and technologies. The novel The Windup Girl portrays a society driven by a ruthless bioeconomy and ailing under climate change. In the more recent novel Change Agent prevalent black market clinics offer wealthy people unauthorized human genetic enhancement services and e.g. custom narcotics are 3D-printed locally or smuggled with soft robots. Solarpunk is another emerging genre that focuses on the relationship between human societies and the environment and also addresses many of the bioeconomy's issues and technologies such as genetic engineering, synthetic meat and commodification.
See also
Bioremediation
Biosynthesis
Chemurgy
Cross-laminated timber
Degrowth
Digital economy
European Green Deal
Plyscraper
Oleochemical
Open innovation
Single-cell protein
Synthetic ivory
Straw-bale construction
Timeline of biotechnology
Wood frame building
Working animal
References
External links
Food and Agriculture Organization of the United Nations: Sustainable and circular bioeconomy
Biotechnology
Alternative energy economics
Industries (economics)
Sustainable development
Economy by field
Ecological footprint

The ecological footprint measures human demand on natural capital, i.e. the quantity of nature it takes to support people and their economies. It tracks human demand on nature through an ecological accounting system. The accounts contrast the biologically productive area people use to satisfy their consumption to the biologically productive area available within a region, nation, or the world (biocapacity). Biocapacity is the productive area that can regenerate what people demand from nature. Therefore, the metric is a measure of human impact on the environment. As Ecological Footprint accounts measure to what extent human activities operate within the means of our planet, they are a central metric for sustainability.
The metric is promoted by the Global Footprint Network, which has developed standards to make results comparable. FoDaFo, supported by Global Footprint Network and York University, now provides the national assessments of footprints and biocapacity.
Footprint and biocapacity can be compared at the individual, regional, national or global scale. Both footprint and demands on biocapacity change every year with number of people, per person consumption, efficiency of production, and productivity of ecosystems. At a global scale, footprint assessments show how big humanity's demand is compared to what Earth can renew. Global Footprint Network estimates that, as of 2022, humanity has been using natural capital 71% faster than Earth can renew it, which they describe as meaning humanity's ecological footprint corresponds to 1.71 planet Earths. This overuse is called ecological overshoot.
Ecological footprint analysis is widely used around the world in support of sustainability assessments. It enables people to measure and manage the use of resources throughout the economy and explore the sustainability of individual lifestyles, goods and services, organizations, industry sectors, neighborhoods, cities, regions, and nations.
Overview
The ecological footprint concept and calculation method was developed as the PhD dissertation of Mathis Wackernagel, in collaboration with his supervisor Prof. William Rees at the University of British Columbia in Vancouver, Canada, from 1990 to 1994. The first academic publication about ecological footprints was written by William Rees in 1992. Originally, Wackernagel and Rees called the concept "appropriated carrying capacity". To make the idea more accessible, Rees came up with the term "ecological footprint", inspired by a computer technician who praised his new computer's "small footprint on the desk". In 1996, Wackernagel and Rees published the book Our Ecological Footprint: Reducing Human Impact on the Earth.
The simplest way to define an ecological footprint is the amount of environmental resources necessary to produce the goods and services that support an individual's lifestyle, a nation's prosperity, or the economic activity of humanity as a whole.
The model is a means of comparing lifestyles, per capita consumption, and population numbers, and checking these against biocapacity. The tool can inform policy by examining to what extent a nation uses more (or less) than is available within its territory, or to what extent the nation's lifestyle and population density would be replicable worldwide. The footprint can be a useful tool to educate people about overconsumption and overpopulation, with the aim of altering personal behavior or public policies. Ecological footprints may be used to argue that current lifestyles and human numbers are not sustainable. Country-by-country comparisons show the inequalities of resource use on this planet.
The touristic ecological footprint (TEF) is the ecological footprint of visitors to a particular destination, and depends on the tourists' behavior. Comparisons of TEFs can indicate the benefits of alternative destinations, modes of travel, food choices, types of lodging, and activities.
The carbon footprint is a component of the total ecological footprint. Often, when only the carbon footprint is reported, it is expressed in weight of CO2 (or CO2e, representing greenhouse gas global warming potential), but it can also be expressed in land areas like ecological footprints. Both can be applied to products, people, or whole societies.
Methodology
Ecological footprint accounting is built on the recognition that regenerative resources are the most physically limiting resources of all. Even fossil fuel use is limited far more by the amount of sequestration the biosphere can provide than by the amounts left underground. The same is true for ores and minerals, where the limiting factor is how much damage to the biosphere we are willing to accept in order to extract and concentrate those materials, rather than how much of them is still left underground. Therefore, the focus of ecological footprint accounting is human competition for regenerative resources.
The amount of the planet's regeneration, including how many resources are renewed and how much waste the planet can absorb, is dubbed biocapacity. Ecological footprints therefore track how much biocapacity is needed to provide for all the inputs that human activities demand. The footprint can be calculated at any scale: for an activity, a person, a community, a city, a region, a nation, or humanity as a whole.
Footprints can be split into consumption categories: food, housing, and goods and services. Alternatively, they can be organized by the area types occupied: cropland, pasture, forests for forest products, forests for carbon sequestration, marine areas, etc.
When this approach is applied to an activity such as the manufacturing of a product or driving a car, it uses data from life-cycle analysis. Such applications translate the consumption of energy, biomass (food, fiber), building material, water and other resources into normalized land areas called global hectares (gha) needed to provide these inputs.
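In outline, the normalization divides each consumed quantity by the world-average yield of the corresponding land type and weights the result by an equivalence factor that converts hectares of that land type into global hectares. The following minimal sketch illustrates the arithmetic only; the yields and equivalence factors shown are illustrative placeholders, not official account values.

# Outline of the global-hectare normalization: footprint of a consumed
# good = (tonnes consumed / world-average yield in t/ha) * equivalence
# factor (gha per hectare of that land type). All numbers are placeholders.
def footprint_gha(tonnes, world_yield_t_per_ha, equivalence_factor):
    return tonnes / world_yield_t_per_ha * equivalence_factor

demand = [
    # (item, tonnes consumed, world yield t/ha, equivalence factor)
    ("wheat",        3.0, 3.5, 2.5),   # cropland
    ("beef pasture", 0.2, 0.1, 0.5),   # grazing land
    ("timber",       1.0, 2.0, 1.3),   # forest land
]
total = sum(footprint_gha(t, y, eq) for _, t, y, eq in demand)
print(f"total footprint: {total:.2f} gha")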
Since the Global Footprint Network's inception in 2003, it has calculated the ecological footprint from UN data sources for the world as a whole and for over 200 nations (known as the National Footprint and Biocapacity Accounts). This task has now been taken over by FoDaFo and York University. The total footprint, expressed as the number of Earths needed to sustain the world's population at that level of consumption, is also calculated. Every year the calculations are updated to the latest year with complete UN statistics. The time series are also recalculated with every update, since UN statistics sometimes correct historical data sets. Results are available on an open data platform.
Lin et al. (2018) find that the trends for countries and the world have stayed consistent despite data updates. In addition, a recent study by the Swiss Ministry of Environment independently recalculated the Swiss trends and reproduced them within 1–4% for the time period that they studied (1996–2015). Since 2006, a first set of ecological footprint standards exist that detail both communication and calculation procedures. The latest version are the updated standards from 2009.
The ecological footprint accounting method at the national level is described on the website of the Global Footprint Network or in greater detail in academic papers, including Borucke et al.
The National Accounts Review Committee has published a research agenda on how to improve the accounts.
Footprint measurements
For 2023, the Global Footprint Network estimated humanity's ecological footprint as 1.71 planet Earths: according to its calculations, humanity's demands were 1.71 times more than what the planet's ecosystems renewed.
If this rate of resource use is not reduced, persistent overshoot would suggest the occurrence of continued ecological deterioration and a potentially permanent decrease in Earth's human carrying capacity.
In 2022, the average biologically productive area per person worldwide was approximately 1.6 global hectares (gha) per capita. The U.S. footprint per person was 7.5 gha, and that of Switzerland was 3.7 gha, that of China 3.6 gha, and that of India 1.0 gha. In its Living Planet Report 2022, the WWF documents a 69% decline in the world's vertebrate populations between 1970 and the present, and links this decline to humanity greatly exceeding global biocapacity. Wackernagel and Rees originally estimated that the available biological capacity for the 6 billion people on Earth at that time was about 1.3 hectares per person, which is smaller than the 1.6 global hectares published for 2024, because the initial studies neither used global hectares nor included bioproductive marine areas.
According to the 2018 edition of the National Footprint Accounts, humanity's total ecological footprint has exhibited an increasing trend since 1961, growing an average of 2.1% per year (SD = 1.9). Humanity's ecological footprint was 7.0 billion gha in 1961 and increased to 20.6 billion gha in 2014, a function of higher per capita resource use and population increase. The world-average ecological footprint in 2014 was 2.8 global hectares per person. The carbon footprint is the fastest growing part of the ecological footprint and currently accounts for about 60% of humanity's total ecological footprint.
The Earth's biocapacity has not increased at the same rate as the ecological footprint. The increase in biocapacity averaged only 0.5% per year (SD = 0.7). Because of agricultural intensification, biocapacity was 9.6 billion gha in 1961 and grew to 12.2 billion gha in 2016.
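These long-run averages can be sanity-checked as compound annual growth rates, r = (end/start)^(1/years) − 1, using the totals quoted above:

# Compound annual growth rate implied by the footprint and biocapacity
# totals quoted above: r = (end / start) ** (1 / years) - 1.
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

print("footprint 1961-2014:   %.1f%%/yr" % (100 * cagr(7.0, 20.6, 2014 - 1961)))
print("biocapacity 1961-2016: %.1f%%/yr" % (100 * cagr(9.6, 12.2, 2016 - 1961)))
# roughly 2.1%/yr vs 0.4%/yr, broadly consistent with the reported averages
# (which are arithmetic means of year-on-year growth, not compound rates)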
However, this increased biocapacity for people came at the expense of other species. Agricultural intensification involved increased fertilizer use which led to eutrophication of streams and ponds; increased pesticide use which decimated pollinator populations; increased water withdrawals which decreased river health; and decreased land left wild or fallow which decreased wildlife populations on agricultural lands. This reminds us that ecological footprint calculations are anthropocentric, assuming that all Earth's biocapacity is legitimately available to human beings. If we assume that some biocapacity should be left for other species, the level of ecological overshoot increases.
According to Wackernagel and the organisation he has founded, the Earth has been in "overshoot", where humanity is using more resources and generating waste at a pace that the ecosystem cannot renew, since the 1970s. According to the Global Footprint Network's calculations, currently people use Earth's resources at approximately 171% of capacity. This implies that humanity is well over Earth's human carrying capacity at current levels of affluence. According to the GFN:In 2023, Earth Overshoot Day fell on August 2nd. Earth Overshoot Day marks the date when humanity has exhausted nature's budget for the year. For the rest of the year, we are maintaining our ecological deficit by drawing down local resource stocks and accumulating carbon dioxide in the atmosphere. We are operating in overshoot. Currently, more than 85% of humanity lives in countries that run an ecological deficit. This means their citizens use more resources and generate more waste and pollution than can be sustained by the biocapacity found within their national boundaries. In some cases, countries are running an ecological deficit because their per capita ecological footprints are higher than the hectares of bioproductive land available on average globally (this was estimated at <1.7 hectares per person in 2019). Examples include France, Germany and Saudi Arabia. In other cases, per capita resource use may be lower than the global available average, but countries are running an ecological deficit because their populations are high enough that they still use more bioproductive land than they have within their national borders. Examples include China, India and the Philippines. Finally, many countries run an ecological deficit because of both high per capita resource use and large populations; such countries tend to be way over their national available biocapacities. Examples include Japan, the United Kingdom and the United States.
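Earth Overshoot Day can be approximated from the overshoot ratio: the day of the year on which the annual regenerative budget runs out is roughly 365 × (biocapacity / footprint). A minimal sketch; the Global Footprint Network's published methodology differs in detail, so this is an approximation only.

# Approximate Earth Overshoot Day as day-of-year 365 * (1 / Earths demanded):
# the date by which a year's regenerative budget has been consumed.
import datetime

def overshoot_day(earths_demanded, year):
    day = int(365 * (1.0 / earths_demanded))
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day - 1)

print(overshoot_day(1.71, 2023))  # 2023-08-01, close to the reported early-August date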
According to William Rees, writing in 2011, "the average world citizen has an eco-footprint of about 2.7 global average hectares while there are only 2.1 global hectare of bioproductive land and water per capita on earth. This means that humanity has already overshot global biocapacity by 30% and now lives unsustainabily by depleting stocks of 'natural capital'."
Since then, due to population growth and further refinements in the calculations, available biocapacity per person has decreased to <1.7 hectares per person globally. More recently, Rees has written:The human enterprise is in potentially disastrous 'overshoot', exploiting the ecosphere beyond ecosystems' regenerative capacity and filling natural waste sinks to overflowing. Economic behavior that was once 'rational' has become maladaptive. This situation is the inevitable outcome of humanity's natural expansionist tendencies reinforced by ecologically vacuous growth-oriented 'neoliberal' economic theory.Rees now believes that economic and demographic degrowth are necessary to create societies with small enough ecological footprints to remain sustainable and avoid civilizational collapse.
Footprint by country
The world-average ecological footprint in 2013 was 2.8 global hectares per person. The average per country ranges from 14.3 (Qatar) to 0.5 (Yemen) global hectares per person. There is also a high variation within countries, based on individual lifestyles and wealth.
In 2022, countries with the top ten per capita ecological footprints were: Qatar (14.3 global hectares), Luxembourg (13.0), Cook Islands (8.3), Bahrain (8.2), United States (8.1), United Arab Emirates (8.1), Canada (8.1), Estonia (8.0), Kuwait (7.9) and Belize (7.9).
Total ecological footprint for a nation is found by multiplying its per capita ecological footprint by its total population. Total ecological footprint ranges from 5,540,000,000 global hectares used (China) to 145,000 (Cook Islands) global hectares used. In 2022, the top ten countries in total ecological footprint were: China (5.54 billion global hectares), United States (2.66 billion), India (1.64 billion), Russian Federation (774 million), Japan (586 million), Brazil (542 million), Indonesia (460 million), Germany (388 million), Republic of Korea (323 million) and Mexico (301 million). These were the ten nations putting the greatest strain on global ecosystem services.
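The relationship is simple multiplication; a quick check using the US per capita figure above and an approximate population:

# Total national footprint = per-capita footprint * population. The US
# population figure below is an approximation used for illustration only.
us_per_capita_gha = 8.1        # from the 2022 figures above
us_population = 333e6          # approximate, for illustration
total_gha = us_per_capita_gha * us_population
print(f"{total_gha:.2e} gha")  # ~2.7e9, close to the 2.66 billion gha reported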
The Western Australian government's State of the Environment Report included an ecological footprint measure for the average Western Australian of about 15 hectares in 2007, seven times the average per-person footprint on the planet.
Sustainability at the scale of individual countries can be examined by contrasting their ecological footprint with their UN Human Development Index (a measure of standard of living). Such a comparison shows what is necessary for countries to maintain an acceptable standard of living for their citizens while, at the same time, maintaining sustainable resource use. The general trend is for higher standards of living to become less sustainable. As always, population growth has a marked influence on total consumption and production, with larger populations becoming less sustainable. Most countries around the world continue to become more populous, although a few seem to have stabilized or are even beginning to shrink. The information generated by reports at the national, regional and city scales confirms the global trend towards societies becoming less sustainable over time.
Studies in the United Kingdom
The UK's average ecological footprint is 5.45 global hectares per capita (gha) with variations between regions ranging from 4.80 gha (Wales) to 5.56 gha (East England).
BedZED, a 96-home mixed-income housing development in South London, was designed by Bill Dunster Architects and sustainability consultants BioRegional for the Peabody Trust. Despite being populated by residents of fairly typical means, BedZED was found to have a footprint of 3.20 gha per capita (not including visitors), due to on-site renewable energy production, energy-efficient architecture, and an extensive green-lifestyles programme that included London's first car-sharing club. Findhorn Ecovillage, a rural intentional community in Moray, Scotland, had a total footprint of 2.56 gha per capita, including the many guests and visitors who travel to the community. The residents alone had a footprint of 2.71 gha, a little over half the UK national average and one of the lowest ecological footprints of any community measured so far in the industrialized world. Keveral Farm, an organic farming community in Cornwall, was found to have a footprint of 2.4 gha, though with substantial differences in footprints among community members.
Ecological footprint at the individual level
In a 2012 study comparing consumers acting 'green' versus 'brown' (where 'green' consumers are "expected to have significantly lower ecological impact than 'brown' consumers"), "the research found no significant difference between the carbon footprints of green and brown consumers". A 2013 study reached the same conclusion.
Reviews and critiques
Early criticism was published by van den Bergh and Verbruggen in 1999 and updated in 2014. Their colleague Fiala published similar criticism in 2008.
A comprehensive review commissioned by the Directorate-General for the Environment (European Commission) was published in June 2008. The review found the concept unique and useful for assessing progress on the EU's Resource Strategy, and recommended further improvements in data quality, methodologies and assumptions.
Blomqvist et al. published a critical paper in 2013. It led to a reply from Rees and Wackernagel (2013), and a rejoinder by Blomqvist et al. (2013).
An additional strand of critique came from Giampietro and Saltelli (2014), with a reply from Goldfinger et al. (2014) and a rejoinder by Giampietro and Saltelli (2014). A joint paper authored by the critics (Giampietro and Saltelli) and proponents (various Global Footprint Network researchers) summarized the terms of the controversy in the journal Ecological Indicators. Additional comments were offered by van den Bergh and Grazi (2015).
A number of national government agencies have performed collaborative or independent research to test the reliability of the ecological footprint accounting method and its results. They have largely confirmed the accounts' results; those who reproduced the assessment generated near-identical figures. Such reviews include those of Switzerland, Germany, France, Ireland, the United Arab Emirates and the European Commission.
Global Footprint Network has summarized methodological limitations and criticism in a comprehensive report available on its website.
Newman (2006) has argued that the ecological footprint concept may have an anti-urban bias, as it does not consider the opportunities created by urban growth. He argues that calculating the ecological footprint for densely populated areas, such as a city or a small country with a comparatively large population (e.g. New York and Singapore, respectively), may lead to the perception of these populations as "parasitic", when in reality ecological footprints simply document the resource dependence of cities on rural hinterlands. Critics argue that this is a dubious characterization in any case, since farmers in developed nations may easily consume more resources than urban inhabitants, due to transportation requirements and the unavailability of economies of scale; such moral conclusions also seem to be an argument for autarky, and amount to blaming a scale for the user's dietary choices. Even if true, such criticisms do not negate the value of measuring and comparing the ecological footprints of different cities, regions or nations. Such assessments can provide helpful insights into the success or failure of different environmental policies.
Since this metric tracks biocapacity, replacing original ecosystems with high-productivity agricultural monocultures can lead to such regions being credited with a higher biocapacity. For example, replacing ancient woodlands or tropical forests with monoculture forests or plantations may therefore decrease the estimated ecological footprint. Similarly, if organic farming yields are lower than those of conventional methods, organic farming may be "penalized" with a larger ecological footprint. Complementary biodiversity indicators attempt to address this. The WWF's Living Planet Report combines footprint calculations with the Living Planet Index of biodiversity. A modified ecological footprint that takes biodiversity into account has been created for use in Australia.
For many years, environmentalists have used the ecological footprint to quantify ecological degradation as it relates to an individual. More recently, there has been debate about the reliability of this method.
See also
Biocapacity
Carbon footprint
Carrying capacity
Dependency theory
Earth Overshoot Day (formerly also called Ecological Debt Day)
Ecological economics
Ecosystem valuation
Environmental impact assessment
Greenhouse debt
Greenhouse gas emissions accounting
Happy Planet Index
Human Footprint
Life cycle assessment
List of countries by ecological footprint
Netherlands fallacy
Our Common Future
Overshoot (population)
Physical balance of trade
Simon–Ehrlich wager
Social metabolism
The Limits to Growth
Water footprint
References
Further reading
Rees, W. E. and Wackernagel, M. (1994). "Ecological footprints and appropriated carrying capacity: Measuring the natural capital requirements of the human economy", in Jansson, A. et al. (eds.), Investing in Natural Capital: The Ecological Economics Approach to Sustainability. Washington, D.C.: Island Press.
Lenzen, M. and Murray, S. A. (2003). The Ecological Footprint – Issues and Trends. ISA Research Paper 01-03.
Chambers, N., Simmons, C. and Wackernagel, M. (2000). Sharing Nature's Interest: Ecological Footprints as an Indicator of Sustainability. London: Earthscan. (See also http://www.ecologicalfootprint.com )
External links
WWF "Living Planet Report", a biannual calculation of national and global footprints
Green Score City Index, a quarterly calculation of city footprints in Canada
US Environmental Footprint Factsheet
Interview with Bill Rees
Sustainability metrics and indices
Economic indicators
Waste minimisation
Human impact on the environment
Human ecology
Ecological economics
Environmental social science concepts
Environmental terminology | 0.788331 | 0.99372 | 0.78338 |
Sustainable energy | Energy is sustainable if it "meets the needs of the present without compromising the ability of future generations to meet their own needs." Definitions of sustainable energy usually look at its effects on the environment, the economy, and society. These impacts range from greenhouse gas emissions and air pollution to energy poverty and toxic waste. Renewable energy sources such as wind, hydro, solar, and geothermal energy can cause environmental damage but are generally far more sustainable than fossil fuel sources.
The role of non-renewable energy sources in sustainable energy is controversial. Nuclear power does not produce carbon pollution or air pollution, but has drawbacks that include radioactive waste, the risk of nuclear proliferation, and the risk of accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but may lead to a delay in switching to more sustainable options. Carbon capture and storage can be built into power plants to remove their carbon dioxide emissions, but this technology is expensive and has rarely been implemented.
Fossil fuels provide 85% of the world's energy consumption, and the energy system is responsible for 76% of global greenhouse gas emissions. Around 790 million people in developing countries lack access to electricity, and 2.6 billion rely on polluting fuels such as wood or charcoal to cook. Air pollution from cooking with biomass and from burning fossil fuels causes an estimated 7 million deaths each year. Limiting global warming to 2 °C will require transforming energy production, distribution, storage, and consumption. Universal access to clean electricity can have major benefits to the climate, human health, and the economies of developing countries.
Climate change mitigation pathways have been proposed to limit global warming to 2 °C. These include phasing out coal-fired power plants, conserving energy, producing more electricity from clean sources such as wind and solar, and switching from fossil fuels to electricity for transport and heating buildings. Power output from some renewable energy sources varies depending on when the wind blows and the sun shines. Switching to renewable energy can therefore require electrical grid upgrades, such as the addition of energy storage. Some processes that are difficult to electrify can use hydrogen fuel produced from low-emission energy sources. In the International Energy Agency's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023.
Wind and solar market share grew to 8.5% of worldwide electricity in 2019, and costs continue to fall. The Intergovernmental Panel on Climate Change (IPCC) estimates that 2.5% of world gross domestic product (GDP) would need to be invested in the energy system each year between 2016 and 2035 to limit global warming to 1.5 °C. Governments can fund the research, development, and demonstration of new clean energy technologies. They can also build infrastructure for electrification and sustainable transport. Finally, governments can encourage clean energy deployment with policies such as carbon pricing, renewable portfolio standards, and phase-outs of fossil fuel subsidies. These policies may also increase energy security.
Definitions and background
Definitions
The United Nations Brundtland Commission described the concept of sustainable development, for which energy is a key component, in its 1987 report Our Common Future. It defined sustainable development as meeting "the needs of the present without compromising the ability of future generations to meet their own needs". This description of sustainable development has since been referenced in many definitions and explanations of sustainable energy.
There is no universally accepted interpretation of how the concept of sustainability applies to energy on a global scale. Working definitions of sustainable energy encompass the environmental, economic, and social dimensions of sustainability. Historically, the concept of sustainable energy development has focused on emissions and on energy security. Since the early 1990s, the concept has broadened to encompass wider social and economic issues.
The environmental dimension of sustainability includes greenhouse gas emissions, impacts on biodiversity and ecosystems, hazardous waste and toxic emissions, water consumption, and depletion of non-renewable resources. Energy sources with low environmental impact are sometimes called green energy or clean energy. The economic dimension of sustainability covers economic development, efficient use of energy, and energy security to ensure that each country has constant access to sufficient energy. Social issues include access to affordable and reliable energy for all people, workers' rights, and land rights.
Environmental impacts
The current energy system contributes to many environmental problems, including climate change, air pollution, biodiversity loss, the release of toxins into the environment, and water scarcity. As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. The 2015 international Paris Agreement on climate change aims to limit global warming to well below 2 °C (3.6 °F) and preferably to 1.5 °C (2.7 °F); achieving this goal will require that emissions be reduced as soon as possible and reach net zero by mid-century.
The burning of fossil fuels and biomass is a major source of air pollution, which causes an estimated 7 million deaths each year, with the greatest attributable disease burden seen in low and middle-income countries. Fossil-fuel burning in power plants, vehicles, and factories is the main source of the emissions that react in the atmosphere to cause acid rain. Air pollution is the second-leading cause of death from non-infectious disease. An estimated 99% of the world's population lives with levels of air pollution that exceed the World Health Organization's recommended limits.
Cooking with polluting fuels such as wood, animal dung, coal, or kerosene is responsible for nearly all indoor air pollution, which causes an estimated 1.6 to 3.8 million deaths annually, and also contributes significantly to outdoor air pollution. Health effects are concentrated among women, who are likely to be responsible for cooking, and young children.
Environmental impacts extend beyond the by-products of combustion. Oil spills at sea harm marine life and may cause fires which release toxic emissions. Around 10% of global water use goes to energy production, mainly for cooling in thermal energy plants. In dry regions, this contributes to water scarcity. Bioenergy production, coal mining and processing, and oil extraction also require large amounts of water. Excessive harvesting of wood and other combustible material for burning can cause serious local environmental damage, including desertification.
Sustainable development goals
Meeting existing and future energy demands in a sustainable way is a critical challenge for the global goal of limiting climate change while maintaining economic growth and enabling living standards to rise. Reliable and affordable energy, particularly electricity, is essential for health care, education, and economic development. As of 2020, 790 million people in developing countries do not have access to electricity, and around 2.6 billion rely on burning polluting fuels for cooking.
Improving energy access in the least-developed countries and making energy cleaner are key to achieving most of the United Nations 2030 Sustainable Development Goals, which cover issues ranging from climate action to gender equality. Sustainable Development Goal 7 calls for "access to affordable, reliable, sustainable and modern energy for all", including universal access to electricity and to clean cooking facilities by 2030.
Energy conservation
Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with fewer goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of the greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals.
Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Another approach is to use fewer materials whose production requires a lot of energy, for example through better building design and recycling. Behavioural changes such as using videoconferencing rather than business flights, or making urban trips by cycling, walking or public transport rather than by car, are another way to conserve energy. Government policies to improve efficiency can include building codes, performance standards, carbon pricing, and the development of energy-efficient infrastructure to encourage changes in transport modes.
The energy intensity of the global economy (the amount of energy consumed per unit of gross domestic product, GDP) is a rough indicator of the energy efficiency of economic production. In 2010, global energy intensity was 5.6 megajoules (1.6 kWh) per US dollar of GDP. United Nations goals call for energy intensity to decrease by 2.6% each year between 2010 and 2030. In recent years this target has not been met. For instance, between 2017 and 2018, energy intensity decreased by only 1.1%.
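The intensity target compounds year on year. A minimal sketch, assuming the 2010 figure quoted above as the starting point:

# Energy intensity = energy consumed per unit of GDP.
intensity_2010 = 5.6           # megajoules per US dollar of GDP (2010 figure)
annual_cut = 0.026             # the 2.6% yearly decrease called for by UN goals

# Intensity in 2030 if the target were met every year from 2010:
intensity_2030 = intensity_2010 * (1 - annual_cut) ** 20
print(f"{intensity_2030:.1f} MJ per dollar")   # roughly 3.3 MJ/$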
Efficiency improvements often lead to a rebound effect in which consumers use the money they save to buy more energy-intensive goods and services. For example, recent technical efficiency improvements in transport and buildings have been largely offset by trends in consumer behaviour, such as selecting larger vehicles and homes.
Sustainable energy sources
Renewable energy sources
Renewable energy sources are essential to sustainable energy, as they generally strengthen energy security and emit far fewer greenhouse gases than fossil fuels. Renewable energy projects sometimes raise significant sustainability concerns, such as risks to biodiversity when areas of high ecological value are converted to bioenergy production or wind or solar farms.
Hydropower is the largest source of renewable electricity while solar and wind energy are growing rapidly. Photovoltaic solar and onshore wind are the cheapest forms of new power generation capacity in most countries. For more than half of the 770 million people who currently lack access to electricity, decentralised renewable energy such as solar-powered mini-grids is likely the cheapest method of providing it by 2030. United Nations targets for 2030 include substantially increasing the proportion of renewable energy in the world's energy supply.
According to the International Energy Agency, renewable energy sources like wind and solar power are now a commonplace source of electricity, making up 70% of all new investments made in the world's power generation. The Agency expects renewables to become the primary energy source for electricity generation globally in the next three years, overtaking coal.
Solar
The Sun is Earth's primary source of energy, a clean and abundantly available resource in many regions. In 2019, solar power provided around 3% of global electricity, mostly through solar panels based on photovoltaic cells (PV). Solar PV is expected to be the electricity source with the largest installed capacity worldwide by 2027. The panels are mounted on top of buildings or installed in utility-scale solar parks. Costs of solar photovoltaic cells have dropped rapidly, driving strong growth in worldwide capacity. The cost of electricity from new solar farms is competitive with, or in many places, cheaper than electricity from existing coal plants. Various projections of future energy use identify solar PV as one of the main sources of energy generation in a sustainable mix.
Most components of solar panels can be easily recycled, but this is not always done in the absence of regulation. Panels typically contain heavy metals, so they pose environmental risks if put in landfills. It takes fewer than two years for a solar panel to produce as much energy as was used for its production. Less energy is needed if materials are recycled rather than mined.
In concentrated solar power, solar rays are concentrated by a field of mirrors, heating a fluid. Electricity is produced from the resulting steam with a heat engine. Concentrated solar power can support dispatchable power generation, as some of the heat is typically stored to enable electricity to be generated when needed. In addition to electricity production, solar energy is used more directly; solar thermal heating systems are used for hot water production, heating buildings, drying, and desalination.
Wind power
Wind has been an important driver of development over millennia, providing mechanical energy for industrial processes, water pumps, and sailing ships. Modern wind turbines are used to generate electricity and provided approximately 6% of global electricity in 2019. Electricity from onshore wind farms is often cheaper than existing coal plants and competitive with natural gas and nuclear. Wind turbines can also be placed offshore, where winds are steadier and stronger than on land but construction and maintenance costs are higher.
Onshore wind farms, often built in wild or rural areas, have a visual impact on the landscape. While collisions with wind turbines kill both bats and, to a lesser extent, birds, these impacts are lower than those of other infrastructure such as windows and transmission lines. The noise and flickering light created by the turbines can cause annoyance and constrain construction near densely populated areas. Wind power, in contrast to nuclear and fossil fuel plants, does not consume water. Little energy is needed for wind turbine construction compared to the energy produced by the wind power plant itself. Turbine blades are not fully recyclable, and research into methods of manufacturing easier-to-recycle blades is ongoing.
Hydropower
Hydroelectric plants convert the energy of moving water into electricity. In 2020, hydropower supplied 17% of the world's electricity, down from a high of nearly 20% in the mid-to-late 20th century.
In conventional hydropower, a reservoir is created behind a dam. Conventional hydropower plants provide a highly flexible, dispatchable electricity supply. They can be combined with wind and solar power to meet peaks in demand and to compensate when wind and sun are less available.
Compared to reservoir-based facilities, run-of-the-river hydroelectricity generally has less environmental impact. However, its ability to generate power depends on river flow, which can vary with daily and seasonal weather. Reservoirs provide water quantity controls that are used for flood control and flexible electricity output while also providing security during drought for drinking water supply and irrigation.
Hydropower ranks among the energy sources with the lowest levels of greenhouse gas emissions per unit of energy produced, but levels of emissions vary enormously between projects. The highest emissions tend to occur with large dams in tropical regions. These emissions are produced when the biological matter that becomes submerged in the reservoir's flooding decomposes and releases carbon dioxide and methane. Deforestation and climate change can reduce energy generation from hydroelectric dams. Depending on location, large dams can displace residents and cause significant local environmental damage; potential dam failure could place the surrounding population at risk.
Geothermal
Geothermal energy is produced by tapping into deep underground heat and harnessing it to generate electricity or to heat water and buildings. The use of geothermal energy is concentrated in regions where heat extraction is economical: a combination of high temperatures, heat flow, and permeability (the ability of the rock to allow fluids to pass through) is needed. Power is produced from the steam created in underground reservoirs. Geothermal energy provided less than 1% of global energy consumption in 2020.
Geothermal energy is a renewable resource because thermal energy is constantly replenished from neighbouring hotter regions and the radioactive decay of naturally occurring isotopes. On average, the greenhouse gas emissions of geothermal-based electricity are less than 5% that of coal-based electricity. Geothermal energy carries a risk of inducing earthquakes, needs effective protection to avoid water pollution, and releases toxic emissions which can be captured.
Bioenergy
Biomass is renewable organic material that comes from plants and animals. It can either be burned to produce heat and electricity or be converted into biofuels such as biodiesel and ethanol, which can be used to power vehicles.
The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide; those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will absorb carbon dioxide from the air as they grow. However, the establishment and cultivation of bioenergy crops can displace natural ecosystems, degrade soils, and consume water resources and synthetic fertilisers.
Approximately one-third of all wood used for traditional heating and cooking in tropical areas is harvested unsustainably. Bioenergy feedstocks typically require significant amounts of energy to harvest, dry, and transport; the energy usage for these processes may emit greenhouse gases. In some cases, the impacts of land-use change, cultivation, and processing can result in higher overall carbon emissions for bioenergy compared to using fossil fuels.
Use of farmland for growing biomass can result in less land being available for growing food. In the United States, around 10% of motor gasoline has been replaced by corn-based ethanol, which requires a significant proportion of the harvest. In Malaysia and Indonesia, clearing forests to produce palm oil for biodiesel has led to serious social and environmental effects, as these forests are critical carbon sinks and habitats for diverse species. Since photosynthesis captures only a small fraction of the energy in sunlight, producing a given amount of bioenergy requires a large amount of land compared to other renewable energy sources.
Second-generation biofuels which are produced from non-food plants or waste reduce competition with food production, but may have other negative effects including trade-offs with conservation areas and local air pollution. Relatively sustainable sources of biomass include algae, waste, and crops grown on soil unsuitable for food production.
Carbon capture and storage technology can be used to capture emissions from bioenergy power plants. This process is known as bioenergy with carbon capture and storage (BECCS) and can result in net carbon dioxide removal from the atmosphere. However, BECCS can also result in net positive emissions depending on how the biomass material is grown, harvested, and transported. Deployment of BECCS at scales described in some climate change mitigation pathways would require converting large amounts of cropland.
Marine energy
Marine energy has the smallest share of the energy market. It includes ocean thermal energy conversion (OTEC), tidal power, which is approaching maturity, and wave power, which is earlier in its development. Two tidal barrage systems, in France and in South Korea, make up 90% of global production. While single marine energy devices pose little risk to the environment, the impacts of larger devices are less well known.
Non-renewable energy sources
Fossil fuel switching and mitigation
Switching from coal to natural gas has advantages in terms of sustainability. For a given unit of energy produced, the life-cycle greenhouse gas emissions of natural gas are around 40 times those of wind or nuclear energy, but far lower than those of coal. Burning natural gas produces around half the emissions of coal when used to generate electricity, and around two-thirds the emissions of coal when used to produce heat. Natural gas combustion also produces less air pollution than coal. However, natural gas is a potent greenhouse gas in itself, and leaks during extraction and transportation can negate the advantages of switching away from coal. The technology to curb methane leaks is widely available but is not always used.
Switching from coal to natural gas reduces emissions in the short term and thus contributes to climate change mitigation. However, in the long term it does not provide a path to net-zero emissions. Developing natural gas infrastructure risks carbon lock-in and stranded assets, where new fossil infrastructure either commits to decades of carbon emissions, or has to be written off before it makes a profit.
The greenhouse gas emissions of fossil fuel and biomass power plants can be significantly reduced through carbon capture and storage (CCS). Most studies use a working assumption that CCS can capture 85–90% of the carbon dioxide emissions from a power plant. Even if 90% of the emitted carbon dioxide is captured from a coal-fired power plant, its uncaptured emissions are still many times greater than the emissions of nuclear, solar or wind energy per unit of electricity produced.
Since coal plants using CCS are less efficient, they require more coal and thus increase the pollution associated with mining and transporting coal. The CCS process is expensive, with costs depending considerably on the location's proximity to suitable geology for carbon dioxide storage. Deployment of this technology is still very limited, with only 21 large-scale CCS plants in operation worldwide as of 2020.
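The scale of the gap is easy to see with rough numbers. A minimal sketch; the per-kilowatt-hour figures are assumed, indicative life-cycle values, not measurements from any particular plant:

# Why 85-90% capture still leaves coal well above wind or nuclear.
coal_g_per_kwh = 820     # life-cycle g CO2-eq per kWh for coal power (assumed)
wind_g_per_kwh = 12      # life-cycle g CO2-eq per kWh for wind power (assumed)
capture = 0.90           # fraction of plant emissions captured

# Simplification: capture applied to the whole life-cycle figure, ignoring
# uncapturable upstream emissions from mining and transport.
residual = coal_g_per_kwh * (1 - capture)
print(residual / wind_g_per_kwh)   # ~7x wind's emissions per kWh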
Nuclear power
Nuclear power has been used since the 1950s as a low-carbon source of baseload electricity. Nuclear power plants in over 30 countries generate about 10% of global electricity. As of 2019, nuclear generated over a quarter of all low-carbon energy, making it the second largest source after hydropower.
Nuclear power's lifecycle greenhouse gas emissions, including the mining and processing of uranium, are similar to the emissions from renewable energy sources. Nuclear power uses little land per unit of energy produced, compared to the major renewables, and does not create local air pollution. Although the uranium ore used to fuel nuclear fission plants is a non-renewable resource, enough exists to provide a supply for hundreds to thousands of years. However, the uranium resources that can be accessed economically are at present limited, and uranium production might struggle to keep pace during an expansion phase. Climate change mitigation pathways consistent with ambitious goals typically see an increase in power supply from nuclear.
There is controversy over whether nuclear power is sustainable, in part due to concerns around nuclear waste, nuclear weapon proliferation, and accidents. Radioactive nuclear waste must be managed for thousands of years and nuclear power plants create fissile material that can be used for weapons. For each unit of energy produced, nuclear energy has caused far fewer accidental and pollution-related deaths than fossil fuels, and the historic fatality rate of nuclear is comparable to renewable sources. Public opposition to nuclear energy often makes nuclear plants politically difficult to implement.
Reducing the time and cost of building new nuclear plants has been a goal for decades, but costs remain high and timescales long. Various new forms of nuclear energy are in development that aim to address the drawbacks of conventional plants. Fast breeder reactors are capable of recycling nuclear waste and can therefore significantly reduce the amount of waste that requires geological disposal, but they have not yet been deployed on a large-scale commercial basis. Nuclear power based on thorium (rather than uranium) may be able to provide higher energy security for countries that do not have a large supply of uranium. Small modular reactors may have several advantages over current large reactors: it should be possible to build them faster, and their modularization would allow for cost reductions via learning-by-doing.
Several countries are attempting to develop nuclear fusion reactors, which would generate small amounts of waste and no risk of explosions. Although fusion power has taken steps forward in the laboratory, the multi-decade timescale needed to bring it to commercialization and then to scale means it will not contribute to a 2050 net zero goal for climate change mitigation.
Energy system transformation
Decarbonisation of the global energy system
The emissions reductions necessary to keep global warming below 2 °C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. For example, transitioning from oil to solar power as the energy source for cars requires the generation of solar electricity, modifications to the electrical grid to accommodate fluctuations in solar panel output and higher overall demand, the adoption of electric cars, and networks of electric vehicle charging facilities and repair shops.
Many climate change mitigation pathways envision three main aspects of a low-carbon energy system:
The use of low-emission energy sources to produce electricity
Electrification – that is, increased use of electricity instead of directly burning fossil fuels
Accelerated adoption of energy efficiency measures
Some energy-intensive technologies and processes are difficult to electrify, including aviation, shipping, and steelmaking. There are several options for reducing the emissions from these sectors: biofuels and synthetic carbon-neutral fuels can power many vehicles that are designed to burn fossil fuels; however, biofuels cannot be sustainably produced in the quantities needed, and synthetic fuels are currently very expensive. For some applications, the most prominent alternative to electrification is to develop a system based on sustainably produced hydrogen fuel.
Full decarbonisation of the global energy system is expected to take several decades and can mostly be achieved with existing technologies. In the IEA's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. Technologies that are relatively immature include batteries and processes to create carbon-neutral fuels. Developing new technologies requires research and development, demonstration, and cost reductions via deployment.
The transition to a zero-carbon energy system will bring strong co-benefits for human health: the World Health Organization estimates that efforts to limit global warming to 1.5 °C could save millions of lives each year from reductions in air pollution alone. With good planning and management, pathways exist to provide universal access to electricity and clean cooking by 2030 in ways that are consistent with climate goals. Historically, several countries have made rapid economic gains through coal usage. However, there remains a window of opportunity for many poor countries and regions to "leapfrog" fossil fuel dependency by developing their energy systems based on renewables, given adequate international investment and knowledge transfer.
Integrating variable energy sources
To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems require flexibility. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly.
There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on daily and seasonal scales: there is more wind during the night and in winter, when solar energy production is low. Linking different geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, matching demand to the times when variable energy production is highest. With grid energy storage, energy produced in excess can be released when needed. Further flexibility could be provided by sector coupling, that is, coupling the electricity sector to the heat and mobility sectors via power-to-heat systems and electric vehicles.
Building overcapacity for wind and solar generation can help ensure that enough electricity is produced even during poor weather. In optimal weather, energy generation may have to be curtailed if excess electricity cannot be used or stored. The final demand-supply mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas.
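The interplay of storage, curtailment and dispatchable backup can be illustrated with a toy hourly balance. A minimal Python sketch; every number is invented for illustration:

# Each hour: store any surplus, discharge storage on deficit, curtail what
# cannot be stored, and call on dispatchable backup last.
renewables = [5, 9, 12, 7, 3, 2]   # GW generated per hour (invented)
demand = [6, 6, 8, 8, 7, 6]        # GW demanded per hour (invented)
storage, capacity = 0.0, 4.0       # GWh stored now / storage limit (invented)

for gen, load in zip(renewables, demand):
    surplus = gen - load
    if surplus >= 0:
        stored = min(surplus, capacity - storage)
        storage += stored
        print(f"curtailed {surplus - stored:.1f} GW")
    else:
        discharged = min(-surplus, storage)
        storage -= discharged
        print(f"backup supplied {-surplus - discharged:.1f} GW")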
Energy storage
Energy storage helps overcome barriers to intermittent renewable energy and is an important aspect of a sustainable energy system. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, especially lithium-ion batteries, are also deployed widely. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons.
Costs of utility-scale batteries in the US have fallen by around 70% since 2015; however, the cost and low energy density of batteries make them impractical for the very large energy storage needed to balance inter-seasonal variations in energy production. Pumped hydro storage and power-to-gas (converting electricity to gas and back) with capacity for multi-month usage have been implemented in some locations.
Electrification
Compared to the rest of the energy system, emissions can be reduced much faster in the electricity sector. As of 2019, 37% of global electricity is produced from low-carbon sources (renewables and nuclear energy). Fossil fuels, primarily coal, produce the rest of the electricity supply. One of the easiest and fastest ways to reduce greenhouse gas emissions is to phase out coal-fired power plants and increase renewable electricity generation.
Climate change mitigation pathways envision extensive electrification—the use of electricity as a substitute for the direct burning of fossil fuels for heating buildings and for transport. Ambitious climate policy would see the share of energy consumed as electricity double by 2050, from 20% in 2020.
One of the challenges in providing universal access to electricity is distributing power to rural areas. Off-grid and mini-grid systems based on renewable energy, such as small solar PV installations that generate and store enough electricity for a village, are important solutions. Wider access to reliable electricity would lead to less use of kerosene lighting and diesel generators, which are currently common in the developing world.
Infrastructure for generating and storing renewable electricity requires minerals and metals, such as cobalt and lithium for batteries and copper for solar panels. Recycling can meet some of this demand if product lifecycles are well designed; however, achieving net zero emissions would still require major increases in mining for 17 types of metals and minerals. A small group of countries or companies sometimes dominates the markets for these commodities, raising geopolitical concerns. Most of the world's cobalt, for instance, is mined in the Democratic Republic of the Congo, a politically unstable region where mining is often associated with human rights risks. More diverse geographical sourcing may ensure a more flexible and less brittle supply chain.
Hydrogen
Hydrogen gas is widely discussed in the context of energy, as an energy carrier with the potential to reduce greenhouse gas emissions. This requires hydrogen to be produced cleanly and used in sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These applications include heavy industry and long-distance transport.
Hydrogen can be deployed as an energy source in fuel cells to produce electricity, or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced. Nearly all of the world's current supply of hydrogen is created from fossil fuels.
The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide. While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess, in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself.
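The underlying chemistry makes the scale of these emissions easier to see. Steam reforming and the subsequent water-gas shift give, in net terms:

\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \quad \text{(reforming)}
\mathrm{CO + H_2O \rightarrow CO_2 + H_2} \quad \text{(water-gas shift)}
\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2} \quad \text{(net)}

With molar masses of about 44 g for CO2 and 2 g for H2, the net reaction yields roughly 44/8, or about 5.5, tonnes of CO2 per tonne of hydrogen as a stoichiometric minimum; the 6.6–9.3 tonne range quoted above is higher because the process also burns fuel for heat.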
Electricity can be used to split water molecules, producing sustainable hydrogen provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS, and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. It can be further transformed into liquid fuels such as green ammonia and green methanol. Innovation in hydrogen electrolysers could make large-scale production of hydrogen from electricity more cost-competitive.
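In outline, the electrolysis reaction is:

\mathrm{2\,H_2O \xrightarrow{\ \text{electricity}\ } 2\,H_2 + O_2}

In round numbers (indicative assumptions, not universal constants), splitting water requires at least about 39 kWh of energy per kilogram of hydrogen (the fuel's higher heating value), and practical electrolysers typically draw around 50 kWh per kilogram, which is why cheap low-carbon electricity is the decisive cost factor.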
Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. For steelmaking, hydrogen can function as a clean energy carrier and simultaneously as a low-carbon catalyst replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping and aviation, and to a lesser extent heavy goods vehicles. For light-duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future.
Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle.
Energy usage technologies
Transport
Transport accounts for 14% of global greenhouse gas emissions, but there are multiple ways to make transport more sustainable. Public transport typically emits fewer greenhouse gases per passenger than personal vehicles, since trains and buses can carry many more passengers at once. Short-distance flights can be replaced by high-speed rail, which is more efficient, especially when electrified. Promoting non-motorised transport such as walking and cycling, particularly in cities, can make transport cleaner and healthier.
The energy efficiency of cars has increased over time, but shifting to electric vehicles is an important further step towards decarbonising transport and reducing air pollution; around a quarter of the world's energy-related carbon dioxide emissions still originate from the transport sector. A large proportion of traffic-related air pollution consists of particulate matter from road dust and the wearing-down of tyres and brake pads. Substantially reducing pollution from these non-tailpipe sources cannot be achieved by electrification; it requires measures such as making vehicles lighter and driving them less. Light-duty cars in particular are a prime candidate for decarbonisation using battery technology.
Long-distance freight transport and aviation are difficult sectors to electrify with current technologies, mostly because of the weight of batteries needed for long-distance travel, battery recharging times, and limited battery lifespans. Where available, freight transport by ship and rail is generally more sustainable than by air and by road. Hydrogen vehicles may be an option for larger vehicles such as lorries. Many of the techniques needed to lower emissions from shipping and aviation are still early in their development, with ammonia (produced from hydrogen) a promising candidate for shipping fuel. Aviation biofuel may be one of the better uses of bioenergy if emissions are captured and stored during manufacture of the fuel.
Buildings
Over one-third of energy use is in buildings and their construction. To heat buildings, alternatives to burning fossil fuels and biomass include electrification through heat pumps or electric heaters, geothermal energy, central solar heating, reuse of waste heat, and seasonal thermal energy storage. Heat pumps provide both heat and air conditioning through a single appliance. The IEA estimates heat pumps could provide over 90% of space and water heating requirements globally.
A highly efficient way to heat buildings is through district heating, in which heat is generated in a centralised location and then distributed to multiple buildings through insulated pipes. Traditionally, most district heating systems have used fossil fuels, but modern and cold district heating systems are designed to use high shares of renewable energy.

Cooling of buildings can be made more efficient through passive building design, planning that minimises the urban heat island effect, and district cooling systems that cool multiple buildings with piped cold water. Air conditioning requires large amounts of electricity and is not always affordable for poorer households. Some air conditioning units still use refrigerants that are greenhouse gases, as some countries have not ratified the Kigali Amendment to only use climate-friendly refrigerants.
Cooking
In developing countries where populations suffer from energy poverty, polluting fuels such as wood or animal dung are often used for cooking. Cooking with these fuels is generally unsustainable, because they release harmful smoke and because harvesting wood can lead to forest degradation. The universal adoption of clean cooking facilities, which are already ubiquitous in rich countries, would dramatically improve health and have minimal negative effects on climate. Clean cooking facilities, i.e. cooking facilities that produce less indoor soot, typically use natural gas, liquefied petroleum gas (both of which consume oxygen and produce carbon dioxide) or electricity as the energy source; biogas systems are a promising alternative in some contexts. Improved cookstoves that burn biomass more efficiently than traditional stoves are an interim solution where transitioning to clean cooking systems is difficult.
Industry
Over one-third of energy use is by industry. Most of that energy is deployed in thermal processes: generating heat, drying, and refrigeration. The share of renewable energy in industry was 14.5% in 2017—mostly low-temperature heat supplied by bioenergy and electricity. The most energy-intensive activities in industry have the lowest shares of renewable energy, as they face limitations in generating heat at temperatures over 200 °C.
For some industrial processes, commercialisation of technologies that have not yet been built or operated at full scale will be needed to eliminate greenhouse gas emissions. Steelmaking, for instance, is difficult to electrify because it traditionally uses coke, which is derived from coal, both to create very high-temperature heat and as an ingredient in the steel itself. The production of plastic, cement, and fertilisers also requires significant amounts of energy, with limited possibilities available to decarbonise. A switch to a circular economy would make industry more sustainable as it involves recycling more and thereby using less energy compared to investing energy to mine and refine new raw materials.
Government policies
Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality simultaneously, and in many cases can also increase energy security and lessen the financial burden of using energy.
Environmental regulations have been used since the 1970s to promote more sustainable use of energy. Some governments have committed to dates for phasing out coal-fired power plants and ending new fossil fuel exploration. Governments can require that new cars produce zero emissions, or that new buildings be heated by electricity instead of gas. Renewable portfolio standards in several countries require utilities to increase the percentage of electricity they generate from renewable sources.
Governments can accelerate energy system transformation by leading the development of infrastructure such as long-distance electrical transmission lines, smart grids, and hydrogen pipelines. In transport, appropriate infrastructure and incentives can make travel more efficient and less car-dependent. Urban planning that discourages sprawl can reduce energy use in local transport and buildings while enhancing quality of life. Government-funded research, procurement, and incentive policies have historically been critical to the development and maturation of clean energy technologies, such as solar and lithium batteries. In the IEA's scenario for a net zero-emission energy system by 2050, public funding is rapidly mobilised to bring a range of newer technologies to the demonstration phase and to encourage deployment.
Carbon pricing (such as a tax on emissions) gives industries and consumers an incentive to reduce emissions while letting them choose how to do so. For example, they can shift to low-emission energy sources, improve energy efficiency, or reduce their use of energy-intensive products and services. Carbon pricing has encountered strong political pushback in some jurisdictions, whereas energy-specific policies tend to be politically safer. Most studies indicate that to limit global warming to 1.5°C, carbon pricing would need to be complemented by stringent energy-specific policies.
As of 2019, the price of carbon in most regions is too low to achieve the goals of the Paris Agreement. Carbon taxes provide a source of revenue that can be used to lower other taxes or help lower-income households afford higher energy costs. Some governments, such as the EU and the UK, are exploring the use of carbon border adjustments. These place tariffs on imports from countries with less stringent climate policies, to ensure that industries subject to internal carbon prices remain competitive.
The scale and pace of policy reforms that have been initiated as of 2020 are far less than needed to fulfil the climate goals of the Paris Agreement. In addition to domestic policies, greater international cooperation is required to accelerate innovation and to assist poorer countries in establishing a sustainable path to full energy access.
Countries may support renewables to create jobs. The International Labour Organization estimates that efforts to limit global warming to 2 °C would result in net job creation in most sectors of the economy. It predicts that 24 million new jobs would be created by 2030 in areas such as renewable electricity generation, improving energy-efficiency in buildings, and the transition to electric vehicles. Six million jobs would be lost, in sectors such as mining and fossil fuels. Governments can make the transition to sustainable energy more politically and socially feasible by ensuring a just transition for workers and regions that depend on the fossil fuel industry, to ensure they have alternative economic opportunities.
Finance
Raising enough money for innovation and investment is a prerequisite for the energy transition. The IPCC estimates that to limit global warming to 1.5 °C, US$2.4 trillion would need to be invested in the energy system each year between 2016 and 2035. Most studies project that these costs, equivalent to 2.5% of world GDP, would be small compared to the economic and health benefits. Average annual investment in low-carbon energy technologies and energy efficiency would need to be six times more by 2050 compared to 2015. Underfunding is particularly acute in the least developed countries, which are not attractive to the private sector.
The United Nations Framework Convention on Climate Change estimates that climate financing totalled $681 billion in 2016. Most of this is private-sector investment in renewable energy deployment, public-sector investment in sustainable transport, and private-sector investment in energy efficiency. The Paris Agreement includes a pledge of an extra $100 billion per year from developed countries to poor countries, to fund climate change mitigation and adaptation. This goal has not been met, and measurement of progress has been hampered by unclear accounting rules. If energy-intensive businesses such as chemicals, fertilizers, ceramics, steel, and non-ferrous metals invest significantly in research and development, hydrogen's usage in industry might amount to between 5% and 20% of all energy used.
Fossil fuel funding and subsidies are a significant barrier to the energy transition. Direct global fossil fuel subsidies were $319 billion in 2017. This rises to $5.2 trillion when indirect costs are priced in, like the effects of air pollution. Ending these could lead to a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Funding for clean energy has been largely unaffected by the COVID-19 pandemic, and pandemic-related economic stimulus packages offer possibilities for a green recovery.
References
Sources
External links
Climate change mitigation
Climate change policy
Emissions reduction
Energy economics
Environmental impact of the energy industry
Sustainable development | 0.785852 | 0.996801 | 0.783338 |
Sustainability measurement | Sustainability measurement is a set of frameworks or indicators used to measure how sustainable something is. This includes processes, products, services and businesses. Sustainability is difficult to quantify. It may even be impossible to measure as there is no fixed definition. To measure sustainability, frameworks and indicators consider environmental, social and economic domains. The metrics vary by use case and are still evolving. They include indicators, benchmarks and audits. They include sustainability standards and certification systems like Fairtrade and Organic. They also involve indices and accounting. They can include assessment, appraisal and other reporting systems. The metrics are used over a wide range of spatial and temporal scales. For organizations, sustainability measures include corporate sustainability reporting and Triple Bottom Line accounting. For countries, they include estimates of the quality of sustainability governance or quality of life measures, or environmental assessments like the Environmental Sustainability Index and Environmental Performance Index. Some methods let us track sustainable development. These include the UN Human Development Index and ecological footprints.
Two related concepts for sustainability measurement are planetary boundaries and ecological footprint. If the boundaries are not crossed and the ecological footprint does not exceed the carrying capacity of the biosphere, the mode of life can be regarded as sustainable.
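Expressed compactly (a restatement of the condition above, not a formal standard), with EF the aggregate ecological footprint and BC the biocapacity of the biosphere, both in global hectares:

\text{sustainable mode of life} \iff EF \le BC \quad \text{(with no planetary boundary crossed)}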
A set of well-defined and harmonized indicators can help to make sustainability tangible. Those indicators are expected to be identified and adjusted through empirical observation (trial and error). The most common critiques relate to issues such as data quality, comparability, objective function and the necessary resources. A more general criticism, however, comes from the project management community: "How can a sustainable development be achieved at global level if we cannot monitor it in any single project?"
Sustainability need and framework
Sustainable development has become the primary yardstick of improvement for industries and is being integrated into effective government and business strategies. The needs for sustainability measurement include improving operations, benchmarking performance, tracking progress and evaluating processes, among others. For the purposes of building sustainability indicators, frameworks can be developed in the following steps (a code sketch follows the list):
Defining the system – A proper and definite system is defined, and a system boundary is drawn for further analysis.
Elements of the system – All inputs and outputs of materials, emissions, energy and other auxiliary elements are analysed. The working conditions, process parameters and characteristics are defined in this step.
Indicator selection – The indicators to be measured are selected. These form the metric for the system, which is analysed in the subsequent steps.
Assessment and measurement – Appropriate assessment tools are used, and tests or experiments are performed for the pre-defined indicators, yielding a value for each indicator.
Analysis and review of the results – Once the results have been obtained, they are analysed and interpreted, and tools are used to improve and revise the processes present in the system.
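The steps above can be read as a small data-processing pipeline. The following is a minimal sketch in Python; the indicator names, values and targets are hypothetical, and real frameworks attach far richer metadata to each indicator.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str               # step 3: indicator selection
    value: float            # step 4: measured or assessed value
    target: float           # benchmark the value is compared against
    higher_is_better: bool  # direction of improvement

def score(ind: Indicator) -> float:
    """Step 5: normalise a measurement against its target onto [0, 1]."""
    if ind.target == 0 or ind.value == 0:
        return 0.0
    ratio = ind.value / ind.target
    s = ratio if ind.higher_is_better else 1.0 / ratio
    return max(0.0, min(1.0, s))

# Steps 1-2: the system boundary fixes which elements are tracked.
system = [
    Indicator("freshwater_use_m3", value=120.0, target=100.0, higher_is_better=False),
    Indicator("renewable_energy_share", value=0.35, target=0.50, higher_is_better=True),
]
for ind in system:
    print(f"{ind.name}: {score(ind):.2f}")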
Sustainability indicators and their function
The principal objective of sustainability indicators is to inform public policy-making as part of the process of sustainability governance. Sustainability indicators can provide information on any aspect of the interplay between the environment and socio-economic activities. Building strategic indicator sets generally deals with just a few simple questions: what is happening? (descriptive indicators), does it matter and are we reaching targets? (performance indicators), are we improving? (efficiency indicators), are measures working? (policy effectiveness indicators), and are we generally better off? (total welfare indicators).
The International Institute for Sustainable Development and the United Nations Conference on Trade and Development established the Committee on Sustainability Assessment (COSA) in 2006 to evaluate sustainability initiatives operating in agriculture and develop indicators for their measurable social, economic and environmental objectives.
One popular general framework, used by the European Environment Agency, is a slight modification of the Organisation for Economic Co-operation and Development's DPSIR system. This breaks up environmental impact into five stages: social and economic developments (consumption and production) (D)rive or initiate environmental (P)ressures which, in turn, produce a change in the (S)tate of the environment, which leads to (I)mpacts of various kinds. Societal (R)esponses (policy guided by sustainability indicators) can be introduced at any stage of this sequence of events.
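The DPSIR chain can be pictured as an ordered sequence of stages, each monitored by its own indicators. A minimal sketch follows; the example indicators attached to each stage are hypothetical.

# DPSIR causal chain with one illustrative (hypothetical) indicator per stage.
DPSIR = [
    ("Driver",   "social and economic developments", "car ownership per capita"),
    ("Pressure", "environmental pressures",          "NOx emissions per year"),
    ("State",    "state of the environment",         "urban NO2 concentration"),
    ("Impact",   "impacts of various kinds",         "respiratory illness incidence"),
    ("Response", "societal responses (policy)",      "low-emission-zone coverage"),
]

for stage, meaning, example in DPSIR:
    print(f"{stage:<8} -> {meaning} (example indicator: {example})")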
Politics
A study concluded that social indicators and, therefore, sustainable development indicators, are scientific constructs whose principal objective is to inform public policy-making. The International Institute for Sustainable Development has similarly developed a political policy framework, linked to a sustainability index for establishing measurable entities and metrics. The framework consists of six core areas:
International trade and investment
Economic policy
Climate change and energy
Measurement and assessment
Natural resource management
Communication technologies.
The United Nations Global Compact Cities Programme has defined sustainable political development in a way that broadens the usual definition beyond states and governance. The political is defined as the domain of practices and meanings associated with basic issues of social power as they pertain to the organisation, authorisation, legitimation and regulation of a social life held in common. This definition is in accord with the view that political change is important for responding to economic, ecological and cultural challenges. It also means that the politics of economic change can be addressed. They have listed seven subdomains of the domain of politics:
Organization and governance
Law and justice
Communication and critique
Representation and negotiation
Security and accord
Dialogue and reconciliation
Ethics and accountability
Metrics at the global scale
There are numerous indicators which could be used as a basis for sustainability measurement. A few commonly used indicators are:
Environmental sustainability indicators:
Global warming potential
Acidification potential
Ozone depletion potential
Aerosol optical depth
Eutrophication potential
Ionizing radiation potential
Photochemical ozone potential
Waste treatment
Freshwater use
Energy resources use
Level of biodiversity
Economic indicators:
Gross domestic product
Trade balance
Local government income
Profit, value and tax
Investments
Social indicators:
Employment generated
Equity
Health and safety
Education
Housing/living conditions
Community cohesion
Social security
Due to the large number of indicators that could be used for sustainability measurement, proper assessment and monitoring are required. To bring order to the selection of metrics, specific organizations have been set up that group the metrics under different categories and define the methodology for implementing them in measurement. They provide modelling techniques and indexes for comparing measurements, and have methods for converting scientific measurement results into easy-to-understand terms.
United Nations indicators
The United Nations has developed extensive sustainability measurement tools in relation to sustainable development as well as a System of Integrated Environmental and Economic Accounting.
The UN Commission on Sustainable Development (CSD) has published a list of 140 indicators covering the environmental, social, economic and institutional aspects of sustainable development.
Benchmarks, indicators, indexes, auditing etc.
In the last couple of decades, there has arisen a crowded toolbox of quantitative methods used to assess sustainability, including measures of resource use like life cycle assessment, measures of consumption like the ecological footprint, and measurements of the quality of environmental governance like the Environmental Performance Index. The following is a list of quantitative "tools" used by sustainability scientists; the different categories are for convenience only, as defining criteria will intergrade. It would be too difficult to list all the methods available at different levels of organization, so those listed here are at the global level only.
Benchmarks
A benchmark is a point of reference for a measurement. Once a benchmark is established it is possible to assess trends and measure progress. Baseline global data on a range of sustainability parameters is available in the list of global sustainability statistics.
Indices
A sustainability index is an aggregate sustainability indicator that combines multiple sources of data. There is a Consultative Group on Sustainable Development Indices. Widely used indices include the following; a sketch of how such composites are typically aggregated follows the list.
Air quality index
Child Development Index
Corruption Perceptions Index
Democracy Index
Environmental Performance Index
Energy Sustainability Index
Education Index
Environmental Sustainability Index
Environmental Vulnerability Index
GDP per capita
Gini coefficient
Gender Parity Index
Gender-related Development Index
Gender Empowerment Measure
Gross national happiness
Genuine Progress Indicator (formerly Index of Sustainable Economic Welfare)
Green Score City Index
Gross National Product
Happy Planet Index
Human Development Index (see List of countries by HDI)
Legatum Prosperity Index
Index of Sustainable Economic Welfare
Life Expectancy Index
Sustainable Governance Indicators. The Status Index ranks 30 OECD countries in terms of sustainable reform performance
Sustainable Society Index
SDEWES Index
Water Poverty Index
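As noted above, most of these indices share a common construction: sub-indicators are normalised onto a common scale and combined as a weighted average. A minimal sketch, with hypothetical weights and component scores:

def composite_index(components, weights):
    """Weighted arithmetic mean of already-normalised components (0-100 scale)."""
    assert len(components) == len(weights)
    return sum(c * w for c, w in zip(components, weights)) / sum(weights)

# Three hypothetical normalised sub-indicators: environment, social, economic.
print(round(composite_index([72.0, 55.0, 63.0], weights=[0.4, 0.3, 0.3]), 1))  # 64.2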
Metrics
Many environmental problems ultimately relate to the human effect on those global biogeochemical cycles that are critical to life. Over the last decade, monitoring these cycles has become a more urgent target for research:
water cycle
carbon cycle
phosphorus cycle
nitrogen cycle
sulphur cycle
oxygen cycle
Auditing
Sustainability auditing and reporting are used to evaluate the sustainability performance of a company, organization, or other entity using various performance indicators. Popular auditing procedures available at the global level include:
ISO 14000
ISO 14031
The Natural Step
Triple Bottom Line Accounting
Input-output analysis can be used for any level of organization with a financial budget. It relates environmental impact to expenditure by calculating the resource intensity of goods and services.
Reporting
Global Reporting Initiative modelling and monitoring procedures. Many of these are currently under development.
State of the Environment reporting provides general background information on the environment and is progressively including more indicators.
European sustainability
Accounting
Some accounting methods attempt to include environmental costs rather than treating them as externalities:
Green accounting
Sustainable value
Sustainability economics
Life cycle analysis
A life cycle analysis is often conducted when assessing the sustainability of a product or prototype. The choice of materials is heavily weighted by their longevity, renewability, and efficiency. These factors help ensure that researchers are conscious of community values that align with positive environmental, social, and economic impacts.
Resource metrics
Part of this process can relate to resource use, such as energy accounting, or to economic metrics or price-system values as compared with non-market economic potential, for understanding resource use.
An important task for resource theory (energy economics) is to develop methods to optimize resource conversion processes. These systems are described and analyzed by means of the methods of mathematics and the natural sciences. Human factors, however, have dominated the development of our perspective of the relationship between nature and society since at least the Industrial Revolution, and in particular, have influenced how we describe and measure the economic impacts of changes in resource quality. A balanced view of these issues requires an understanding of the physical framework in which all human ideas, institutions, and aspirations must operate.
Energy returned on energy invested
When oil production first began in the mid-nineteenth century, the largest oil fields recovered fifty barrels of oil for every barrel used in the extraction, transportation, and refining. This ratio is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Currently, between one and five barrels of oil are recovered for each barrel-equivalent of energy used in the recovery process. As the EROEI drops to one, or equivalently the net energy gain falls to zero, the oil production is no longer a net energy source. This happens long before the resource is physically exhausted.
Note that it is important to understand the distinction between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca Tar Sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods to measure EROEI are in debate.
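The EROEI arithmetic in the two paragraphs above is simple enough to state directly. A minimal sketch, with figures taken from the text (fifty-to-one for early fields, down to a few-to-one today):

def eroei(energy_out: float, energy_in: float) -> float:
    """Energy returned on energy invested; the source is net-positive while > 1."""
    return energy_out / energy_in

print(eroei(50.0, 1.0))  # early oil fields: ~50
print(eroei(5.0, 1.0))   # modern recovery: between 1 and 5
print(eroei(1.0, 1.0))   # net energy gain falls to zero at EROEI = 1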
Growth-based economic models
Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. M. King Hubbert believed:
Some economists describe the problem as uneconomic growth or a false economy. On the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil interruptions in 1973 and 1979 markedly slowed – but did not stop – the growth of world GDP.
Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation.
David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place the maximum U.S. population for a sustainable economy at 200 million in their study Food, Land, Population and the U.S. Economy. To achieve a sustainable economy, world population will have to be reduced by two-thirds, says the study. Without population reduction, this study predicts an agricultural crisis beginning in 2020, becoming critical c. 2050. The peaking of global oil along with the decline in regional natural gas production may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
Hubbert peaks
There is an active debate about the most suitable sustainability indicators; by adopting a thermodynamic approach through the concept of "exergy" and Hubbert peaks, it is possible to incorporate all of them into a single measure of resource depletion. The exergy analysis of minerals could constitute a universal and transparent tool for the management of the Earth's physical stock.
Hubbert peak can be used as a metric for sustainability and depletion of non-renewable resources. It can be used as a reference for many metrics for non-renewable resources such as:
Stagnating supplies
Rising prices
Individual country peaks
Decreasing discoveries
Finding and development costs
Spare capacity
Export capabilities of producing countries
System inertia and timing
Reserves-to-production ratio
Past history of depletion and optimism
Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.
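The Hubbert curve itself has a simple closed form: cumulative production follows a logistic function, and annual production is its derivative. A minimal sketch follows; the parameters (ultimately recoverable resource, steepness, peak year) are purely illustrative.

import math

def hubbert_production(t: float, urr: float, k: float, t_peak: float) -> float:
    """Annual production under a Hubbert (logistic-derivative) curve."""
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative parameters only: urr in gigabarrels, peak in 2005, k = 0.05.
for year in (1970, 2005, 2040):
    print(year, round(hubbert_production(year, urr=2000, k=0.05, t_peak=2005), 1))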
Natural gas
Doug Reynolds predicted in 2005 that the North American peak would occur in 2007. Bentley (p. 189) predicted a world "decline in conventional gas production from about 2020".
Coal
Peak coal is significantly further out than peak oil, but we can observe the example of anthracite in the US, a high-grade coal whose production peaked in the 1920s. Anthracite was studied by Hubbert, and matches a curve closely. Pennsylvania's coal production also matches Hubbert's curve closely, but this does not mean that coal in Pennsylvania is exhausted – far from it. If production in Pennsylvania returned to its all-time high, there are reserves for 190 years. Hubbert put recoverable coal reserves worldwide at 2500 × 10⁹ metric tons, peaking around 2150 (depending on usage).
More recent estimates suggest an earlier peak. Coal: Resources and Future Production (PDF, 630 KB), published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal will likely come earlier than the date of peak in the quantity of coal (tons per year) extracted, as the most energy-dense types of coal have been mined most extensively. A second study, The Future of Coal by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future". Work by David Rutledge of Caltech predicts that the total of world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed.
Finally, insofar as global peaks in oil and natural gas are expected anywhere from imminently to within decades at most, any annual increase in coal production (mining) to compensate for declines in oil or gas production would necessarily translate to an earlier peak date compared with a scenario in which annual coal production remains constant.
Fissionable materials
In a paper in 1956, after a review of US fissionable reserves, Hubbert notes of nuclear power:
Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, considerably extend the life of uranium reserves. Roscoe Bartlett claims
Caltech physics professor David Goodstein has stated that
Metals
Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that the peak production for metals such as copper, tin, lead, zinc and others would occur in the time frame of decades, and for iron in the time frame of two centuries, like coal. The 500% rise in the price of copper between 2003 and 2007 was attributed by some to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession. Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years. A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.
Phosphorus
Phosphorus supplies are essential to farming, and depletion of reserves is estimated at somewhere from 60 to 130 years. Individual countries' supplies vary widely; without a recycling initiative, America's supply is estimated at around 30 years. Phosphorus supplies affect total agricultural output, which in turn limits alternative fuels such as biodiesel and ethanol.
Peak water
Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced.
For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually center around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining mentioned above is also water resource intensive. The term fossil water is sometimes used to describe aquifers whose water is not being recharged.
Renewable resources
Fisheries: At least one researcher has attempted to perform Hubbert linearization (Hubbert curve) on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. Another example is the cod of the North Sea. The comparison of the cases of fisheries and of mineral extraction tells us that the human pressure on the environment is causing a wide range of resources to go through a depletion cycle which follows a Hubbert curve.
Sustainability gaps
Sustainability measurements and indicators are part of an ever-evolving process and have various gaps to be filled to achieve an integrated framework and model. The following are some of the breaks in continuity:
Global indicators – Due to differences in the social, economic, and environmental conditions of countries, each country has its own indicators and indexes to measure sustainability, which can lead to improper and varying interpretation at the global level. Hence, common indexes and measuring parameters would allow comparisons among countries. In agriculture, comparable indicators are already in use. Coffee and cocoa studies in twelve countries using common indicators are among the first to report insights from comparing across countries.
Policymaking – After the indicators are defined and analysis is done on the measurements from the indicators, a proper policymaking methodology can be set up to improve the results achieved. Policymaking would implement changes in the particular inventory list used for measuring, which could lead to better results.
Development of individual indicators – Value-based indicators can be developed to measure the efforts of every human being who is part of the ecosystem. This can affect policymaking, as policy is most effective when there is public participation.
Data collection – Due to a number of factors, including inappropriate methodology applied to data collection, the dynamics of change in data, lack of adequate time and improper frameworks in the analysis of data, measurements can quickly become outdated, inaccurate, and unpresentable. Data collection built up from the grass-roots level allows context-appropriate frameworks and regulations. A hierarchy of data collection starts from local zones, to the state level, to the national level, and finally contributes to the global level measurements. Data collected can be made easy to understand so that it can be correctly interpreted and presented through graphs, charts, and analysis bars.
Integration across academic disciplines – Sustainability involves the whole ecosystem and is intended to take a holistic approach. For this purpose, measurements are intended to involve data and knowledge from all academic backgrounds. Moreover, these disciplines and insights are intended to align with societal actions.
See also
Balanced scorecard
Carbon accounting
Corporate social responsibility
Embodied energy
Environmental audits
Glossary of environmental science
Green accounting
Helix of sustainability
List of sustainability topics
Outline of sustainability
Social accounting
Sustainability science
Sustainable Value (2008 book)
References
External links
Curated bibliography at IDEAS/RePEc
Sustainable development
Economics of sustainability
Development economics
Economic data
Environmental statistics
Cultural ecology
Cultural ecology is the study of human adaptations to social and physical environments. Human adaptation refers to both biological and cultural processes that enable a population to survive and reproduce within a given or changing environment. This may be carried out diachronically (examining entities that existed in different epochs), or synchronically (examining a present system and its components). The central argument is that the natural environment, in small scale or subsistence societies dependent in part upon it, is a major contributor to social organization and other human institutions. In the academic realm, when combined with the study of political economy (the study of economies as polities), it becomes political ecology, another academic subfield. It also helps to interrogate historical events like the Easter Island Syndrome.
History
Anthropologist Julian Steward (1902-1972) coined the term, envisioning cultural ecology as a methodology for understanding how humans adapt to such a wide variety of environments. In his Theory of Culture Change: The Methodology of Multilinear Evolution (1955), cultural ecology represents the "ways in which culture change is induced by adaptation to the environment". A key point is that any particular human adaptation is in part historically inherited and involves the technologies, practices, and knowledge that allow people to live in an environment. This means that while the environment influences the character of human adaptation, it does not determine it. In this way, Steward wisely separated the vagaries of the environment from the inner workings of a culture that occupied a given environment. Viewed over the long term, this means that environment and culture are on more or less separate evolutionary tracks and that the ability of one to influence the other is dependent on how each is structured. It is this assertion - that the physical and biological environment affects culture - that has proved controversial, because it implies an element of environmental determinism over human actions, which some social scientists find problematic, particularly those writing from a Marxist perspective. Cultural ecology recognizes that ecological locale plays a significant role in shaping the cultures of a region.
Steward's method was to:
Document the technologies and methods used to exploit the environment to get a living from it.
Look at patterns of human behavior/culture associated with using the environment.
Assess how much these patterns of behavior influenced other aspects of culture (e.g., how, in a drought-prone region, great concern over rainfall patterns meant this became central to everyday life, and led to the development of a religious belief system in which rainfall and water figured very strongly. This belief system may not appear in a society where good rainfall for crops can be taken for granted, or where irrigation was practiced).
Steward's concept of cultural ecology became widespread among anthropologists and archaeologists of the mid-20th century, though they would later be critiqued for their environmental determinism. Cultural ecology was one of the central tenets and driving factors in the development of processual archaeology in the 1960s, as archaeologists understood cultural change through the framework of technology and its effects on environmental adaptation.
In anthropology
Cultural ecology as developed by Steward is a major subdiscipline of anthropology. It derives from the work of Franz Boas and has branched out to cover a number of aspects of human society, in particular the distribution of wealth and power in a society, and how that affects such behaviour as hoarding or gifting (e.g. the tradition of the potlatch on the Northwest North American coast).
As transdisciplinary project
One 2000s-era conception of cultural ecology is as a general theory that regards ecology as a paradigm not only for the natural and human sciences, but for cultural studies as well. In his Die Ökologie des Wissens (The Ecology of Knowledge), Peter Finke explains that this theory brings together the various cultures of knowledge that have evolved in history, and that have been separated into more and more specialized disciplines and subdisciplines in the evolution of modern science (Finke 2005). In this view, cultural ecology considers the sphere of human culture not as separate from but as interdependent with and transfused by ecological processes and natural energy cycles. At the same time, it recognizes the relative independence and self-reflexive dynamics of cultural processes. As the dependency of culture on nature, and the ineradicable presence of nature in culture, are gaining interdisciplinary attention, the difference between cultural evolution and natural evolution is increasingly acknowledged by cultural ecologists. Rather than genetic laws, information and communication have become major driving forces of cultural evolution (see Finke 2006, 2007). Thus, causal deterministic laws do not apply to culture in a strict sense, but there are nevertheless productive analogies that can be drawn between ecological and cultural processes.
Gregory Bateson was the first to draw such analogies in his project of an Ecology of Mind (Bateson 1973), which was based on general principles of complex dynamic life processes, e.g. the concept of feedback loops, which he saw as operating both between the mind and the world and within the mind itself. Bateson thinks of the mind neither as an autonomous metaphysical force nor as a mere neurological function of the brain, but as a "dehierarchized concept of a mutual dependency between the (human) organism and its (natural) environment, subject and object, culture and nature", and thus as "a synonym for a cybernetic system of information circuits that are relevant for the survival of the species." (Gersdorf/ Mayer 2005: 9).
Finke fuses these ideas with concepts from systems theory. He describes the various sections and subsystems of society as 'cultural ecosystems' with their own processes of production, consumption, and reduction of energy (physical as well as psychic energy). This also applies to the cultural ecosystems of art and of literature, which follow their own internal forces of selection and self-renewal, but also have an important function within the cultural system as a whole (see next section).
In literary studies
The interrelatedness between culture and nature has been a special focus of literary culture from its archaic beginnings in myth, ritual, and oral story-telling, in legends and fairy tales, and in the genres of pastoral literature and nature poetry. Important texts in this tradition include the stories of mutual transformations between human and nonhuman life, most famously collected in Ovid's Metamorphoses, which became a highly influential text throughout literary history and across different cultures. This attention to culture-nature interaction became especially prominent in the era of romanticism, but continues to be characteristic of literary stagings of human experience up to the present.
The mutual opening and symbolic reconnection of culture and nature, mind and body, human and nonhuman life in a holistic and yet radically pluralistic way seems to be one significant mode in which literature functions and in which literary knowledge is produced. From this perspective, literature can itself be described as the symbolic medium of a particularly powerful form of "cultural ecology" (Zapf 2002). Literary texts have staged and explored, in ever new scenarios, the complex feedback relationship of prevailing cultural systems with the needs and manifestations of human and nonhuman "nature." From this paradoxical act of creative regression they have derived their specific power of innovation and cultural self-renewal.
German ecocritic Hubert Zapf argues that literature draws its cognitive and creative potential from a threefold dynamics in its relationship to the larger cultural system: as a "cultural-critical metadiscourse," an "imaginative counterdiscourse," and a "reintegrative interdiscourse" (Zapf 2001, 2002). It is a textual form which breaks up ossified social structures and ideologies, symbolically empowers the marginalized, and reconnects what is culturally separated. In that way, literature counteracts economic, political or pragmatic forms of interpreting and instrumentalizing human life, and breaks up one-dimensional views of the world and the self, opening them up towards their repressed or excluded other. Literature is thus, on the one hand, a sensorium for what goes wrong in a society, for the biophobic, life-paralyzing implications of one-sided forms of consciousness and civilizational uniformity, and it is, on the other hand, a medium of constant cultural self-renewal, in which neglected biophilic energies can find a symbolic space of expression and of (re-)integration into the larger ecology of cultural discourses. This approach has been applied and widened in volumes of essays by scholars from around the world (ed. Zapf 2008, 2016), as well as in a recent monograph (Zapf 2016). Similar approaches have also been developed in adjacent fields, such as film studies (Paalman 2011).
In geography
In geography, cultural ecology developed in response to the "landscape morphology" approach of Carl O. Sauer. Sauer's school was criticized for being unscientific and later for holding a "reified" or "superorganic" conception of culture. Cultural ecology applied ideas from ecology and systems theory to understand the adaptation of humans to their environment. These cultural ecologists focused on flows of energy and materials, examining how beliefs and institutions in a culture regulated its interchanges with the natural ecology that surrounded it. In this perspective humans were as much a part of the ecology as any other organism. Important practitioners of this form of cultural ecology include Karl Butzer and David Stoddart.
The second form of cultural ecology introduced decision theory from agricultural economics, particularly inspired by the works of Alexander Chayanov and Ester Boserup. These cultural ecologists were concerned with how human groups made decisions about how they use their natural environment. They were particularly concerned with the question of agricultural intensification, refining the competing models of Thomas Malthus and Boserup. Notable cultural ecologists in this second tradition include Harold Brookfield and Billie Lee Turner II. Starting in the 1980s, cultural ecology came under criticism from political ecology. Political ecologists charged that cultural ecology ignored the connections between the local-scale systems they studied and the global political economy. Today few geographers self-identify as cultural ecologists, but ideas from cultural ecology have been adopted and built on by political ecology, land change science, and sustainability science.
Conceptual views
Human species
Books about culture and ecology began to emerge in the 1950s and 1960s. One of the first to be published in the United Kingdom was The Human Species by a zoologist, Anthony Barnett. It came out in 1950, subtitled The Biology of Man, but was about a much narrower subset of topics. It dealt with the cultural bearing of some outstanding areas of environmental knowledge about health and disease, food, the sizes and quality of human populations, and the diversity of human types and their abilities. Barnett's view was that his selected areas of information "...are all topics on which knowledge is not only desirable, but for a twentieth-century adult, necessary". He went on to point out some of the concepts underpinning human ecology towards the social problems facing his readers in the 1950s, as well as the assertion that human nature cannot change, what this statement could mean, and whether it is true. The third chapter deals in more detail with some aspects of human genetics.
Then come five chapters on the evolution of man, and the differences between groups of men (or races) and between individual men and women today in relation to population growth (the topic of 'human diversity'). Finally, there is a series of chapters on various aspects of human populations (the topic of "life and death"). Like other animals man must, in order to survive, overcome the dangers of starvation and infection; at the same time he must be fertile. Four chapters therefore deal with food, disease and the growth and decline of human populations.
Barnett anticipated that his personal scheme might be criticized on the grounds that it omits an account of those human characteristics, which distinguish humankind most clearly, and sharply from other animals. That is to say, the point might be expressed by saying that human behaviour is ignored; or some might say that human psychology is left out, or that no account is taken of the human mind. He justified his limited view, not because little importance was attached to what was left out, but because the omitted topics were so important that each needed a book of similar size even for a summary account. In other words, the author was embedded in a world of academic specialists and therefore somewhat worried about taking a partial conceptual, and idiosyncratic view of the zoology of Homo sapiens.
Ecology of man
Moves to produce prescriptions for adjusting human culture to ecological realities were also afoot in North America. In his 1957 Condon Lecture at the University of Oregon, entitled "The Ecology of Man", American ecologist Paul Sears called for "serious attention to the ecology of man" and demanded "its skillful application to human affairs". Sears was one of the few prominent ecologists to successfully write for popular audiences. Sears documents the mistakes American farmers made in creating conditions that led to the disastrous Dust Bowl. This book gave momentum to the soil conservation movement in the United States.
The "ecology of man" as a limiting factor which "should be respected", placing boundaries around the extent to which the human species can be manipulated, is reflected in the views of Popes Benedict XVI, and Francis.
Impact on nature
From the same period came J. A. Lauwerys' Man's Impact on Nature, which was part of a series on 'Interdependence in Nature' published in 1969. Both Russel's and Lauwerys' books were about cultural ecology, although not titled as such. People still had difficulty in escaping from their labels. Even Beginnings and Blunders, produced in 1970 by the polymath zoologist Lancelot Hogben, with the subtitle Before Science Began, clung to anthropology as a traditional reference point. However, its slant makes it clear that 'cultural ecology' would be a more apt title to cover his wide-ranging description of how early societies adapted to their environment with tools, technologies and social groupings. In 1973 the physicist Jacob Bronowski produced The Ascent of Man, which summarised a magnificent thirteen-part BBC television series about all the ways in which humans have moulded the Earth and its future.
Changing the Earth
By the 1980s the human ecological-functional view had prevailed. It had become a conventional way to present scientific concepts in the ecological perspective of human animals dominating an overpopulated world, with the practical aim of producing a greener culture. This is exemplified by I. G. Simmons' book Changing the Face of the Earth, with its telling subtitle "Culture, Environment, History", which was published in 1989. Simmons was a geographer, and his book was a tribute to the influence of W. L. Thomas' edited collection, Man's Role in Changing the Face of the Earth, which came out in 1956.
Simmons' book was one of many interdisciplinary culture/environment publications of the 1970s and 1980s, which triggered a crisis in geography with regard to its subject matter, academic sub-divisions, and boundaries. This was resolved by officially adopting conceptual frameworks as an approach to facilitate the organisation of research and teaching that cuts across old subject divisions. Cultural ecology is in fact a conceptual arena that has, over the past six decades, allowed sociologists, physicists, zoologists and geographers to enter common intellectual ground from the sidelines of their specialist subjects.
21st Century
In the first decade of the 21st century, publications appeared dealing with the ways in which humans can develop a more acceptable cultural relationship with the environment. An example is sacred ecology, a sub-topic of cultural ecology, developed by Fikret Berkes in 1999. It seeks lessons from traditional ways of life in Northern Canada to shape a new environmental perception for urban dwellers. This particular conceptualisation of people and environment comes from various cultural levels of local knowledge about species and place, resource management systems using local experience, social institutions with their rules and codes of behaviour, and a world view through religion, ethics and broadly defined belief systems.
Despite the differences in information concepts, all of the publications carry the message that culture is a balancing act between the mindset devoted to the exploitation of natural resources and that which conserves them. Perhaps the best model of cultural ecology in this context is, paradoxically, the mismatch of culture and ecology that occurred when Europeans suppressed the age-old native methods of land use and tried to settle European farming cultures on soils manifestly incapable of supporting them. There is a sacred ecology associated with environmental awareness, and the task of cultural ecology is to inspire urban dwellers to develop a more acceptable sustainable cultural relationship with the environment that supports them.
Educational framework
Cultural Core
To further develop the field of Cultural Ecology, Julian Steward developed a framework which he referred to as the cultural core. This framework, a “constellation” as Steward describes it, organizes the fundamental features of a culture that are most closely related to subsistence and economic arrangements.
At the core of this framework is the fundamental human-environment relationship as it pertains to subsistence. Outside of the core, in the second layer, lie the innumerable direct features of this relationship – tools, knowledge, economics, labor, etc. Outside of that second, directly correlated layer is the less direct but still influential layer, typically associated with larger historical, institutional, political or social factors.
According to Steward, the secondary features are determined greatly by the “cultural-historical factors” and they contribute to building the uniqueness of the outward appearance of cultures when compared to others with similar cores. The field of Cultural Ecology is able to utilize the cultural core framework as a tool for better determining and understanding the features that are most closely involved in the utilization of the environment by humans and cultural groups.
See also
Cultural materialism
Dual inheritance theory
Ecological anthropology
Environmental history
Environmental racism
Human behavioral ecology
Political ecology
Sexecology
References
Sources
Barnett, A. 1950 The Human Species: MacGibbon and Kee, London.
Bateson, G. 1973 Steps to an Ecology of Mind: Paladin, London
Berkes, F. 1999 Sacred ecology: traditional ecological knowledge and resource management. Taylor and Francis.
Bronowski, J. 1973 The Ascent of Man, BBC Publications, London
Finke, P. 2005 Die Ökologie des Wissens. Exkursionen in eine gefährdete Landschaft: Alber, Freiburg and Munich
Finke, P. 2006 "Die Evolutionäre Kulturökologie: Hintergründe, Prinzipien und Perspektiven einer neuen Theorie der Kultur", in: Anglia 124.1, 2006, p. 175-217
Finke, P. 2013 "A Brief Outline of Evolutionary Cultural Ecology," in Traditions of Systems Theory: Major Figures and Contemporary Developments, ed. Darrell P. Arnold, New York: Routledge.
Frake, Charles O. (1962) "Cultural Ecology and Ethnography" American Anthropologist 64 (1): 53–59. ISSN 0002-7294.
Gersdorf, C. and S. Mayer, eds. 2005 Natur – Kultur – Text: Beiträge zu Ökologie und Literaturwissenschaft: Winter, Heidelberg
Hamilton, G. 1947 History of the Homeland: George Allen and Unwin, London.
Hogben, L. 1970 Beginnings and Blunders: Heinemann, London
Hornborg, Alf; Cultural Ecology
Lauwerys, J.A. 1969 Man's Impact on Nature: Aldus Books, London
Maass, Petra (2008): The Cultural Context of Biodiversity Conservation. Seen and Unseen Dimensions of Indigenous Knowledge among Q'eqchi' Communities in Guatemala. Göttinger Beiträge zur Ethnologie - Band 2, Göttingen: Göttinger Universitätsverlag online-version
Paalman, F. 2011 Cinematic Rotterdam: The Times and Tides of a Modern City: 010 Publishers, Rotterdam.
Russel, W.M.S. 1967 Man Nature and History: Aldus Books, London
Simmons, I.G. 1989 Changing the Face of the Earth: Blackwell, Oxford
Steward, Julian H. 1972 Theory of Culture Change: The Methodology of Multilinear Evolution: University of Illinois Press
Technical Report PNW-GTR-369. 1996. Defining social responsibility in ecosystem management. A workshop proceedings. United States Department of Agriculture Forest Service.
Turner, B. L., II 2002. "Contested identities: human-environment geography and disciplinary implications in a restructuring academy." Annals of the Association of American Geographers 92(1): 52–74.
Worster, D. 1977 Nature’s Economy. Cambridge University Press
Zapf, H. 2001 "Literature as Cultural Ecology: Notes Towards a Functional Theory of Imaginative Texts, with Examples from American Literature", in: REAL: Yearbook of Research in English and American Literature 17, 2001, p. 85-100.
Zapf, H. 2002 Literatur als kulturelle Ökologie. Zur kulturellen Funktion imaginativer Texte an Beispielen des amerikanischen Romans: Niemeyer, Tübingen
Zapf, H. 2008 Kulturökologie und Literatur: Beiträge zu einem transdisziplinären Paradigma der Literaturwissenschaft (Cultural Ecology and Literature: Contributions on a Transdisciplinary Paradigm of Literary Studies): Winter, Heidelberg
Zapf, H. 2016 Literature as Cultural Ecology: Sustainable Texts: Bloomsbury Academic, London
Zapf, H. 2016 ed. Handbook of Ecocriticism and Cultural Ecology: De Gruyter, Berlin
External links
Cultural and Political Ecology Specialty Group of the Association of American Geographers. Archive of newsletters, officers, award and honor recipients, as well as other resources associated with this community of scholars.
Notes on the development of cultural ecology with an excellent reference list: Catherine Marquette
Cultural ecology: an ideational scaffold for environmental education: an outcome of the EC LIFE ENVIRONMENT programme
Cultural anthropology
Ecology terminology
Environmental humanities
Human geography
Interdisciplinary historical research
Life skills
Life skills are abilities for adaptive and positive behavior that enable humans to deal effectively with the demands and challenges of life. This concept is also termed psychosocial competency. The subject varies greatly depending on social norms and community expectations, but skills that function for well-being and help individuals develop into active and productive members of their communities are considered life skills.
Enumeration and categorization
The UNICEF Evaluation Office suggests that "there is no definitive list" of psychosocial skills; nevertheless, UNICEF enumerates psychosocial and interpersonal skills that are generally well-being oriented and essential alongside literacy and numeracy skills. Because its meaning changes from culture to culture and across life situations, it is considered an elastic concept. Still, UNICEF acknowledges the social and emotional life skills identified by the Collaborative for Academic, Social and Emotional Learning (CASEL). Life skills are a product of synthesis: many skills are developed simultaneously through practice, like humor, which allows a person to feel in control of a situation and make it more manageable in perspective. It allows the person to release fears, anger, and stress and achieve a better quality of life.
For example, decision-making often involves critical thinking ("what are my options?") and values clarification ("what is important to me?"), ("How do I feel about this?"). Ultimately, the interplay between the skills is what produces powerful behavioral outcomes, especially where this approach is supported by other strategies.
Life skills can vary from financial literacy, through substance-abuse prevention, to therapeutic techniques to deal with disabilities such as autism.
Core skills
The World Health Organization in 1999 identified the following core cross-cultural areas of life skills:
decision-making and problem-solving;
creative thinking (see also: lateral thinking) and critical thinking;
communication and interpersonal skills;
self-awareness and empathy;
assertiveness and equanimity; and
resilience and coping with emotions and coping with stress.
UNICEF listed similar skills and related categories in its 2012 report.
Life skills curricula designed for K-12 often emphasize communication and practical skills needed for successful independent living, as well as for developmental-disabilities/special-education students with an Individualized Education Program (IEP).
Various courses based on the WHO's list are being run with the support of UNFPA. In Madhya Pradesh, India, the programme is being run with the government to teach these skills through government schools.
Skills for work and life
Skills for work and life, known as technical and vocational education and training (TVET), comprise education, training and skills development relating to a wide range of occupational fields, production, services and livelihoods. TVET, as part of lifelong learning, can take place at secondary, post-secondary and tertiary levels, and includes work-based learning and continuing training and professional development which may lead to qualifications. TVET also includes a wide range of skills development opportunities attuned to national and local contexts. Learning to learn and the development of literacy and numeracy skills, transversal skills and citizenship skills are integral components of TVET.
Parenting: a venue of life skills nourishment
Life skills are often taught in the domain of parenting, either indirectly through the observation and experience of the child, or directly with the purpose of teaching a specific skill. Parenting itself can be considered a set of life skills which can be taught or come naturally to a person. Educating a person in skills for dealing with pregnancy and parenting can also coincide with additional life skills development for the child and enable the parents to guide their children in adulthood.
Many life skills programs are offered when traditional family structures and healthy relationships have broken down, whether due to parental lapses, divorce, psychological disorders or issues with the children (such as substance abuse or other risky behavior). For example, the International Labour Organization teaches life skills to ex-child laborers and at-risk children in Indonesia to help them avoid and recover from the worst forms of child labour.
Models: behavior prevention vs. positive development
While certain life skills programs focus on teaching the prevention of certain behaviors, these can be relatively ineffective. Based upon their research, the Family and Youth Services Bureau, a division of the U.S. Department of Health and Human Services, advocates the theory of positive youth development (PYD) as a replacement for the less effective prevention programs. PYD focuses on the strengths of an individual, as opposed to the older deficit models, which tend to focus on "potential" weaknesses that have yet to be shown. "..life skills education, have found to be an effective psychosocial intervention strategy for promoting positive social, and mental health of adolescents which plays an important role in all aspects such as strengthening coping strategies and developing self-confidence and emotional intelligence..."
See also
Sources
Further reading
People Skills & Self-Management (free online guide), Alliances for Psychosocial Advancements in Living: Communication Connections (APAL-CC)
Reaching Your Potential: Personal and Professional Development, 4th Edition
Life Skills: A Course in Applied Problem Solving. Saskatchewan NewStart Inc., First Ave and River Street East, Prince Albert, Saskatchewan, Canada.
References
Biosignature
A biosignature (sometimes called a chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life on a planet. Measurable attributes of life include its physical or chemical structures, its use of free energy, and the production of biomass and wastes.
The field of astrobiology uses biosignatures as evidence for the search for past or present extraterrestrial life.
Types
Biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically-formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments.
Atmospheric gases: Gases formed by metabolic processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether an observed feature is a true biosignature is complex. There are three criteria that a potential biosignature must meet to be considered viable for further research: reliability, survivability, and detectability.
Reliability
A biosignature must be able to dominate over all other processes that may produce similar physical, spectral, and chemical features. When investigating a potential biosignature, scientists must carefully consider all other possible origins of the biosignature in question. Many forms of life are known to mimic geochemical reactions. One of the theories on the origin of life involves molecules developing the ability to catalyse geochemical reactions to exploit the energy being released by them. These are some of the earliest known metabolisms (see methanogenesis). In such case, scientists might search for a disequilibrium in the geochemical cycle, which would point to a reaction happening more or less often than it should. A disequilibrium such as this could be interpreted as an indication of life.
Survivability
A biosignature must last long enough for a probe, telescope, or human to be able to detect it. A consequence of a biological organism's use of metabolic reactions for energy is the production of metabolic waste. In addition, the structure of an organism can be preserved as a fossil, and we know that some fossils on Earth are as old as 3.5 billion years. These byproducts can make excellent biosignatures since they provide direct evidence for life. However, in order to be a viable biosignature, a byproduct must subsequently remain intact so that scientists may discover it.
Detectability
A biosignature must be detectable with the latest technology to be relevant in scientific investigation. This seems obvious; however, there are many scenarios in which life may be present on a planet yet remain undetectable because of human-caused limitations.
False positives
Every possible biosignature is associated with its own set of unique false-positive mechanisms, or non-biological processes that can mimic the detectable feature of a biosignature. An important example is using oxygen as a biosignature. On Earth, the majority of life is centred around oxygen. It is a byproduct of photosynthesis and is subsequently used by other life forms to breathe. Oxygen is also readily detectable in spectra, with multiple bands across a relatively wide wavelength range; it therefore makes a very good biosignature. However, finding oxygen alone in a planet's atmosphere is not enough to confirm a biosignature because of the false-positive mechanisms associated with it. One possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of non-condensable gasses or if the planet loses a lot of water. Finding and distinguishing a biosignature from its potential false-positive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abiotic-biological degeneracy, if nature allows.
False negatives
Opposite to false positives, false negative biosignatures arise in a scenario where life may be present on another planet, but some processes on that planet make potential biosignatures undetectable. This is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres.
Human limitations
There are many ways in which humans may limit the viability of a potential biosignature. The resolution of a telescope becomes important when vetting certain false-positive mechanisms, and many current telescopes do not have the capabilities to observe at the resolution needed to investigate some of these. In addition, probes and telescopes are worked on by huge collaborations of scientists with varying interests. As a result, new probes and telescopes carry a variety of instruments that are a compromise to everyone's unique inputs. For a different type of scientist to detect something unrelated to biosignatures, a sacrifice may have to be made in the capability of an instrument to search for biosignatures.
General examples
Geomicrobiology
The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox-sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements).
For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 carbon atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. Another example is the presence of straight-chain lipids in the form of alkanes, alcohols, and fatty acids with 20–36 carbon atoms in soils or sediments; in peat deposits these are an indication of origin from the epicuticular wax of higher plants.
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct sizes, shapes, and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible.
Morphology
Another possible biosignature might be morphology, since the shape and size of certain objects may potentially indicate the presence of past or present life. For example, microscopic magnetite crystals in the Martian meteorite ALH84001 are one of the longest-debated of several potential biosignatures in that specimen, alongside tiny rock-like structures whose shapes resembled known bacteria and were therefore proposed as putative microbial fossils. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence, in addition to any morphological data, to support such extraordinary claims. Currently, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection". Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation.
Chemistry
No single compound will prove that life once existed. Rather, the evidence will be distinctive patterns in organic compounds showing a process of selection. For example, membrane lipids left behind by degraded cells will be concentrated, have a limited size range, and comprise an even number of carbons. Similarly, life uses only left-handed amino acids. Biosignatures need not be chemical, however; life can also be suggested by a distinctive magnetic biosignature.
Chemical biosignatures include any suite of complex organic compounds composed of carbon, hydrogen, and other elements or heteroatoms such as oxygen, nitrogen, and sulfur, which are found in crude oils, bitumen, and petroleum source rock, and which show a simplification in molecular structure relative to the parent organic molecules found in all living organisms. They are complex carbon-based molecules derived from formerly living organisms. Each biomarker is quite distinctive when compared to its counterparts, as the time required for organic matter to convert to crude oil is characteristic. Most biomarkers also usually have high molecular mass.
Some examples of biomarkers found in petroleum are pristane, triterpanes, steranes, phytane, and porphyrin. Such petroleum biomarkers form through the chemical transformation of biochemical compounds that serve as their main precursors; for instance, triterpenes derive from biochemical compounds found in land angiosperm plants. Because petroleum biomarkers occur only in small amounts in a reservoir or source rock, sensitive and selective analytical approaches are needed to detect them. The techniques typically used include gas chromatography and mass spectrometry.
Petroleum biomarkers are highly important in petroleum exploration, as they help indicate depositional settings and determine the geological properties of oils, such as their maturity and source material. They can also serve as indicators of age, which is why they are technically referred to as "chemical fossils". The ratio of pristane to phytane (Pr/Ph) is the geochemical parameter that makes petroleum biomarkers successful indicators of their depositional environments.
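As an illustration of how this ratio is read in practice, here is a minimal sketch applying commonly cited rule-of-thumb Pr/Ph cut-offs; the threshold values and the sample concentrations are illustrative assumptions, not fixed standards.

def classify_depositional_environment(pristane, phytane):
    """Interpret the pristane/phytane (Pr/Ph) ratio with rule-of-thumb cut-offs.

    Commonly cited guidelines: Pr/Ph < 1 suggests anoxic (reducing)
    deposition, Pr/Ph > 3 suggests oxic conditions with terrigenous
    organic input; intermediate values are ambiguous.
    """
    ratio = pristane / phytane
    if ratio < 1.0:
        setting = "anoxic (reducing) depositional environment"
    elif ratio > 3.0:
        setting = "oxic depositional environment with terrigenous input"
    else:
        setting = "intermediate or ambiguous redox conditions"
    return ratio, setting

# Hypothetical crude-oil extract (relative concentrations)
ratio, setting = classify_depositional_environment(pristane=4.2, phytane=1.5)
print(f"Pr/Ph = {ratio:.2f}: {setting}")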
Geologists and geochemists use biomarker traces found in crude oils and their related source rock to unravel the stratigraphic origin and migration patterns of presently existing petroleum deposits. The dispersion of biomarker molecules is also quite distinctive for each type of oil and its source; hence, they display unique fingerprints. Another factor that makes petroleum biomarkers preferable to other compounds is their high tolerance of environmental weathering and corrosion. Such biomarkers are very advantageous and often used in the detection of oil spillage in major waterways, and the same biomarkers can also be used to identify contamination in lubricant oils. However, biomarker analysis of untreated rock cuttings can be expected to produce misleading results, owing to potential hydrocarbon contamination and biodegradation in the rock samples.
Atmospheric
The atmospheric properties of exoplanets are of particular importance, as atmospheres provide the most likely observables for the near future, including habitability indicators and biosignatures. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth.
An exoplanet's color, or reflectance spectrum, can also be used as a biosignature because of pigments that are uniquely biological in origin, such as the pigments of phototrophic and photosynthetic life forms. Scientists use the Earth as seen from far away (see Pale Blue Dot) as a point of comparison for worlds observed outside of our solar system. Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths that may be detected by the new generation of space observatories under development.
Some scientists have reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. For example, the presence of oxygen and methane together could indicate the kind of extreme thermochemical disequilibrium generated by life. Two of the top 14,000 proposed atmospheric biosignatures are dimethyl sulfide and chloromethane. An alternative biosignature is the combination of methane and carbon dioxide.
The detection of phosphine in the atmosphere of Venus is being investigated as a possible biosignature.
Atmospheric disequilibrium
A disequilibrium in the abundance of gas species in an atmosphere can be interpreted as a biosignature. Life has greatly altered the atmosphere on Earth in a way that would be unlikely for any other processes to replicate. Therefore, a departure from equilibrium is evidence for a biosignature. For example, the abundance of methane in the Earth's atmosphere is orders of magnitude above the equilibrium value due to the constant methane flux that life on the surface emits. Depending on the host star, a disequilibrium in the methane abundance on another planet may indicate a biosignature.
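As a rough illustration of the scale of that disequilibrium, one can compare Earth's observed atmospheric methane burden with the burden sustained by surface flux times photochemical lifetime; the flux and lifetime below are approximate round-number literature values, used only for this sketch.

ATM_MASS_KG = 5.15e18        # total mass of Earth's atmosphere (kg)
M_CH4, M_AIR = 16.04, 28.97  # molar masses (g/mol)

mixing_ratio = 1.9e-6        # ~1.9 ppmv of methane today
burden_kg = ATM_MASS_KG * mixing_ratio * (M_CH4 / M_AIR)

flux_kg_per_yr = 5.5e11      # ~550 Tg/yr of surface emissions (approximate)
lifetime_yr = 9.5            # approximate photochemical lifetime

print(f"observed burden ~ {burden_kg / 1e9:.0f} Tg")                    # ~5,400 Tg
print(f"flux x lifetime ~ {flux_kg_per_yr * lifetime_yr / 1e9:.0f} Tg") # ~5,200 Tg
# Without the sustained surface flux, photochemistry would scrub this
# methane away within a few decades.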
Agnostic biosignatures
Because the only known form of life is that on Earth, the search for biosignatures is heavily influenced by the products that life produces on Earth. However, life that is different from life on Earth may still produce biosignatures that are detectable by humans, even though nothing is known about its specific biology. This form of biosignature is called an "agnostic biosignature" because it is independent of the form of life that produces it. It is widely agreed that all life, no matter how different from life on Earth, needs a source of energy to thrive, and this must involve some sort of chemical disequilibrium that can be exploited for metabolism. Geological processes are independent of life; if scientists can constrain the geology of another planet well enough, they also know what its particular geologic equilibrium should be, and a deviation from that equilibrium can be interpreted as an atmospheric disequilibrium and thus an agnostic biosignature.
Antibiosignatures
In the same way that detecting a biosignature would be a significant discovery about a planet, finding evidence that life is not present can also be an important discovery. Life relies on redox imbalances to metabolize available resources into energy. Evidence that nothing on a planet is taking advantage of the "free lunch" offered by an observed redox imbalance is called an antibiosignature.
Polyelectrolytes
The Polyelectrolyte theory of the gene is a proposed generic biosignature. In 2002, Steven A. Benner and Daniel Hutter proposed that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. Benner and others proposed methods for concentrating and analyzing these polyelectrolyte genetic biopolymers on Mars, Enceladus, and Europa.
Specific examples
Methane on Mars
The presence of methane in the atmosphere of Mars is an area of ongoing research and a highly contentious subject. Because of its tendency to be destroyed in the atmosphere by photochemistry, the presence of excess methane on a planet can indicate that there must be an active source. With life being the strongest source of methane on Earth, observing a disequilibrium in the methane abundance on another planet could be a viable biosignature.
Since 2004, there have been several detections of methane in the Mars atmosphere by a variety of instruments onboard orbiters and landers on the Martian surface, as well as by Earth-based telescopes. These missions reported values ranging from a 'background level' of 0.24 to 0.65 parts per billion by volume (p.p.b.v.) up to as much as 45 ± 10 p.p.b.v.
However, recent measurements using the ACS and NOMAD instruments on board the ESA-Roscosmos ExoMars Trace Gas Orbiter have failed to detect any methane over a range of latitudes and longitudes on both Martian hemispheres. These highly sensitive instruments were able to put an upper bound on the overall methane abundance at 0.05 p.p.b.v. This nondetection is a major contradiction to what was previously observed with less sensitive instruments and will remain a strong argument in the ongoing debate over the presence of methane in the Martian atmosphere.
Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars and its reported rapid variations in space and time; neither its fast appearance nor its fast disappearance can yet be explained. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-13 in methane could distinguish between a biogenic and a non-biogenic origin, similarly to the use of the δ13C standard for recognizing biogenic methane on Earth.
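For reference, δ13C expresses a sample's 13C/12C ratio relative to the Vienna Pee Dee Belemnite (VPDB) standard, in parts per thousand. A minimal sketch of the calculation follows; the sample ratio is hypothetical, and the interpretive comment reflects the commonly cited tendency of microbial methane to be strongly depleted in 13C.

VPDB_13C_12C = 0.0112372  # 13C/12C ratio of the VPDB standard

def delta13C(sample_13C_12C):
    """delta-13C in per mil relative to VPDB."""
    return (sample_13C_12C / VPDB_13C_12C - 1.0) * 1000.0

d = delta13C(0.0106)  # hypothetical, 13C-depleted methane sample
print(f"delta-13C = {d:.1f} per mil")  # ~ -56.7; microbial methane on Earth
                                       # typically falls well below ~ -50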
Martian atmosphere
The Martian atmosphere contains high abundances of photochemically produced CO and H2, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leaving a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can still be observed, scientists interpret this as evidence for an antibiosignature, and the concept has been used as an argument against life on Mars.
Missions inside the Solar System
Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined not only by the probability of life creating it but also by the improbability of non-biological (abiotic) processes producing it. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered requires proving that a possible biosignature was produced by the activities or remains of life. As with most scientific discoveries, discovery of a biosignature will require evidence building up until no other explanation exists.
Possible examples of a biosignature include complex organic molecules or structures whose formation is virtually unachievable in the absence of life:
Cellular and extracellular morphologies
Biomolecules in rocks
Bio-organic molecular structures
Chirality
Biogenic minerals
Biogenic isotope patterns in minerals and organic compounds
Atmospheric gases
Photosynthetic pigments
The Viking missions to Mars
The Viking missions to Mars in the 1970s conducted the first experiments which were explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared inconclusive.
Mars Science Laboratory
The Mars Science Laboratory mission, with its Curiosity rover, is currently assessing the potential past and present habitability of the Martian environment and is attempting to detect biosignatures on the surface of Mars. Considering the MSL instrument payload package, the following classes of biosignatures are within the MSL detection window: organism morphologies (cells, body fossils, casts), biofabrics (including microbial mats), diagnostic organic molecules, isotopic signatures, evidence of biomineralization and bioalteration, spatial patterns in chemistry, and biogenic gases. The Curiosity rover targets outcrops to maximize the probability of detecting 'fossilized' organic matter preserved in sedimentary deposits.
ExoMars Orbiter
The 2016 ExoMars Trace Gas Orbiter (TGO) is a Mars telecommunications orbiter and atmospheric gas analyzer mission. It delivered the Schiaparelli EDM lander and then settled into its science orbit to map the sources of methane and other gases on Mars; in doing so, it will help select the landing site for the Rosalind Franklin rover, originally slated for launch in 2022. The primary objective of the Rosalind Franklin rover mission is the search for biosignatures on the surface and subsurface, using a drill able to collect samples down to a depth of two metres, away from the destructive radiation that bathes the surface.
Mars 2020 Rover
The Mars 2020 rover, which launched in 2020, is intended to investigate an astrobiologically relevant ancient environment on Mars, investigate its surface geological processes and history, including the assessment of its past habitability, the possibility of past life on Mars, and potential for preservation of biosignatures within accessible geological materials. In addition, it will cache the most interesting samples for possible future transport to Earth.
Titan Dragonfly
NASA's Dragonfly lander/aircraft concept is proposed to launch in 2025 and would seek evidence of biosignatures on the organic-rich surface and atmosphere of Titan, as well as study its possible prebiotic primordial soup. Titan is the largest moon of Saturn and is widely believed to have a large subsurface ocean consisting of a salty brine. In addition, scientists believe that Titan may have the conditions necessary to promote prebiotic chemistry, making it a prime candidate for biosignature discovery.
Europa Clipper
NASA's Europa Clipper probe is designed as a flyby mission to Jupiter's smallest Galilean moon, Europa. The mission launched in October 2024 and is set to reach Europa in April 2030, where it will investigate the potential for habitability on Europa. Europa is one of the best candidates for biosignature discovery in the Solar System because of the scientific consensus that it retains a subsurface ocean, with two to three times the volume of water on Earth. Evidence for this subsurface ocean includes:
Voyager 1 (1979): The first close-up photos of Europa are taken. Scientists propose that a subsurface ocean could cause the tectonic-like marks on the surface.
Galileo (1997): The magnetometer aboard this probe detected a subtle change in the magnetic field near Europa, later interpreted as a disruption of the expected field caused by currents induced in a conducting layer on Europa. The composition of this conducting layer is consistent with a salty subsurface ocean.
Hubble Space Telescope (2012): An image was taken of Europa which showed evidence for a plume of water vapor coming off the surface.
The Europa Clipper probe includes instruments to help confirm the existence and composition of a subsurface ocean and thick icy layer. In addition, the instruments will be used to map and study surface features that may indicate tectonic activity due to a subsurface ocean.
Enceladus
Although there are no set plans to search for biosignatures on Saturn's sixth-largest moon, Enceladus, the prospects of biosignature discovery there are exciting enough to warrant several mission concepts that may be funded in the future. Similar to Jupiter's moon Europa, there is much evidence for a subsurface ocean to also exist on Enceladus. Plumes of water vapor were first observed in 2005 by the Cassini mission and were later determined to contain salt as well as organic compounds. In 2014, more evidence was presented using gravimetric measurements on Enceladus to conclude that there is in fact a large reservoir of water underneath an icy surface. Mission design concepts include:
Enceladus Life Finder (ELF)
Enceladus Life Signatures and Habitability
Enceladus Organic Analyzer
Enceladus Explorer (En-Ex)
Explorer of Enceladus and Titan (E2T)
Journey to Enceladus and Titan (JET)
Life Investigation For Enceladus (LIFE)
Testing the Habitability of Enceladus's Ocean (THEO)
All of these concept missions have similar science goals: To assess the habitability of Enceladus and search for biosignatures, in line with the strategic map for exploring the ocean-world Enceladus.
Searching outside of the Solar System
At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). It is currently not feasible to send humans or even probes to search for biosignatures outside of the Solar System. The only way to search for biosignatures outside of the Solar System is by observing exoplanets with telescopes.
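The travel-time figure above is simple arithmetic, as the sketch below shows; the light-year conversion and the Juno benchmark speed are the round values quoted in the text.

LY_IN_KM = 9.461e12            # kilometres per light-year

distance_km = 4.2 * LY_IN_KM   # Earth to Proxima Centauri b
speed_km_per_h = 250_000       # benchmark: roughly Juno's top speed

hours = distance_km / speed_km_per_h
years = hours / (24 * 365.25)
print(f"~{years:,.0f} years")  # ~18,100 years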
There have been no plausible or confirmed biosignature detections outside of the Solar System. Despite this, it is a rapidly growing field of research because of the prospects of the next generation of telescopes. The James Webb Space Telescope, which launched in December 2021, is a promising next step in the search for biosignatures. Although its wavelength range and resolution are not compatible with some of the more important atmospheric biosignature gas bands, such as oxygen, it should still be able to detect some evidence for oxygen false-positive mechanisms.
The new generation of ground-based 30-meter class telescopes (Thirty Meter Telescope and Extremely Large Telescope) will have the ability to take high-resolution spectra of exoplanet atmospheres at a variety of wavelengths. These telescopes will be capable of distinguishing some of the more difficult false positive mechanisms such as the abiotic buildup of oxygen via photolysis. In addition, their large collecting area will enable high angular resolution, making direct imaging studies more feasible.
See also
Bioindicator
MERMOZ (remote detection of lifeforms)
Taphonomy
Technosignature
References
Astrobiology
Astrochemistry
Bioindicators
Biology terminology
Search for extraterrestrial intelligence
Petroleum geology | 0.801299 | 0.977392 | 0.783183 |
Vitality | Vitality is the capacity to live, grow, or develop. Vitality is also the characteristic that distinguishes living from non-living things. To experience vitality is regarded as a basic psychological drive and, in philosophy, a component to the will to live. As such, people seek to maximize their vitality or their experience of vitality—that which corresponds to an enhanced physiological capacity and mental state.
Overview
The pursuit and maintenance of health and vitality have been at the forefront of medicine and natural philosophy throughout history. Life depends upon various biological processes known as vital processes. Historically, these vital processes have been viewed as having either mechanistic or non-mechanistic causes. The latter point of view is characteristic of vitalism, the doctrine that the phenomena of life cannot be explained by purely chemical and physical mechanisms.
Prior to the 19th century, theoreticians often held that human lifespan had been less limited in the past, and that aging was due to a loss of, and failure to maintain, vitality.
A commonly held view was that people are born with finite vitality, which diminishes over time until illness and debility set in, and finally death.
Religion
In traditional cultures, the capacity for life is often directly equated with the breath or the soul. This can be found in the Hindu concept of prana, where vitality in the body derives from a subtle principle in the air and in food, as well as in Hebrew and ancient Greek texts.
Jainism
Vitality and DNA damage
Low vitality or fatigue is a common complaint by older patients and may reflect an underlying medical illness. Vitality level was measured in 2,487 Copenhagen patients using a standardized, subjective, self-reported vitality scale and was found to be inversely related to DNA damage (as measured in peripheral blood mononuclear cells). DNA damage indicates cellular dysfunction.
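A minimal sketch of how such an inverse association is typically quantified; the values below are illustrative placeholders, not the Copenhagen measurements, and the rank-based Spearman test is one common choice for a subjective, ordinal-style scale.

from scipy.stats import spearmanr

# Illustrative values only -- not the study data.
vitality_score = [78, 65, 82, 54, 70, 49, 88, 61]
dna_damage     = [0.8, 1.4, 0.6, 1.9, 1.5, 2.2, 0.5, 1.2]

rho, p = spearmanr(vitality_score, dna_damage)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # negative rho: higher
                                                  # vitality, lower damage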
See also
Urban vitality
Vitalism
References
Jain philosophical concepts
Natural philosophy
Philosophy of life
Quality of life | 0.797079 | 0.982506 | 0.783135 |
Social ecology (academic field) | Social ecology studies relationships between people and their environment, often the interdependence of people, collectives and institutions. Evolving out of biological ecology, human ecology, systems theory and ecological psychology, social ecology takes a “broad, interdisciplinary perspective that gives greater attention to the social, psychological, institutional, and cultural contexts of people-environment relations than did earlier versions of human ecology.” The concept has been employed to study a diverse array of social problems and policies within the behavioural and social sciences.
Conceptual orientation
As described by Stokols, the core principles of social ecology include:
Multidimensional structure of human environments—physical & social, natural & built features; objective-material as well as perceived-symbolic (or semiotic); virtual & place-based features
Cross-disciplinary, multi-level, contextual analyses of people-environment relationships spanning proximal and distal scales (from narrow to broad spatial, sociocultural, and temporal scope)
Systems principles, especially feedback loops, interdependence of system elements, anticipating unintended side effects of public policies and environmental interventions
Translation of theory and research findings into community interventions and public policies
Privileging and combining both academic and non-academic perspectives, including scientists and academicians, lay citizens and community stakeholder groups, business leaders and other professional groups, and government decision makers.
Transdisciplinary values and orientation, synthesizing concepts and methods from different fields that pertain to particular research topics.
Academic programs
Several academic programs combine a broad definition of “environmental studies” with analyses of social processes, biological considerations, and the physical environment. A number of social ecology degree-granting programs and research institutes shape the global evolution of the social ecological paradigm. For example, see:
College of the Atlantic
UC Irvine School of Social Ecology
Yale School of Forestry & Environmental Studies
Cornell University College of Human Ecology
New York University, Environmental Education
The Institute for Social Ecology in Plainfield, VT
The Institute for Social-Ecological Research, Frankfurt
Institute of Social Ecology, Vienna
Stockholm Resilience Centre
Most of the 120 listed programs at the link below are in human ecology, but many overlap with social ecology:
Society for Human Ecology list of programs and institutions
See also
Social ecology (Bookchin)
Social ecological model
Ecology
Environmental stewardship
References
External links
Conceptual social ecology
Google scholar search on social ecological systems
Article on expansion of social ecological systems science | 0.804348 | 0.973533 | 0.78306 |
Earth system science | Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
Definition
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variabilities across space and time are beyond human experience, because of the stability of the recent Holocene. Much Earth System science therefore relies on studies of the Earth's past behaviour and models to anticipate future behaviour in response to pressures.
Life: Biological processes play a much stronger role in the functioning and responses of the Earth System than previously thought. It appears to be integral to every part of the Earth System.
Connectivity: Processes are connected in ways and across depths and lateral distances that were previously unknown and inconceivable.
Non-linear: The behaviour of the Earth System is typified by strong non-linearities. This means that abrupt change can result when relatively small changes in a 'forcing function' push the System across a 'threshold'.
History
For millennia, humans have speculated how the physical and living elements on the surface of the Earth combine, with gods and goddesses frequently posited to embody specific elements. The notion that the Earth, itself, is alive was a regular theme of Greek philosophy and religion.
Early scientific interpretations of the Earth system began in the field of geology, initially in the Middle East and China, and largely focused on aspects such as the age of the Earth and the large-scale processes involved in mountain and ocean formation. As geology developed as a science, understanding of the interplay of different facets of the Earth system increased, leading to the inclusion of factors such as the Earth's interior, planetary geology, living systems and Earth-like worlds.
In many respects, the foundational concepts of Earth System science can be seen in the natural philosophy of the 19th-century geographer Alexander von Humboldt. In the 20th century, Vladimir Vernadsky (1863–1945) saw the functioning of the biosphere as a geological force generating a dynamic disequilibrium, which in turn promoted the diversity of life.
In parallel, the field of systems science was developing across numerous other scientific fields, driven in part by the increasing availability and power of computers, and leading to the development of climate models that began to allow the detailed and interacting simulations of the Earth's weather and climate. Subsequent extension of these models has led to the development of "Earth system models" (ESMs) that include facets such as the cryosphere and the biosphere.
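At the simplest end of this hierarchy of models sits the zero-dimensional energy-balance model, in which equilibrium requires absorbed sunlight to balance emitted thermal radiation, S0(1 - albedo)/4 = sigma * T^4. The sketch below uses standard textbook values and is meant only to show the flavour of such models.

S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.30     # planetary albedo
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Equilibrium: S0 * (1 - ALBEDO) / 4 = SIGMA * T^4
T_eq = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"T_eq ~ {T_eq:.0f} K")  # ~255 K; the gap to Earth's ~288 K surface
                               # temperature is the greenhouse effect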
The field took formal shape in the 1980s, when NASA established its Earth System Science Committee in 1983. The committee's earliest reports, Earth System Science: Overview (1986) and the book-length Earth System Science: A Closer View (1988), constitute a major landmark in the formal development of Earth system science. Early works discussing Earth system science, like these NASA reports, generally emphasized the increasing human impacts on the Earth system as a primary driver for the need for greater integration among the life and geo-sciences, making the origins of Earth system science parallel to the beginnings of global change studies and programs.
Climate science
Climatology and climate change have been central to Earth System science since its inception, as evidenced by the prominent place given to climate change in the early NASA reports discussed above. The Earth's climate system is a prime example of an emergent property of the whole planetary system, that is, one which cannot be fully understood without regarding it as a single integrated entity. It is also a system where human impacts have been growing rapidly in recent decades, lending immense importance to the successful development and advancement of Earth System science research. As just one example of the centrality of climatology to the field, leading American climatologist Michael E. Mann is the Director of one of the earliest centers for Earth System science research, the Earth System Science Center at Pennsylvania State University, and its mission statement reads, "the Earth System Science Center (ESSC) maintains a mission to describe, model, and understand the Earth's climate system".
Education
Earth System science can be studied at a postgraduate level at some universities. In general education, the American Geophysical Union, in cooperation with the Keck Geology Consortium and with support from five divisions within the National Science Foundation, convened a workshop in 1996, "to define common educational goals among all disciplines in the Earth sciences". In its report, participants noted that, "The fields that make up the Earth and space sciences are currently undergoing a major advancement that promotes understanding the Earth as a number of interrelated systems". Recognizing the rise of this systems approach, the workshop report recommended that an Earth System science curriculum be developed with support from the National Science Foundation.
In 2000, the Earth System Science Education Alliance (ESSEA) was begun; it currently includes the participation of more than 40 institutions, with over 3,000 teachers having completed an ESSEA course as of fall 2009.
Related concepts
The concept of earth system law (as of 2021, still in its infancy) is a sub-discipline of earth system governance, itself a subfield of the earth system sciences analyzed from a social-sciences perspective.
See also
References
External links
Earth system science at Nature.com
Global natural environment
Complex systems theory | 0.787593 | 0.994132 | 0.782972 |
Evolutionary biology | Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, genetic variation affects the phenotypes (physical characteristics) of organisms, and variations that prove advantageous to some organisms are passed on to their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what were once seen as the major divisions of life. A third way is by approach, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolutionary biology to create subfields like evolutionary ecology and evolutionary developmental biology.
More recently, the merging of biological science and applied sciences gave birth to new fields that extend evolutionary biology, including evolutionary robotics, engineering, algorithms, economics, and architecture. The basic mechanisms of evolution are applied directly or indirectly to come up with novel designs or to solve problems that are difficult to solve otherwise. The research generated in these applied fields contributes to progress, especially through work on evolution in computer science and engineering fields such as mechanical engineering.
Different types of evolution
Adaptive evolution
Adaptive evolution relates to evolutionary changes that happen in response to changes in the environment, making the organism suited to its habitat. This change increases the chances of survival and reproduction of the organism (this can be referred to as an organism's fitness). For example, Darwin's finches on the Galápagos Islands developed differently shaped beaks suited to the foods available to them. Adaptive evolution can also be convergent evolution, if two distantly related species live in similar environments facing similar pressures.
Convergent evolution
Convergent evolution is the process in which unrelated or distantly related organisms evolve similar characteristics independently. This type of evolution creates analogous structures, which have a similar function, structure, or form between the two species. For example, sharks and dolphins look alike but they are not related. Likewise, birds, flying insects, and bats all have the ability to fly, but they are not related to each other. These similar traits tend to evolve from similar environmental pressures.
Divergent evolution
Divergent evolution is the process of speciation. This can happen in several ways:
Allopatric speciation is when species are separated by a physical barrier that separates the population into two groups. Evolutionary mechanisms such as genetic drift and natural selection can then act independently on each population.
Peripatric speciation is a type of allopatric speciation that occurs when one of the new populations is considerably smaller than the other initial population. This leads to the founder effect, whereby the new population can have different allele frequencies and phenotypes than the original population. These small populations are also more likely to see effects from genetic drift.
Parapatric speciation resembles allopatric speciation but occurs when the species diverge without a physical barrier separating the populations. This tends to occur when a population of a species is incredibly large and occupies a vast environment.
Sympatric speciation is when a new species or subspecies sprouts from the original population while still occupying the same small environment, and without any physical barriers separating them from members of their original population. There is scientific debate as to whether sympatric speciation actually exists.
Artificial speciation is when scientists purposefully cause new species to emerge to use in laboratory procedures.
Coevolution
The reciprocal evolutionary influence between two closely associated species is known as coevolution. When two or more species evolve in company with each other, one species adapts to changes in the other. This type of evolution often happens in species that have symbiotic relationships. Predator-prey coevolution is the most common type: the predator must evolve to become a more effective hunter because there is selective pressure on the prey to avoid capture, and the prey in turn must develop better survival strategies. The Red Queen hypothesis describes such predator-prey interactions. The relationships between pollinating insects like bees and flowering plants, and between herbivores and plants, are also common examples of diffuse or guild coevolution.
Mechanism: The process of evolution
The mechanisms of evolution focus mainly on mutation, genetic drift, gene flow, non-random mating, and natural selection.
Mutation: Mutation is a change in the DNA sequence inside a gene or a chromosome of an organism. Most mutations are deleterious or neutral, neither harming nor benefiting the organism, but a small fraction are beneficial.
Genetic drift: Genetic drift is a variational process that results from sampling error between generations: chance events alter allele frequencies within a population. It has a much stronger effect on small populations than on large ones (see the simulation sketch after this list).
Gene flow: Gene flow is the transfer of genetic material from the gene pool of one population to another. The migration of individuals between populations changes allele frequencies.
Natural selection: The survival and reproductive rate of a species depends on the adaptability of the species to its environment. This process is called natural selection. Individuals with certain traits in a population have higher survival and reproductive rates than others (fitness), and they pass these heritable features on to their offspring.
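The simulation sketch referenced in the genetic-drift item above: a minimal Wright-Fisher model in which a neutral allele's frequency is resampled each generation. Population sizes, the starting frequency, and the seed are illustrative choices.

import random

def wright_fisher(pop_size, p0=0.5, generations=200, seed=1):
    """Track a neutral allele's frequency under pure drift."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is an
        # independent draw from the current allele frequency.
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return p

for n in (10, 10_000):
    print(n, wright_fisher(pop_size=n))
# The small population typically loses or fixes the allele within a few
# dozen generations; the large one stays near its starting frequency.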
Evolutionary developmental biology
In evolutionary developmental biology, scientists look at how the different processes in development play a role in how a specific organism reaches its current body plan. The genetic regulation of ontogeny and the phylogenetic process is what allows for this kind of understanding of biology to be possible. By looking at different processes during development, and going through the evolutionary tree, one can determine at which point a specific structure came about. For example, the three germ layers are absent in cnidarians and ctenophores but present in worms, where they are more or less developed depending on the kind of worm. Other structures, like the development of Hox genes and sensory organs such as eyes, can also be traced with this practice.
Phylogenetic Trees
Phylogenetic trees are representations of genetic lineage: figures that show how related species are to one another. They are constructed by analyzing the physical traits and DNA similarities between species; a molecular clock can then be used to estimate when the species diverged. An example of a phylogeny is the tree of life.
Homologs
Genes that have shared ancestry are homologs. If a speciation event occurs and one gene ends up in two different species, the genes are orthologous. If a gene is duplicated within a single species, the copies are paralogs. A molecular clock can be used to estimate when these events occurred.
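A minimal molecular-clock sketch for the divergence-time estimates mentioned above: under a constant substitution rate r, two lineages separated for time t accumulate a pairwise distance d of about 2rt, so t = d / (2r). The rate and distance below are illustrative, locus-dependent assumptions.

def divergence_time(pairwise_distance, rate_per_site_per_year):
    """t = d / (2r): both lineages accumulate substitutions independently."""
    return pairwise_distance / (2.0 * rate_per_site_per_year)

# Hypothetical: 2% sequence divergence at a clock-like locus
t = divergence_time(pairwise_distance=0.02, rate_per_site_per_year=1e-9)
print(f"~{t / 1e6:.0f} million years since the split")  # ~10 Myr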
History
The idea of evolution by natural selection was proposed by Charles Darwin in 1859, but evolutionary biology, as an academic discipline in its own right, emerged during the period of the modern synthesis in the 1930s and 1940s. It was not until the 1980s that many universities had departments of evolutionary biology. In the United States, many universities have created departments of molecular and cell biology or ecology and evolutionary biology, in place of the older departments of botany and zoology. Palaeontology is often grouped with earth science.
Microbiology too is becoming an evolutionary discipline now that microbial physiology and genomics are better understood. The quick generation time of bacteria and viruses such as bacteriophages makes it possible to explore evolutionary questions.
Many biologists have contributed to shaping the modern discipline of evolutionary biology. Theodosius Dobzhansky and E. B. Ford established an empirical research programme. Ronald Fisher, Sewall Wright, and J. B. S. Haldane created a sound theoretical framework. Ernst Mayr in systematics, George Gaylord Simpson in paleontology and G. Ledyard Stebbins in botany helped to form the modern synthesis. James Crow, Richard Lewontin, Dan Hartl, Marcus Feldman, and Brian Charlesworth trained a generation of evolutionary biologists.
Current research topics
Current research in evolutionary biology covers diverse topics and incorporates ideas from diverse areas, such as molecular genetics and computer science.
First, some fields of evolutionary research try to explain phenomena that were poorly accounted for in the modern evolutionary synthesis. These include speciation, the evolution of sexual reproduction, the evolution of cooperation, the evolution of ageing, and evolvability.
Second, some evolutionary biologists ask the most straightforward evolutionary question: "what happened and when?". This includes fields such as paleobiology, where paleobiologists and evolutionary biologists, including Thomas Halliday and Anjali Goswami, have studied the evolution of early mammals through the Mesozoic and Cenozoic eras (from roughly 252 million years ago to the present). Other fields related to this general exploration of evolution include systematics and phylogenetics.
Third, the modern evolutionary synthesis was devised at a time when nobody understood the molecular basis of genes. Today, evolutionary biologists try to determine the genetic architecture of interesting evolutionary phenomena such as adaptation and speciation. They seek answers to questions such as how many genes are involved, how large are the effects of each gene, how interdependent are the effects of different genes, what do the genes do, and what changes happen to them (e.g., point mutations vs. gene duplication or even genome duplication). They try to reconcile the high heritability seen in twin studies with the difficulty in finding which genes are responsible for this heritability using genome-wide association studies.
One challenge in studying genetic architecture is that the classical population genetics that catalysed the modern evolutionary synthesis must be updated to take into account modern molecular knowledge. This requires a great deal of mathematical development to relate DNA sequence data to evolutionary theory as part of a theory of molecular evolution. For example, biologists try to infer which genes have been under strong selection by detecting selective sweeps.
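One simple quantity used in such sequence-based scans is nucleotide diversity (pi), the average number of pairwise differences per site; a genomic region recently swept by positive selection shows locally depressed diversity. A minimal sketch on toy sequences follows.

from itertools import combinations

def nucleotide_diversity(seqs):
    """pi: mean pairwise differences per site across sampled sequences."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

sample = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "ACGTACGT"]
print(f"pi = {nucleotide_diversity(sample):.3f}")  # 0.125 for these toy data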
Fourth, the modern evolutionary synthesis involved agreement about which forces contribute to evolution, but not about their relative importance. Current research seeks to determine this. Evolutionary forces include natural selection, sexual selection, genetic drift, genetic draft, developmental constraints, mutation bias and biogeography.
This evolutionary approach is key to much current research in organismal biology and ecology, such as life history theory. Annotation of genes and their function relies heavily on comparative approaches. The field of evolutionary developmental biology ("evo-devo") investigates how developmental processes work, and compares them in different organisms to determine how they evolved.
Many physicians do not have enough background in evolutionary biology, making it difficult to use it in modern medicine. However, there are efforts to gain a deeper understanding of disease through evolutionary medicine and to develop evolutionary therapies.
Drug resistance today
Evolution plays a role in the resistance of drugs; for example, HIV becomes resistant to medications and to the body's immune system. Resistance in HIV arises from the natural selection of the survivors and their offspring: the few viruses that survive the immune system reproduce and have offspring that are also resistant to the immune system. Drug resistance also causes many problems for patients, such as a worsening sickness or a sickness that mutates into something that can no longer be cured with medication. Without the proper medicine, a sickness can be the death of a patient, and if the patient's body has resistance to a certain number of drugs, then the right medicine becomes harder and harder to find. Not completing a prescribed course of antibiotics is another example: the bacteria that survive the initial, incomplete dosage continue to reproduce, which can bring on another bout of sickness later that is more difficult to cure because the bacteria involved are resistant to the first medication used. Taking the full course of medicine as prescribed is therefore a vital step in avoiding antibiotic resistance.
Individuals with chronic illnesses, especially those that can recur throughout a lifetime, are at greater risk of antibiotic resistance than others. This is because overuse of a drug or too high a dosage can weaken a patient's immune system while the illness evolves and grows stronger. For example, cancer patients may need stronger and stronger dosages of medication because of their poorly functioning immune systems.
Journals
Some scientific journals specialise exclusively in evolutionary biology as a whole, including the journals Evolution, Journal of Evolutionary Biology, and BMC Evolutionary Biology. Some journals cover sub-specialties within evolutionary biology, such as the journals Systematic Biology, Molecular Biology and Evolution and its sister journal Genome Biology and Evolution, and Cladistics.
Other journals combine aspects of evolutionary biology with other related fields. For example, Molecular Ecology, Proceedings of the Royal Society of London Series B, The American Naturalist and Theoretical Population Biology have overlap with ecology and other aspects of organismal biology. Overlap with ecology is also prominent in the review journals Trends in Ecology and Evolution and Annual Review of Ecology, Evolution, and Systematics. The journals Genetics and PLoS Genetics overlap with molecular genetics questions that are not obviously evolutionary in nature.
See also
Comparative anatomy
Computational phylogenetics
Evolutionary computation
Evolutionary dynamics
Evolutionary neuroscience
Evolutionary physiology
On the Origin of Species
Macroevolution
Phylogenetic comparative methods
Quantitative genetics
Selective breeding
Taxonomy (biology)
Speculative evolution
References
External links
Evolution and Paleobotany at the Encyclopædia Britannica
Philosophy of biology | 0.785853 | 0.996308 | 0.782952 |
Environmental hazard | Environmental hazards are those hazards that affect biomes or ecosystems. Well-known examples include oil spills, water pollution, slash-and-burn deforestation, air pollution, ground fissures, and the build-up of atmospheric carbon dioxide. Physical exposure to environmental hazards is usually involuntary.
Types
Environmental hazards can be categorized in many different ways. One common scheme distinguishes four types: chemical, physical, biological, and psychological.
Chemical hazards are substances that can cause harm or damage to humans, animals, or the environment. They can be in the form of solids, liquids, gases, mists, dusts, fumes, and vapors. Exposure can occur through inhalation, skin absorption, ingestion, or direct contact. Chemical hazards include substances such as pesticides, solvents, acids, bases, reactive metals, and poisonous gases. Exposure to these substances can result in health effects such as skin irritation, respiratory problems, organ damage, neurological effects, and cancer.
Physical hazards are factors within the environment that can harm the body without necessarily touching it. They include a wide range of environmental factors such as noise, vibration, extreme temperatures, radiation, and ergonomic hazards. Physical hazards may lead to injuries like burns, fractures, hearing loss, vision impairment, or other physical harm. They can be present in many work settings such as construction sites, manufacturing plants, and even office spaces.
Biological hazards, also known as biohazards, are organic substances that pose a threat to the health of living organisms, primarily humans. This can include medical waste, samples of a microorganism, virus, or toxin (from a biological source) that can impact human health. Biological hazards can also include substances harmful to animals. Examples of biological hazards include bacteria, viruses, fungi, other microorganisms and their associated toxins. They may cause a myriad of diseases, from flu to more serious and potentially fatal diseases.
Psychological hazards are aspects of work and work environments that can cause psychological harm or mental ill-health. These include factors such as stress, workplace bullying, fatigue, burnout, and violence, among others. These hazards can lead to psychological issues like anxiety, depression, and post-traumatic stress disorder (PTSD). Psychological hazards can exist in any type of workplace, and their management is a crucial aspect of occupational health and safety.
Environmental hazard identification
Environmental hazard identification is the first step in environmental risk assessment, which is the process of assessing the likelihood, or risk, of adverse effects resulting from a given environmental stressor. Hazard identification is the determination of whether, and under what conditions, a given environmental stressor has the potential to cause harm.
In hazard identification, sources of data on the risks associated with prospective hazards are identified. For instance, if a site is known to be contaminated with a variety of industrial pollutants, hazard identification will determine which of these chemicals could result in adverse human health effects, and what effects they could cause. Risk assessors rely on both laboratory (e.g., toxicological) and epidemiological data to make these determinations.
Conceptual model of exposure
Hazards have the potential to cause adverse effects only if they come into contact with populations that may be harmed. For this reason, hazard identification includes the development of a conceptual model of exposure. Conceptual models communicate the pathway connecting sources of a given hazard to the potentially exposed population(s). The U.S. Agency for Toxic Substances and Disease Registry establishes five elements that should be included in a conceptual model of exposure (a minimal data-structure sketch follows the list):
The source of the hazard in question
Environmental fate and transport, or how the hazard moves and changes in the environment after its release
Exposure point or area, or the place at which an exposed person comes into contact with the hazard
Exposure route, or the manner by which an exposed person comes into contact with the hazard (e.g., orally, dermally, or by inhalation)
Potentially exposed populations.
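The five elements above map naturally onto a simple record type. A minimal sketch follows; the field names paraphrase the ATSDR elements and the example values are hypothetical, not an official schema.

from dataclasses import dataclass

@dataclass
class ConceptualExposureModel:
    source: str              # where the hazard originates
    fate_and_transport: str  # how it moves and changes after release
    exposure_point: str      # where contact with the hazard occurs
    exposure_route: str      # oral, dermal, or inhalation
    populations: list[str]   # who may be exposed

well_arsenic = ConceptualExposureModel(
    source="naturally occurring arsenic in bedrock",
    fate_and_transport="leaching into groundwater",
    exposure_point="private drinking-water wells",
    exposure_route="oral (ingestion)",
    populations=["households using untreated well water"],
)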
Evaluating hazard data
Once a conceptual model of exposure is developed for a given hazard, measurements should be taken to determine the presence and quantity of the hazard. These measurements should be compared to appropriate reference levels to determine whether a hazard exists. For instance, if arsenic is detected in tap water from a given well, the detected concentrations should be compared with regulatory thresholds for allowable levels of arsenic in drinking water. If the detected levels are consistently lower than these limits, arsenic may not be a chemical of potential concern for the purposes of this risk assessment. When interpreting hazard data, risk assessors must consider the sensitivity of the instrument and method used to take these measurements, including any relevant detection limits (i.e., the lowest level of a given substance that an instrument or method is capable of detecting).
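A minimal sketch of the screening comparison just described; the 10 µg/L arsenic reference matches the widely used WHO/EPA drinking-water guideline, while the half-detection-limit convention for non-detects is one common simplification rather than a regulatory requirement.

def screen(chemical, measured_ug_L, reference_ug_L, detection_limit_ug_L):
    """Flag a chemical of potential concern against a reference level."""
    if measured_ug_L is None:  # non-detect
        # One common convention: evaluate at half the detection limit.
        measured_ug_L = detection_limit_ug_L / 2.0
    return chemical, measured_ug_L, measured_ug_L > reference_ug_L

print(screen("arsenic", measured_ug_L=14.0,
             reference_ug_L=10.0, detection_limit_ug_L=1.0))
# ('arsenic', 14.0, True) -> retained as a chemical of potential concern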
Chemical
Chemical hazards are defined in the Globally Harmonized System and in the European Union chemical regulations. They are caused by chemical substances causing significant damage to the environment. The label is particularly applicable towards substances with aquatic toxicity. An example is zinc oxide, a common paint pigment, which is extremely toxic to aquatic life.
Toxicity or other hazards do not imply an environmental hazard, because elimination by sunlight (photolysis), water (hydrolysis) or organisms (biological elimination) neutralizes many reactive or poisonous substances. Persistence towards these elimination mechanisms combined with toxicity gives the substance the ability to do damage in the long term. Also, the lack of immediate human toxicity does not mean the substance is environmentally nonhazardous. For example, tanker truck-sized spills of substances such as milk can cause a lot of damage in the local aquatic ecosystems: the added biological oxygen demand causes rapid eutrophication, leading to anoxic conditions in the water body.
All hazards in this category are mainly anthropogenic, although a number of natural carcinogens exist, and chemical elements such as radon and lead may turn up in health-critical concentrations in the natural environment:
Antibiotic agents in animals destined for human consumption
Arsenic - a contaminant of fresh water sources (water wells)
Asbestos - carcinogenic
Hormones in animals destined for human consumption
Lead in paint
Radon and other natural sources of radioactivity
Physical
A physical hazard is a type of occupational hazard that involves environmental factors that can cause harm with or without contact, such as the noise, vibration, extreme temperatures, and radiation described above.
Biological
Biological hazards, also known as biohazards, refer to biological substances that pose a threat to the health of living organisms, primarily that of humans. This can include medical waste or samples of a microorganism, virus or toxin (from a biological source) that can affect human health. Examples include:
Mold, a common allergen
Bovine spongiform encephalopathy (BSE)
Onchocerciasis (river blindness)
Severe acute respiratory syndrome (SARS)
Psychological
Psychological hazards include but are not limited to stress, violence and other workplace stressors. Work is generally beneficial to mental health and personal wellbeing. It provides people with structure and purpose and a sense of identity.
See also
References
Environmental health
Hazards
Public health | 0.787904 | 0.993679 | 0.782924 |
Macrosociology | Macrosociology is a large-scale approach to sociology, emphasizing the analysis of social systems and populations at the structural level, often at a necessarily high level of theoretical abstraction. Though macrosociology does concern itself with individuals, families, and other constituent aspects of a society, it does so in relation to larger social system of which such elements are a part. The approach is also able to analyze generalized collectivities (e.g. "the city", "the church").
In contrast, microsociology focuses on individual social agency. Macrosociology, however, deals with broad societal trends that can later be applied to smaller features of society, or vice versa. To differentiate, macrosociology deals with issues such as war as a whole; the distress of Third-World countries; poverty on a national or international level; and environmental deprivation, whereas microsociology analyses issues such as the individual features of war (e.g. camaraderie, one's pleasure in violence); the role of women in Third-World countries; poverty's effect on the family; and how immigration impacts a country's environment.
A "society" can be considered as a collective of human populations that are politically autonomous, in which members engage in a broad range of cooperative activities. The people of Germany, for example, can be deemed "a society", whereas people with German heritage as a whole, including those who populate other countries, would not be considered a society, per se.
Theoretical strategies
There are a number of theoretical strategies within contemporary macrosociology, though four approaches, in particular, have the most influence:
Idealist Strategy: Attempts to explain the basic features of social life by reference to the creative capacity of the human mind. "Idealists believe that human uniqueness lies in the fact that humans attach symbolic meanings to their actions."
Materialist Strategy: Attempts to explain the basic features of human social life in terms of the practical, material conditions of their existence, including the nature of a physical environment; the level of technology; and the organization of an economic system.
Functionalist Strategy (or structural functionalism): Functionalism essentially states that societies are complex systems of interrelated and interdependent parts, and each part of a society significantly influences the others. Moreover, each part of society exists because it has a specific function to perform in contributing to the society as a whole. As such, societies tend toward a state of equilibrium or homeostasis, and if there is a disturbance in any part of the society then the other parts will adjust to restore the stability of the society as a whole.
Conflict Theoretical Strategy (or conflict theory): Rejects the idea that societies tend toward some basic consensus of harmony in which the features of society work for everyone's good. Rather, the basic structure of society is determined by individuals and groups acquiring scarce resources to satisfy their own needs and wants, thus creating endless conflicts.
Historical macrosociology
Historical macrosociology can be understood as an approach that uses historical knowledge to try to solve some of the problems seen in the field of macrosociology. As globalization has affected the world, it has also influenced historical macrosociology, leading to the development of two distinct branches:
Comparative and historical sociology (CHS): a branch of historical macrosociology that bases its analysis on states, searching for "generalizations about common properties and principles of variation among instances across time and space." More recently, it has been argued that globalization poses a threat to the CHS way of thinking because it often leads to the dissolution of distinct states.
Political Economy of the World-Systems (PEWS): a branch of historical macrosociology that bases its analysis on the systems of states, searching for "generalizations about interdependencies among a system's components and of principles of variation among systemic conditions across time and space."
Historical macrosociologists include:
Charles Tilly: developed theory of CHS, in which analysis is based on national states.
Immanuel Wallerstein: developed world systems theory, in which analysis is based on world capitalist systems.
Linking micro- and macro-sociology
Perhaps the most highly developed integrative effort to link micro- and macro-sociological phenomena is found in Anthony Giddens's theory of structuration, in which "social structure is defined as both constraining and enabling of human activity as well as both internal and external to the actor."
Attempts to link micro and macro phenomena are evident in a growing body of empirical research. Such work appears to follow Giddens' view of the constraining and enabling nature of social structure for human activity and the need to link structure and action. "It appears safe to say that while macrosociology will always remain a central component of sociological theory and research, increasing effort will be devoted to creating workable models that link it with its microcounterpart."
See also
Base and superstructure
Cliodynamics
General systems theory
Modernization theory
Sociocybernetics
Structure and agency
Systems philosophy
Further reading
Tilly, Charles. 1995. "Macrosociology Past and Future." Newsletter of the Comparative & Historical Sociology 8(1–2): 1, 3–4. American Sociological Association.
Francois, P., J. G. Manning, Harvey Whitehouse, Rob Brennan, et al. 2016. "A Macroscope for Global History. Seshat Global History Databank: A Methodological Overview." Digital Humanities Quarterly Journal 4(26).
Agriculture

Agriculture encompasses crop and livestock production, aquaculture, and forestry for food and non-food products. Agriculture was a key factor in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that enabled people to live in cities. While humans started gathering grains at least 105,000 years ago, nascent farmers only began planting them around 11,500 years ago. Sheep, goats, pigs, and cattle were domesticated around 10,000 years ago. Plants were independently cultivated in at least 11 regions of the world. In the 20th century, industrial agriculture based on large-scale monocultures came to dominate agricultural output.
Small farms produce about one-third of the world's food, yet large farms are prevalent: the largest 1% of farms in the world operate more than 70% of the world's farmland, and nearly 40% of agricultural land is found on the largest farms. However, five of every six farms in the world are small holdings that together take up only around 12% of all agricultural land. Farms and farming greatly influence rural economics and greatly shape rural society, affecting both the direct agricultural workforce and the broader businesses that support farms and farming populations.
The major agricultural products can be broadly grouped into foods, fibers, fuels, and raw materials (such as rubber). Food classes include cereals (grains), vegetables, fruits, cooking oils, meat, milk, eggs, and fungi. Global agricultural production amounts to approximately 11 billion tonnes of food, 32 million tonnes of natural fibers and 4 billion m3 of wood. However, around 14% of the world's food is lost from production before reaching the retail level.
Modern agronomy, plant breeding, agrochemicals such as pesticides and fertilizers, and technological developments have sharply increased crop yields, but also contributed to ecological and environmental damage. Selective breeding and modern practices in animal husbandry have similarly increased the output of meat, but have raised concerns about animal welfare and environmental damage. Environmental issues include contributions to climate change, depletion of aquifers, deforestation, antibiotic resistance, and other agricultural pollution. Agriculture is both a cause of and sensitive to environmental degradation, such as biodiversity loss, desertification, soil degradation, and climate change, all of which can cause decreases in crop yield. Genetically modified organisms are widely used, although some countries ban them.
Etymology and scope
The word agriculture is a late Middle English adaptation of Latin agricultura, from ager 'field' and cultura 'cultivation' or 'growing'. While agriculture usually refers to human activities, certain species of ant, termite and beetle have been cultivating crops for up to 60 million years. Agriculture is defined with varying scopes, in its broadest sense using natural resources to "produce commodities which maintain life, including food, fiber, forest products, horticultural crops, and their related services". Thus defined, it includes arable farming, horticulture, animal husbandry and forestry, but horticulture and forestry are in practice often excluded.
It may also be broadly decomposed into plant agriculture, which concerns the cultivation of useful plants, and animal agriculture, the production of agricultural animals.
History
Origins
The development of agriculture enabled the human population to grow many times larger than could be sustained by hunting and gathering. Agriculture began independently in different parts of the globe, and included a diverse range of taxa, in at least 11 separate centers of origin. Wild grains were collected and eaten from at least 105,000 years ago. In the Paleolithic Levant, 23,000 years ago, cultivation of cereals such as emmer, barley, and oats has been observed near the Sea of Galilee. Rice was domesticated in China between 11,500 and 6,200 BC with the earliest known cultivation from 5,700 BC, followed by mung, soy and azuki beans. Sheep were domesticated in Mesopotamia between 13,000 and 11,000 years ago. Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan some 10,500 years ago. Pig production emerged in Eurasia, including Europe, East Asia and Southwest Asia, where wild boar were first domesticated about 10,500 years ago. In the Andes of South America, the potato was domesticated between 10,000 and 7,000 years ago, along with beans, coca, llamas, alpacas, and guinea pigs. Sugarcane and some root vegetables were domesticated in New Guinea around 9,000 years ago. Sorghum was domesticated in the Sahel region of Africa by 7,000 years ago. Cotton was domesticated in Peru by 5,600 years ago, and was independently domesticated in Eurasia. In Mesoamerica, wild teosinte was bred into maize (corn) from 10,000 to 6,000 years ago. The horse was domesticated in the Eurasian Steppes around 3500 BC.
Scholars have offered multiple hypotheses to explain the historical origins of agriculture. Studies of the transition from hunter-gatherer to agricultural societies indicate an initial period of intensification and increasing sedentism; examples are the Natufian culture in the Levant, and the Early Chinese Neolithic in China. Then, wild stands that had previously been harvested started to be planted, and gradually came to be domesticated.
Civilizations
In Eurasia, the Sumerians started to live in villages from about 8,000 BC, relying on the Tigris and Euphrates rivers and a canal system for irrigation. Ploughs appear in pictographs around 3,000 BC; seed-ploughs around 2,300 BC. Farmers grew wheat, barley, vegetables such as lentils and onions, and fruits including dates, grapes, and figs. Ancient Egyptian agriculture relied on the Nile River and its seasonal flooding. Farming started in the predynastic period at the end of the Paleolithic, after 10,000 BC. Staple food crops were grains such as wheat and barley, alongside industrial crops such as flax and papyrus. In India, wheat, barley and jujube were domesticated by 9,000 BC, soon followed by sheep and goats. Cattle, sheep and goats were domesticated in Mehrgarh culture by 8,000–6,000 BC. Cotton was cultivated by the 5th–4th millennium BC. Archeological evidence indicates an animal-drawn plough from 2,500 BC in the Indus Valley civilization.
In China, from the 5th century BC, there was a nationwide granary system and widespread silk farming. Water-powered grain mills were in use by the 1st century BC, followed by irrigation. By the late 2nd century, heavy ploughs had been developed with iron ploughshares and mouldboards. These spread westwards across Eurasia. Asian rice was domesticated 8,200–13,500 years ago – depending on the molecular clock estimate that is used – on the Pearl River in southern China with a single genetic origin from the wild rice Oryza rufipogon. In Greece and Rome, the major cereals were wheat, emmer, and barley, alongside vegetables including peas, beans, and olives. Sheep and goats were kept mainly for dairy products.
In the Americas, crops domesticated in Mesoamerica (apart from teosinte) include squash, beans, and cacao. Cocoa was domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC.
The turkey was probably domesticated in Mexico or the American Southwest. The Aztecs developed irrigation systems, formed terraced hillsides, fertilized their soil, and developed chinampas or artificial islands. The Mayas used extensive canal and raised field systems to farm swampland from 400 BC. In South America agriculture may have begun about 9000 BC with the domestication of squash (Cucurbita) and other plants. Coca was domesticated in the Andes, as were the peanut, tomato, tobacco, and pineapple. Cotton was domesticated in Peru by 3,600 BC. Animals including llamas, alpacas, and guinea pigs were domesticated there. In North America, the indigenous people of the East domesticated crops such as sunflower, tobacco, squash and Chenopodium. Wild foods including wild rice and maple sugar were harvested. The domesticated strawberry is a hybrid of a Chilean and a North American species, developed by breeding in Europe and North America. The indigenous people of the Southwest and the Pacific Northwest practiced forest gardening and fire-stick farming. The natives controlled fire on a regional scale to create a low-intensity fire ecology that sustained a low-density agriculture in loose rotation; a sort of "wild" permaculture. A system of companion planting called the Three Sisters was developed in North America. The three crops were winter squash, maize, and climbing beans.
Indigenous Australians, long supposed to have been nomadic hunter-gatherers, practiced systematic burning, possibly to enhance natural productivity in fire-stick farming. Scholars have pointed out that hunter-gatherers need a productive environment to support gathering without cultivation. Because the forests of New Guinea have few food plants, early humans may have used "selective burning" to increase the productivity of the wild karuka fruit trees to support the hunter-gatherer way of life.
The Gunditjmara and other groups developed eel farming and fish trapping systems from some 5,000 years ago. There is evidence of 'intensification' across the whole continent over that period. In two regions of Australia, the central west coast and eastern central, early farmers cultivated yams, native millet, and bush onions, possibly in permanent settlements.
Revolution
In the Middle Ages, compared to the Roman period, agriculture in Western Europe became more focused on self-sufficiency. The agricultural population under feudalism was typically organized into manors consisting of several hundred or more acres of land presided over by a lord of the manor with a Roman Catholic church and priest.
Thanks to the exchange with the Al-Andalus where the Arab Agricultural Revolution was underway, European agriculture transformed, with improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton and fruit trees (such as the orange).
After 1492, the Columbian exchange brought New World crops such as maize, potatoes, tomatoes, sweet potatoes, and manioc to Europe, and Old World crops such as wheat, barley, rice, and turnips, and livestock (including horses, cattle, sheep and goats) to the Americas.
Irrigation, crop rotation, and fertilizers advanced from the 17th century with the British Agricultural Revolution, allowing global population to rise significantly. Since 1900, agriculture in developed nations, and to a lesser extent in the developing world, has seen large rises in productivity as mechanization replaces human labor, and assisted by synthetic fertilizers, pesticides, and selective breeding. The Haber-Bosch method allowed the synthesis of ammonium nitrate fertilizer on an industrial scale, greatly increasing crop yields and sustaining a further increase in global population.
Modern agriculture has raised or encountered ecological, political, and economic issues including water pollution, biofuels, genetically modified organisms, tariffs and farm subsidies, leading to alternative approaches such as the organic movement. Unsustainable farming practices in North America led to the Dust Bowl of the 1930s.
Types
Pastoralism involves managing domesticated animals. In nomadic pastoralism, herds of livestock are moved from place to place in search of pasture, fodder, and water. This type of farming is practiced in arid and semi-arid regions of the Sahara, Central Asia and some parts of India.
In shifting cultivation, a small area of forest is cleared by cutting and burning the trees. The cleared land is used for growing crops for a few years until the soil becomes too infertile, and the area is abandoned. Another patch of land is selected and the process is repeated. This type of farming is practiced mainly in areas with abundant rainfall where the forest regenerates quickly. This practice is used in Northeast India, Southeast Asia, and the Amazon Basin.
Subsistence farming is practiced to satisfy family or local needs alone, with little left over for transport elsewhere. It is intensively practiced in Monsoon Asia and South-East Asia. An estimated 2.5 billion subsistence farmers worked in 2018, cultivating about 60% of the earth's arable land.
Intensive farming is cultivation to maximize productivity, with a low fallow ratio and a high use of inputs (water, fertilizer, pesticide and automation). It is practiced mainly in developed countries.
Contemporary agriculture
Status
From the twentieth century onwards, intensive agriculture increased crop productivity. It substituted synthetic fertilizers and pesticides for labour, but caused increased water pollution, and often involved farm subsidies. Soil degradation and diseases such as stem rust are major concerns globally; approximately 40% of the world's agricultural land is seriously degraded. In recent years there has been a backlash against the environmental effects of conventional agriculture, resulting in the organic, regenerative, and sustainable agriculture movements. One of the major forces behind this movement has been the European Union, which first certified organic food in 1991 and began reform of its Common Agricultural Policy (CAP) in 2005 to phase out commodity-linked farm subsidies, also known as decoupling. The growth of organic farming has renewed research in alternative technologies such as integrated pest management, selective breeding, and controlled-environment agriculture. There are concerns about the lower yield associated with organic farming and its impact on global food security. Recent mainstream technological developments include genetically modified food.
By 2015, the agricultural output of China was the largest in the world, followed by the European Union, India and the United States. Economists measure the total factor productivity of agriculture, according to which agriculture in the United States is roughly 1.7 times more productive than it was in 1948.
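Total factor productivity is essentially a ratio of an aggregate output index to an aggregate input index, each normalized to a base year. The sketch below illustrates the computation; the index values are hypothetical, chosen only to reproduce the roughly 1.7-fold figure cited above.

```python
# Sketch of a total factor productivity (TFP) calculation: an aggregate
# output index divided by an aggregate input index, each normalized to a
# base year. Index values are hypothetical, chosen to reproduce the
# roughly 1.7-fold growth figure for US agriculture since 1948.

def tfp(output_index: float, input_index: float) -> float:
    return output_index / input_index

base_year = tfp(output_index=100.0, input_index=100.0)    # 1948 = 1.00
recent_year = tfp(output_index=270.0, input_index=158.0)  # hypothetical

print(f"TFP growth factor since 1948: {recent_year / base_year:.2f}")  # ~1.71
```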
Agriculture employed 873 million people in 2021, or 27% of the global workforce, compared with 1,027 million (or 40%) in 2000. The share of agriculture in global GDP remained stable at around 4% from 2000 to 2023.
Despite increases in agricultural production and productivity, between 702 and 828 million people were affected by hunger in 2021. Food insecurity and malnutrition can be the result of conflict, climate extremes and variability and economic swings. It can also be caused by a country's structural characteristics such as income status and natural resource endowments as well as its political economy.
Pesticide use in agriculture went up 62% between 2000 and 2021, with the Americas accounting for half the use in 2021.
The International Fund for Agricultural Development posits that an increase in smallholder agriculture may be part of the solution to concerns about food prices and overall food security, given the favorable experience of Vietnam.
Workforce
Agriculture provides about one-quarter of all global employment, more than half in sub-Saharan Africa and almost 60 percent in low-income countries. As countries develop, other jobs have historically pulled workers away from agriculture, and labor-saving innovations increase agricultural productivity by reducing labor requirements per unit of output. Over time, a combination of labor supply and labor demand trends have driven down the share of population employed in agriculture.
During the 16th century in Europe, between 55 and 75% of the population was engaged in agriculture; by the 19th century, this had dropped to between 35 and 65%. In the same countries today, the figure is less than 10%.
At the start of the 21st century, some one billion people, or over one-third of the available workforce, were employed in agriculture. Agriculture constitutes approximately 70% of the global employment of children, and in many countries employs the largest percentage of women of any industry. The service sector overtook the agricultural sector as the largest global employer in 2007.
In many developed countries, immigrants help fill labor shortages in high-value agriculture activities that are difficult to mechanize. Foreign farm workers from mostly Eastern Europe, North Africa and South Asia constituted around one-third of the salaried agricultural workforce in Spain, Italy, Greece and Portugal in 2013. In the United States of America, more than half of all hired farmworkers (roughly 450,000 workers) were immigrants in 2019, although the number of new immigrants arriving in the country to work in agriculture has fallen by 75 percent in recent years and rising wages indicate this has led to a major labor shortage on U.S. farms.
Women in agriculture
Around the world, women make up a large share of the population employed in agriculture. This share is growing in all developing regions except East and Southeast Asia where women already make up about 50 percent of the agricultural workforce. Women make up 47 percent of the agricultural workforce in sub-Saharan Africa, a rate that has not changed significantly in the past few decades. However, the Food and Agriculture Organization of the United Nations (FAO) posits that the roles and responsibilities of women in agriculture may be changing – for example, from subsistence farming to wage employment, and from contributing household members to primary producers in the context of male-out-migration.
In general, women account for a greater share of agricultural employment at lower levels of economic development, as inadequate education, limited access to basic infrastructure and markets, high unpaid work burden and poor rural employment opportunities outside agriculture severely limit women's opportunities for off-farm work.
Women who work in agricultural production tend to do so under highly unfavorable conditions. They tend to be concentrated in the poorest countries, where alternative livelihoods are not available, and they maintain the intensity of their work in conditions of climate-induced weather shocks and in situations of conflict. Women are less likely to participate as entrepreneurs and independent farmers and are engaged in the production of less lucrative crops.
The gender gap in land productivity between female- and male-managed farms of the same size is 24 percent. On average, women earn 18.4 percent less than men in wage employment in agriculture; this means that women receive 82 cents for every dollar earned by men. Progress has also been slow in closing gaps in women's access to irrigation and in ownership of livestock.
Women in agriculture still have significantly less access than men to inputs, including improved seeds, fertilizers and mechanized equipment. On a positive note, the gender gap in access to mobile internet in low- and middle-income countries fell from 25 percent to 16 percent between 2017 and 2021, and the gender gap in access to bank accounts narrowed from 9 to 6 percentage points. Women are as likely as men to adopt new technologies when the necessary enabling factors are put in place and they have equal access to complementary resources.
Safety
Agriculture, specifically farming, remains a hazardous industry, and farmers worldwide remain at high risk of work-related injuries, lung disease, noise-induced hearing loss, skin diseases, as well as certain cancers related to chemical use and prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery, and a common cause of fatal agricultural injuries in developed countries is tractor rollovers. Pesticides and other chemicals used in farming can be hazardous to worker health, and workers exposed to pesticides may experience illness or have children with birth defects. As an industry in which families commonly share in work and live on the farm itself, entire families can be at risk for injuries, illness, and death. Ages 0–6 may be an especially vulnerable population in agriculture; common causes of fatal injuries among young farm workers include drowning, machinery and motor accidents, including with all-terrain vehicles.
The International Labour Organization considers agriculture "one of the most hazardous of all economic sectors". It estimates that the annual work-related death toll among agricultural employees is at least 170,000, twice the average rate of other jobs. In addition, incidences of death, injury and illness related to agricultural activities often go unreported. The organization has developed the Safety and Health in Agriculture Convention, 2001, which covers the range of risks in the agriculture occupation, the prevention of these risks and the role that individuals and organizations engaged in agriculture should play.
In the United States, agriculture has been identified by the National Institute for Occupational Safety and Health as a priority industry sector in the National Occupational Research Agenda to identify and provide intervention strategies for occupational health and safety issues.
In the European Union, the European Agency for Safety and Health at Work has issued guidelines on implementing health and safety directives in agriculture, livestock farming, horticulture, and forestry. The Agricultural Safety and Health Council of America (ASHCA) also holds a yearly summit to discuss safety.
Production
Overall production varies by country.
Crop cultivation systems
Cropping systems vary among farms depending on the available resources and constraints; geography and climate of the farm; government policy; economic, social and political pressures; and the philosophy and culture of the farmer.
Shifting cultivation (or slash and burn) is a system in which forests are burnt, releasing nutrients to support cultivation of annual and then perennial crops for a period of several years. Then the plot is left fallow to regrow forest, and the farmer moves to a new plot, returning after many more years (10–20). This fallow period is shortened if population density grows, requiring the input of nutrients (fertilizer or manure) and some manual pest control. Annual cultivation is the next phase of intensity in which there is no fallow period. This requires even greater nutrient and pest control inputs.
Further industrialization led to the use of monocultures, when one cultivar is planted on a large acreage. Because of the low biodiversity, nutrient use is uniform and pests tend to build up, necessitating the greater use of pesticides and fertilizers. Multiple cropping, in which several crops are grown sequentially in one year, and intercropping, when several crops are grown at the same time, are other kinds of annual cropping systems known as polycultures.
In subtropical and arid environments, the timing and extent of agriculture may be limited by rainfall, either not allowing multiple annual crops in a year, or requiring irrigation. In all of these environments perennial crops are grown (coffee, chocolate) and systems are practiced such as agroforestry. In temperate environments, where ecosystems were predominantly grassland or prairie, highly productive annual farming is the dominant agricultural system.
Important categories of food crops include cereals, legumes, forage, fruits and vegetables. Natural fibers include cotton, wool, hemp, silk and flax. Specific crops are cultivated in distinct growing regions throughout the world; production is measured in millions of metric tons, based on FAO estimates.
Livestock production systems
Animal husbandry is the breeding and raising of animals for meat, milk, eggs, or wool, and for work and transport. Working animals, including horses, mules, oxen, water buffalo, camels, llamas, alpacas, donkeys, and dogs, have for centuries been used to help cultivate fields, harvest crops, wrangle other animals, and transport farm products to buyers.
Livestock production systems can be defined based on feed source, as grassland-based, mixed, and landless. Some 30% of Earth's ice- and water-free area is used for producing livestock, and the sector employs approximately 1.3 billion people. Between the 1960s and the 2000s, there was a significant increase in livestock production, both by numbers and by carcass weight, especially among beef, pigs and chickens, the last of which had production increased by almost a factor of 10. Non-meat animals, such as milk cows and egg-producing chickens, also showed significant production increases. Global cattle, sheep and goat populations are expected to continue to increase sharply through 2050. Aquaculture or fish farming, the production of fish for human consumption in confined operations, is one of the fastest growing sectors of food production, growing at an average of 9% a year between 1975 and 2007.
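To see what a 9% average annual growth rate implies when compounded over the whole 1975–2007 period, a quick back-of-the-envelope check (actual year-to-year growth of course varied):

```python
# Compound growth implied by an average 9% annual rate over 1975-2007.
annual_growth = 0.09
years = 2007 - 1975  # 32 years

growth_factor = (1 + annual_growth) ** years
print(f"Implied expansion over {years} years: about {growth_factor:.0f}-fold")  # ~16-fold
```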
During the second half of the 20th century, producers using selective breeding focused on creating livestock breeds and crossbreeds that increased production, while mostly disregarding the need to preserve genetic diversity. This trend has led to a significant decrease in genetic diversity and resources among livestock breeds, leading to a corresponding decrease in disease resistance and local adaptations previously found among traditional breeds.
Grassland based livestock production relies upon plant material such as shrubland, rangeland, and pastures for feeding ruminant animals. Outside nutrient inputs may be used, however manure is returned directly to the grassland as a major nutrient source. This system is particularly important in areas where crop production is not feasible because of climate or soil, representing 30–40 million pastoralists. Mixed production systems use grassland, fodder crops and grain feed crops as feed for ruminant and monogastric (one stomach; mainly chickens and pigs) livestock. Manure is typically recycled in mixed systems as a fertilizer for crops.
Landless systems rely upon feed from outside the farm, representing the de-linking of crop and livestock production found more prevalently in Organization for Economic Co-operation and Development member countries. Synthetic fertilizers are more heavily relied upon for crop production and manure use becomes a challenge as well as a source for pollution. Industrialized countries use these operations to produce much of the global supplies of poultry and pork. Scientists estimate that 75% of the growth in livestock production between 2003 and 2030 will be in confined animal feeding operations, sometimes called factory farming. Much of this growth is happening in developing countries in Asia, with much smaller amounts of growth in Africa. Some of the practices used in commercial livestock production, including the usage of growth hormones, are controversial.
Production practices
Tillage is the practice of breaking up the soil with tools such as the plow or harrow to prepare for planting, for nutrient incorporation, or for pest control. Tillage varies in intensity from conventional to no-till. It can improve productivity by warming the soil, incorporating fertilizer and controlling weeds, but also renders soil more prone to erosion, triggers the decomposition of organic matter releasing CO2, and reduces the abundance and diversity of soil organisms.
Pest control includes the management of weeds, insects, mites, and diseases. Chemical (pesticides), biological (biocontrol), mechanical (tillage), and cultural practices are used. Cultural practices include crop rotation, culling, cover crops, intercropping, composting, avoidance, and resistance. Integrated pest management attempts to use all of these methods to keep pest populations below the number which would cause economic loss, and recommends pesticides as a last resort.
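A minimal sketch of the decision logic behind integrated pest management as described above: non-chemical measures are the default, and pesticides enter only as a last resort once scouting shows pest pressure well above an economic threshold. The threshold value and the named controls are hypothetical.

```python
# Minimal sketch of IPM decision logic (hypothetical threshold and controls):
# keep pest populations below the level causing economic loss, and treat
# pesticides as a last resort, consistent with the description above.

ECONOMIC_THRESHOLD = 12.0  # pests per plant at which losses exceed control costs

def recommend_controls(pests_per_plant: float) -> list[str]:
    """Return an escalating list of recommended interventions."""
    if pests_per_plant < ECONOMIC_THRESHOLD:
        # Below threshold: rely on preventive cultural practices only.
        return ["crop rotation", "cover crops", "resistant cultivars"]
    actions = ["release biocontrol agents", "mechanical control (tillage)"]
    if pests_per_plant >= 2 * ECONOMIC_THRESHOLD:
        actions.append("apply selective pesticide (last resort)")
    return actions

print(recommend_controls(5.0))
print(recommend_controls(30.0))
```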
Nutrient management includes both the source of nutrient inputs for crop and livestock production, and the method of use of manure produced by livestock. Nutrient inputs can be chemical inorganic fertilizers, manure, green manure, compost and minerals. Crop nutrient use may also be managed using cultural techniques such as crop rotation or a fallow period. Manure is used either by holding livestock where the feed crop is growing, such as in managed intensive rotational grazing, or by spreading either dry or liquid formulations of manure on cropland or pastures.
Water management is needed where rainfall is insufficient or variable, which occurs to some degree in most regions of the world. Some farmers use irrigation to supplement rainfall. In other areas such as the Great Plains in the U.S. and Canada, farmers use a fallow year to conserve soil moisture for the following year. Recent technological innovations in precision agriculture allow for water status monitoring and automate water usage, leading to more efficient management. Agriculture represents 70% of freshwater use worldwide. However, water withdrawal ratios for agriculture vary significantly by income level. In least developed countries and landlocked developing countries, water withdrawal ratios for agriculture are as high as 90 percent of total water withdrawals and about 60 percent in Small Island Developing States.
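One simple form of the precision-agriculture water management mentioned above is sensor-driven irrigation scheduling: irrigate only when measured soil moisture falls below a crop-specific refill point, and apply only enough water to restore the root zone. The moisture constants and root-zone depth below are illustrative assumptions, not agronomic recommendations.

```python
# Illustrative sensor-driven irrigation trigger (assumed constants, not
# agronomic advice): irrigate only when volumetric soil moisture drops
# below a refill point, applying just enough to restore field capacity.

FIELD_CAPACITY = 0.30   # volumetric moisture the root zone can hold
REFILL_POINT = 0.18     # moisture level below which the crop is stressed
ROOT_ZONE_MM = 300.0    # assumed root-zone depth in millimetres

def irrigation_mm(soil_moisture: float) -> float:
    """Millimetres of water needed to refill the root zone (0 if none)."""
    if soil_moisture >= REFILL_POINT:
        return 0.0
    return (FIELD_CAPACITY - soil_moisture) * ROOT_ZONE_MM

for reading in (0.25, 0.16):
    print(f"moisture {reading:.2f} -> irrigate {irrigation_mm(reading):.0f} mm")
```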
According to a 2014 report by the International Food Policy Research Institute, agricultural technologies will have the greatest impact on food production if adopted in combination with each other. Using a model that assessed how eleven technologies could impact agricultural productivity, food security and trade by 2050, the institute found that the number of people at risk from hunger could be reduced by as much as 40% and food prices could be reduced by almost half.
Payment for ecosystem services is a method of providing additional incentives to encourage farmers to conserve some aspects of the environment. Measures might include paying for reforestation upstream of a city, to improve the supply of fresh water.
Agricultural automation
Different definitions exist for agricultural automation and for the variety of tools and technologies that are used to automate production. One view is that agricultural automation refers to autonomous navigation by robots without human intervention. Alternatively it is defined as the accomplishment of production tasks through mobile, autonomous, decision-making, mechatronic devices. However, FAO finds that these definitions do not capture all the aspects and forms of automation, such as robotic milking machines that are static, most motorized machinery that automates the performing of agricultural operations, and digital tools (e.g., sensors) that automate only diagnosis. FAO defines agricultural automation as the use of machinery and equipment in agricultural operations to improve their diagnosis, decision-making or performing, reducing the drudgery of agricultural work or improving the timeliness, and potentially the precision, of agricultural operations.
The technological evolution in agriculture has involved a progressive move from manual tools to animal traction, to motorized mechanization, to digital equipment and finally, to robotics with artificial intelligence (AI). Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. With digital automation technologies, it also becomes possible to automate diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies. Motorized machines are increasingly complemented, or even superseded, by new digital equipment that automates diagnosis and decision-making. A conventional tractor, for example, can be converted into an automated vehicle allowing it to sow a field autonomously.
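FAO's definition distinguishes three functions that can be automated: diagnosis, decision-making, and the performing of the operation. The schematic sketch below separates the three so the distinction is explicit; a sensor-only product automates just the first function, while a fully autonomous machine chains all three. The categories and readings are hypothetical.

```python
# Schematic sketch of FAO's framing: automation may cover diagnosis,
# decision-making, and/or the performing of an agricultural operation.
# All categories and sensor readings here are hypothetical.

def diagnose(sensor_reading: float) -> str:
    """Diagnosis: turn raw sensor data into a field state."""
    return "dry" if sensor_reading < 0.2 else "adequate"

def decide(state: str) -> str:
    """Decision-making: choose an operation for the diagnosed state."""
    return "irrigate" if state == "dry" else "wait"

def perform(operation: str) -> None:
    """Performing: a stand-in for the machinery that executes the task."""
    print(f"executing: {operation}")

# A sensor-only system automates diagnose(); a fully autonomous
# machine chains all three functions.
perform(decide(diagnose(sensor_reading=0.15)))  # executing: irrigate
```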
Motorized mechanization has increased significantly across the world in recent years, although reliable global data with broad country coverage exist only for tractors and only up to 2009. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades.
Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely mostly in Northern Europe, and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce.
Measuring the overall employment impacts of agricultural automation is difficult because it requires large amounts of data tracking all the transformations and the associated reallocation of workers both upstream and downstream. While automation technologies reduce labor needs for the newly automated tasks, they also generate new labor demand for other tasks, such as equipment maintenance and operation. Agricultural automation can also stimulate employment by allowing producers to expand production and by creating other agrifood systems jobs. This is especially true when it happens in a context of rising scarcity of rural labor, as is the case in high-income countries and many middle-income countries. On the other hand, if promoted forcibly, for example through government subsidies in contexts of abundant rural labor, it can lead to labor displacement and falling or stagnant wages, particularly affecting poor and low-skilled workers.
Effects of climate change on yields
Climate change and agriculture are interrelated on a global scale. Climate change affects agriculture through changes in average temperatures, rainfall, and weather extremes (like storms and heat waves); changes in pests and diseases; changes in atmospheric carbon dioxide and ground-level ozone concentrations; changes in the nutritional quality of some foods; and changes in sea level. Global warming is already affecting agriculture, with effects unevenly distributed across the world.
In a 2022 report, the Intergovernmental Panel on Climate Change describes how human-induced warming has slowed growth of agricultural productivity over the past 50 years in mid and low latitudes. Methane emissions have negatively impacted crop yields by increasing temperatures and surface ozone concentrations. Warming is also negatively affecting crop and grassland quality and harvest stability. Ocean warming has decreased sustainable yields of some wild fish populations while ocean acidification and warming have already affected farmed aquatic species. Climate change will probably increase the risk of food insecurity for some vulnerable groups, such as the poor.
Crop alteration and biotechnology
Plant breeding
Crop alteration has been practiced by humankind for thousands of years, since the beginning of civilization. Altering crops through breeding practices changes the genetic make-up of a plant to develop crops with more beneficial characteristics for humans, for example, larger fruits or seeds, drought-tolerance, or resistance to pests. Significant advances in plant breeding ensued after the work of geneticist Gregor Mendel. His work on dominant and recessive alleles, although initially largely ignored for almost 50 years, gave plant breeders a better understanding of genetics and breeding techniques. Crop breeding includes techniques such as plant selection with desirable traits, self-pollination and cross-pollination, and molecular techniques that genetically modify the organism.
Domestication of plants has, over the centuries, increased yield, improved disease resistance and drought tolerance, eased harvest and improved the taste and nutritional value of crop plants. Careful selection and breeding have had enormous effects on the characteristics of crop plants. Plant selection and breeding in the 1920s and 1930s improved pasture (grasses and clover) in New Zealand. Extensive X-ray and ultraviolet induced mutagenesis efforts (i.e. primitive genetic engineering) during the 1950s produced the modern commercial varieties of grains such as wheat, corn (maize) and barley.
The Green Revolution popularized the use of conventional hybridization to sharply increase yield by creating "high-yielding varieties". For example, average yields of corn (maize) in the US have increased from around 2.5 tons per hectare (t/ha) (40 bushels per acre) in 1900 to about 9.4 t/ha (150 bushels per acre) in 2001. Similarly, worldwide average wheat yields have increased from less than 1 t/ha in 1900 to more than 2.5 t/ha in 1990. South American average wheat yields are around 2 t/ha, African under 1 t/ha, and Egypt and Arabia up to 3.5 to 4 t/ha with irrigation. In contrast, the average wheat yield in countries such as France is over 8 t/ha. Variations in yields are due mainly to variation in climate, genetics, and the level of intensive farming techniques (use of fertilizers, chemical pest control, and growth control to avoid lodging).
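The US maize figures above mix customary and metric units; the conversion is straightforward given that a US bushel of maize is standardized at 56 pounds. A short check, reproducing both the 1900 and 2001 yields:

```python
# Unit check for the maize yield figures above. A US bushel of maize is
# standardized at 56 lb; the rest is metric conversion.

LB_PER_BUSHEL_MAIZE = 56.0
KG_PER_LB = 0.45359237
HA_PER_ACRE = 0.40468564

def bushels_per_acre_to_t_per_ha(bu_per_acre: float) -> float:
    kg_per_acre = bu_per_acre * LB_PER_BUSHEL_MAIZE * KG_PER_LB
    return kg_per_acre / HA_PER_ACRE / 1000.0

print(f"{bushels_per_acre_to_t_per_ha(40):.1f} t/ha")   # ~2.5 t/ha (1900 figure)
print(f"{bushels_per_acre_to_t_per_ha(150):.1f} t/ha")  # ~9.4 t/ha (2001 figure)
```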
Investments into innovation for agriculture are long term. This is because it takes time for research to become commercialized and for technology to be adapted to meet multiple regions’ needs, as well as meet national guidelines before being adopted and planted in a farmer’s fields. For instance, it took at least 60 years from the introduction of hybrid corn technology before its adoption became widespread.
Agricultural innovation developed for the specific agroecological conditions of one region is not easily transferred and used in another region with different agroecological conditions. Instead, the innovation would have to be adapted to the specific conditions of that other region and respect its biodiversity and environmental requirements and guidelines. Some such adaptations can be seen through the steadily increasing number of plant varieties protected under the plant variety protection instrument administered by the International Union for the Protection of New Varieties of Plants (UPOV).
Genetic engineering
Genetically modified organisms (GMO) are organisms whose genetic material has been altered by genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has expanded the genes available to breeders to use in creating desired germlines for new crops. Increased durability, nutritional content, insect and virus resistance and herbicide tolerance are a few of the attributes bred into crops through genetic engineering. For some, GMO crops cause food safety and food labeling concerns. Numerous countries have placed restrictions on the production, import or use of GMO foods and crops. The Biosafety Protocol, an international treaty, regulates the trade of GMOs. There is ongoing discussion regarding the labeling of foods made from GMOs, and while the EU currently requires all GMO foods to be labeled, the US does not.
Herbicide-resistant seeds have a gene implanted into their genome that allows the plants to tolerate exposure to herbicides, including glyphosate. These seeds allow the farmer to grow a crop that can be sprayed with herbicides to control weeds without harming the resistant crop. Herbicide-tolerant crops are used by farmers worldwide. With the increasing use of herbicide-tolerant crops, comes an increase in the use of glyphosate-based herbicide sprays. In some areas glyphosate resistant weeds have developed, causing farmers to switch to other herbicides. Some studies also link widespread glyphosate usage to iron deficiencies in some crops, which is both a crop production and a nutritional quality concern, with potential economic and health implications.
Other GMO crops used by growers include insect-resistant crops, which have a gene from the soil bacterium Bacillus thuringiensis (Bt), which produces a toxin specific to insects. These crops resist damage by insects. Some believe that similar or better pest-resistance traits can be acquired through traditional breeding practices, and resistance to various pests can be gained through hybridization or cross-pollination with wild species. In some cases, wild species are the primary source of resistance traits; some tomato cultivars that have gained resistance to at least 19 diseases did so through crossing with wild populations of tomatoes.
Environmental impact
Effects and costs
Agriculture is both a cause of and sensitive to environmental degradation, such as biodiversity loss, desertification, soil degradation and climate change, which cause decreases in crop yield. Agriculture is one of the most important drivers of environmental pressures, particularly habitat change, climate change, water use and toxic emissions. Agriculture is the main source of toxins released into the environment, including insecticides, especially those used on cotton. The 2011 UNEP Green Economy report stated that agricultural operations produced some 13 per cent of anthropogenic global greenhouse gas emissions. This includes gases from the use of inorganic fertilizers, agro-chemical pesticides, and herbicides, as well as fossil fuel-energy inputs.
Agriculture imposes multiple external costs upon society through effects such as pesticide damage to nature (especially herbicides and insecticides), nutrient runoff, excessive water usage, and loss of natural environment. A 2000 assessment of agriculture in the UK determined total external costs for 1996 of £2,343 million, or £208 per hectare. A 2005 analysis of these costs in the US concluded that cropland imposes approximately $5 to $16 billion ($30 to $96 per hectare), while livestock production imposes $714 million. Both studies, which focused solely on the fiscal impacts, concluded that more should be done to internalize external costs. Neither included subsidies in their analysis, but they noted that subsidies also influence the cost of agriculture to society.
Agriculture seeks to increase yield and to reduce costs, often employing measures that cut biodiversity to very low levels. Yield increases with inputs such as fertilizers and removal of pathogens, predators, and competitors (such as weeds). Costs decrease with increasing scale of farm units, such as making fields larger; this means removing hedges, ditches and other areas of habitat. Pesticides kill insects, plants and fungi. Effective yields fall with on-farm losses, which may be caused by poor production practices during harvesting, handling, and storage.
The environmental effects of climate change mean that research on pests and diseases that do not usually afflict a given region is essential. In 2021, farmers discovered stem rust on wheat in the Champagne area of France, a disease that for the previous 20 to 30 years had occurred only in Morocco. Because of climate change, insects that used to die off over the winter are now surviving and multiplying.
Livestock issues
A senior UN official, Henning Steinfeld, said that "Livestock are one of the most significant contributors to today's most serious environmental problems". Livestock production occupies 70% of all land used for agriculture, or 30% of the land surface of the planet. It is one of the largest sources of greenhouse gases, responsible for 18% of the world's greenhouse gas emissions as measured in CO2 equivalents. By comparison, all transportation emits 13.5% of global CO2. Livestock production generates 65% of human-related nitrous oxide (which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is 23 times as warming as CO2). It also generates 64% of the ammonia emission. Livestock expansion is cited as a key factor driving deforestation; in the Amazon basin 70% of previously forested area is now occupied by pastures and the remainder used for feed crops. Through deforestation and land degradation, livestock is also driving reductions in biodiversity. A well documented phenomenon is woody plant encroachment, caused by overgrazing in rangelands. Furthermore, the United Nations Environment Programme (UNEP) states that "methane emissions from global livestock are projected to increase by 60 per cent by 2030 under current practices and consumption patterns."
Land and water issues
Land transformation, the use of land to yield goods and services, is the most substantial way humans alter the Earth's ecosystems, and is the driving force causing biodiversity loss. Estimates of the amount of land transformed by humans vary from 39 to 50%. It is estimated that 24% of land globally experiences land degradation, a long-term decline in ecosystem function and productivity, with cropland being disproportionately affected. Land management is the driving factor behind degradation; 1.5 billion people rely upon the degrading land. Degradation can be through deforestation, desertification, soil erosion, mineral depletion, acidification, or salinization. In 2021, the global agricultural land area was 4.79 billion hectares (ha), down 2 percent, or 0.09 billion ha compared with 2000. Between 2000 and 2021, roughly two-thirds of agricultural land were used for permanent meadows and pastures (3.21 billion ha in 2021), which declined by 5 percent (0.17 billion ha). One-third of the total agricultural land was cropland (1.58 billion ha in 2021), which increased by 6 percent (0.09 billion ha).
Eutrophication, excessive nutrient enrichment in aquatic ecosystems resulting in algal blooms and anoxia, leads to fish kills, loss of biodiversity, and renders water unfit for drinking and other industrial uses. Excessive fertilization and manure application to cropland, as well as high livestock stocking densities cause nutrient (mainly nitrogen and phosphorus) runoff and leaching from agricultural land. These nutrients are major nonpoint pollutants contributing to eutrophication of aquatic ecosystems and pollution of groundwater, with harmful effects on human populations. Fertilizers also reduce terrestrial biodiversity by increasing competition for light, favoring those species that are able to benefit from the added nutrients.
Agriculture simultaneously faces growing freshwater demand and precipitation anomalies (droughts, floods, and extreme rainfall and weather events) on rainfed fields and grazing lands. Agriculture accounts for 70 percent of withdrawals of freshwater resources, and an estimated 41 percent of current global irrigation water use occurs at the expense of environmental flow requirements. It is long known that aquifers in areas as diverse as northern China, the Upper Ganges and the western US are being depleted, and new research extends these problems to aquifers in Iran, Mexico and Saudi Arabia. Increasing pressure is being placed on water resources by industry and urban areas, meaning that water scarcity is increasing and agriculture is facing the challenge of producing more food for the world's growing population with reduced water resources. While industrial withdrawals have declined in the past few decades and municipal withdrawals have increased only marginally since 2010, agricultural withdrawals have continued to grow at an ever faster pace. Agricultural water usage can also cause major environmental problems, including the destruction of natural wetlands, the spread of water-borne diseases, and land degradation through salinization and waterlogging, when irrigation is performed incorrectly.
Pesticides
Pesticide use has increased since 1950 to 2.5 million short tons annually worldwide, yet crop loss from pests has remained relatively constant. The World Health Organization estimated in 1992 that three million pesticide poisonings occur annually, causing 220,000 deaths. Pesticides select for pesticide resistance in the pest population, leading to a condition termed the "pesticide treadmill" in which pest resistance warrants the development of a new pesticide.
An alternative argument is that the way to "save the environment" and prevent famine is by using pesticides and intensive high yield farming, a view exemplified by a quote heading the Center for Global Food Issues website: 'Growing more per acre leaves more land for nature'. However, critics argue that a trade-off between the environment and a need for food is not inevitable, and that pesticides can replace good agronomic practices such as crop rotation. The Push–pull agricultural pest management technique involves intercropping, using plant aromas to repel pests from crops (push) and to lure them to a place from which they can then be removed (pull).
Contribution to climate change
Agriculture contributes to climate change through greenhouse gas emissions and through the conversion of non-agricultural land such as forests into agricultural land. The agriculture, forestry and land use sector contributes between 13% and 21% of global greenhouse gas emissions. Emissions of nitrous oxide and methane make up over half of total greenhouse gas emissions from agriculture. Animal husbandry is a major source of greenhouse gas emissions.
Approximately 57% of global GHG emissions from the production of food are from the production of animal-based food while plant-based foods contribute 29% and the remaining 14% is for other utilizations. Farmland management and land-use change represented major shares of total emissions (38% and 29%, respectively), whereas rice and beef were the largest contributing plant- and animal-based commodities (12% and 25%, respectively). South and Southeast Asia and South America were the largest emitters of production-based GHGs.
Sustainability
Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. There is not enough water to continue farming using current practices; therefore how water, land, and ecosystem resources are used to boost crop yields must be reconsidered. A solution would be to give value to ecosystems, recognizing environmental and livelihood tradeoffs, and balancing the rights of a variety of users and interests. Inequities that result when such measures are adopted would need to be addressed, such as the reallocation of water from poor to rich, the clearing of land to make way for more productive farmland, or the preservation of a wetland system that limits fishing rights.
Technological advancements help provide farmers with tools and resources to make farming more sustainable. Technology permits innovations like conservation tillage, a farming process which helps prevent land loss to erosion, reduces water pollution, and enhances carbon sequestration.
Agricultural automation can help address some of the challenges associated with climate change and thus facilitate adaptation efforts. For example, the application of digital automation technologies (e.g. in precision agriculture) can improve resource-use efficiency in conditions which are increasingly constrained for agricultural producers. Moreover, when applied to sensing and early warning, they can help address the uncertainty and unpredictability of weather conditions associated with accelerating climate change.
Other potential sustainable practices include conservation agriculture, agroforestry, improved grazing, avoided grassland conversion, and biochar. Current mono-crop farming practices in the United States preclude widespread adoption of sustainable practices, such as 2–3 crop rotations that incorporate grass or hay with annual crops, unless negative emission goals such as soil carbon sequestration become policy.
The food demand of Earth's projected population, with current climate change predictions, could be satisfied by improvement of agricultural methods, expansion of agricultural areas, and a sustainability-oriented consumer mindset.
Energy dependence
Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of energy-intensive mechanization, fertilizers and pesticides. The vast majority of this energy input comes from fossil fuel sources. Between the 1960s and the 1980s, the Green Revolution transformed agriculture around the globe, with world grain production increasing significantly (between 70% and 390% for wheat and 60% to 150% for rice, depending on geographic area) as world population doubled. Heavy reliance on petrochemicals has raised concerns that oil shortages could increase costs and reduce agricultural output.
Industrialized agriculture depends on fossil fuels in two fundamental ways: direct consumption on the farm and manufacture of inputs used on the farm. Direct consumption includes the use of lubricants and fuels to operate farm vehicles and machinery.
Indirect consumption includes the manufacture of fertilizers, pesticides, and farm machinery. In particular, the production of nitrogen fertilizer can account for over half of agricultural energy usage. Together, direct and indirect consumption by US farms accounts for about 2% of the nation's energy use. Direct and indirect energy consumption by U.S. farms peaked in 1979, and has since gradually declined. Food systems encompass not just agriculture but off-farm processing, packaging, transporting, marketing, consumption, and disposal of food and food-related items. Agriculture accounts for less than one-fifth of food system energy use in the US.
Plastic pollution
Plastic products are used extensively in agriculture, including to increase crop yields and improve the efficiency of water and agrichemical use. "Agriplastic" products include films to cover greenhouses and tunnels, mulch to cover soil (e.g. to suppress weeds, conserve water, increase soil temperature and aid fertilizer application), shade cloth, pesticide containers, seedling trays, protective mesh and irrigation tubing. The polymers most commonly used in these products are low-density polyethylene (LDPE), linear low-density polyethylene (LLDPE), polypropylene (PP) and polyvinyl chloride (PVC).
The total amount of plastics used in agriculture is difficult to quantify. A 2012 study reported that almost 6.5 million tonnes per year were consumed globally while a later study estimated that global demand in 2015 was between 7.3 million and 9 million tonnes. Widespread use of plastic mulch and lack of systematic collection and management have led to the generation of large amounts of mulch residue. Weathering and degradation eventually cause the mulch to fragment. These fragments and larger pieces of plastic accumulate in soil. Mulch residue has been measured at levels of 50 to 260 kg per hectare in topsoil in areas where mulch use dates back more than 10 years, which confirms that mulching is a major source of both microplastic and macroplastic soil contamination.
Agricultural plastics, especially plastic films, are not easy to recycle because of high contamination levels (up to 40–50% by weight contamination by pesticides, fertilizers, soil and debris, moist vegetation, silage juice water, and UV stabilizers) and collection difficulties. Therefore, they are often buried or abandoned in fields and watercourses or burned. These disposal practices lead to soil degradation and can result in contamination of soils and leakage of microplastics into the marine environment as a result of precipitation run-off and tidal washing. In addition, additives in residual plastic film (such as UV and thermal stabilizers) may have deleterious effects on crop growth, soil structure, nutrient transport and salt levels. There is a risk that plastic mulch will deteriorate soil quality, deplete soil organic matter stocks, increase soil water repellence and emit greenhouse gases. Microplastics released through fragmentation of agricultural plastics can absorb and concentrate contaminants capable of being passed up the trophic chain.
Disciplines
Agricultural economics
Agricultural economics is economics as it relates to the "production, distribution and consumption of [agricultural] goods and services". Combining agricultural production with general theories of marketing and business as a discipline of study began in the late 1800s, and grew significantly through the 20th century. Although the study of agricultural economics is relatively recent, major trends in agriculture have significantly affected national and international economies throughout history, ranging from tenant farmers and sharecropping in the post-American Civil War Southern United States to the European feudal system of manorialism. In the United States, and elsewhere, food costs attributed to food processing, distribution, and agricultural marketing, sometimes referred to as the value chain, have risen while the costs attributed to farming have declined. This is related to the greater efficiency of farming, combined with the increased level of value addition (e.g. more highly processed products) provided by the supply chain. Market concentration has increased in the sector as well, and although the total effect of the increased market concentration is likely increased efficiency, the changes redistribute economic surplus from producers (farmers) and consumers, and may have negative implications for rural communities.
National government policies, such as taxation, subsidies, tariffs and others, can significantly change the economic marketplace for agricultural products. Since at least the 1960s, a combination of trade restrictions, exchange rate policies and subsidies have affected farmers in both the developing and the developed world. In the 1980s, non-subsidized farmers in developing countries experienced adverse effects from national policies that created artificially low global prices for farm products. Between the mid-1980s and the early 2000s, several international agreements limited agricultural tariffs, subsidies and other trade restrictions.
However, there was still a significant amount of policy-driven distortion in global agricultural product prices. The three agricultural products with the most trade distortion were sugar, milk and rice, mainly due to taxation. Among the oilseeds, sesame had the most taxation, but overall, feed grains and oilseeds had much lower levels of taxation than livestock products. Since the 1980s, policy-driven distortions have decreased more among livestock products than crops during the worldwide reforms in agricultural policy. Despite this progress, certain crops, such as cotton, still see subsidies in developed countries artificially deflating global prices, causing hardship in developing countries with non-subsidized farmers. Unprocessed commodities such as corn, soybeans, and cattle are generally graded to indicate quality, affecting the price the producer receives. Commodities are generally reported by production quantities, such as volume, number or weight.
Agricultural science
Agricultural science is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences used in the practice and understanding of agriculture. It covers topics such as agronomy, plant breeding and genetics, plant pathology, crop modelling, soil science, entomology, production techniques and improvement, study of pests and their management, and study of adverse environmental effects such as soil degradation, waste management, and bioremediation.
The scientific study of agriculture began in the 18th century, when Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulphate) as a fertilizer. Research became more systematic when in 1843, John Lawes and Henry Gilbert began a set of long-term agronomy field experiments at Rothamsted Research Station in England; some of them, such as the Park Grass Experiment, are still running. In America, the Hatch Act of 1887 provided funding for what it was the first to call "agricultural science", driven by farmers' interest in fertilizers. In agricultural entomology, the USDA began to research biological control in 1881; it instituted its first large program in 1905, searching Europe and Japan for natural enemies of the spongy moth and brown-tail moth, establishing parasitoids (such as solitary wasps) and predators of both pests in the US.
Policy
Agricultural policy is the set of government decisions and actions relating to domestic agriculture and imports of foreign agricultural products. Governments usually implement agricultural policies with the goal of achieving a specific outcome in the domestic agricultural product markets. Some overarching themes include risk management and adjustment (including policies related to climate change, food safety and natural disasters), economic stability (including policies related to taxes), natural resources and environmental sustainability (especially water policy), research and development, and market access for domestic commodities (including relations with global organizations and agreements with other countries). Agricultural policy can also touch on food quality, ensuring that the food supply is of a consistent and known quality, food security, ensuring that the food supply meets the population's needs, and conservation. Policy programs can range from financial measures, such as subsidies, to encouragement for producers to enroll in voluntary quality assurance programs.
A 2021 report found that, globally, support to agricultural producers accounts for almost US$540 billion a year. This amounts to 15 percent of total agricultural production value and is heavily biased towards measures that lead to inefficiency, are unequally distributed, and are harmful to the environment and human health.
There are many influences on the creation of agricultural policy, including consumers, agribusiness, trade lobbies and other groups. Agribusiness interests hold a large amount of influence over policy making, in the form of lobbying and campaign contributions. Political action groups, including those interested in environmental issues and labor unions, also exert influence, as do lobbying organizations representing individual agricultural commodities. The Food and Agriculture Organization of the United Nations (FAO) leads international efforts to defeat hunger and provides a forum for the negotiation of global agricultural regulations and agreements. Samuel Jutzi, director of FAO's animal production and health division, states that lobbying by large corporations has stopped reforms that would improve human health and the environment. For example, proposals in 2010 for a voluntary code of conduct for the livestock industry that would have provided incentives for improving standards of health and environmental regulation, such as the number of animals an area of land can support without long-term damage, were defeated due to pressure from large food companies.
See also
Aeroponics
Agricultural aircraft
Agricultural engineering
Agricultural finance
Agricultural machinery
Agricultural robot
Agroecology
Agribusiness
Agrominerals
Building-integrated agriculture
Contract farming
Corporate farming
Crofting
Ecoagriculture
Farmworker
Food loss and waste
Food security
Hill farming
List of documentary films about agriculture
Pharming (genetics)
Remote sensing
Rural development
Soil biodiversity
Subsistence economy
Sustainable agriculture
Urban agriculture
Vertical farming
Vegetable farming
References
Cited sources
External links
Food and Agriculture Organization
United States Department of Agriculture
Agriculture material from the World Bank Group
Agronomy
Food industry
Autotroph
An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds, which can be used by other organisms. Autotrophs produce complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light or inorganic chemical reactions. Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide.
The primary producers can convert the energy in light (phototrophs and photoautotrophs) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which are usually accumulated in the form of biomass and used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms are called chemoautotrophs and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level and are the reason why Earth sustains life to this day.
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous iron as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP+ to NADPH to form organic compounds.
History
The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. It stems from the ancient Greek word τροφή (trophḗ), meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated during Earth's Great Oxidation Event, as the rate of oxygenic photosynthesis by cyanobacteria increased. Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis. The earliest photosynthetic bacteria used hydrogen sulfide. Due to the scarcity of hydrogen sulfide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria.
Variants
Some organisms rely on organic compounds as a source of carbon, but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs. An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph, while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph.
Evidence suggests that some fungi may also obtain energy from ionizing radiation: Such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant.
Examples
There are many different types of autotrophs in Earth's ecosystems. Lichens located in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or additionally nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus. Plant-like primary producers such as trees and algae use sunlight as their energy source and release oxygen into the air for other organisms. Aquatic primary producers include photosynthetic bacteria and phytoplankton. Among the many examples of primary producers, two dominant marine types are coral and kelp, one of the many brown algae.
Photosynthesis
Gross primary production occurs by photosynthesis, which is the main way that primary producers capture energy and make it available to the rest of the ecosystem. Plants, coral, bacteria, and algae all perform it. During photosynthesis, primary producers use energy from the sun to convert carbon dioxide and water into sugar and oxygen. They also require nutrients, such as nitrogen, to carry out this conversion and build biomass.
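For reference, the overall balanced reaction of oxygenic photosynthesis (a standard textbook summary, not an equation stated in this article) is:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]

Six molecules of carbon dioxide and six of water yield one molecule of glucose and six molecules of oxygen.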
Ecology
Without primary producers, organisms that are capable of producing their own food from abiotic sources of energy, the biological systems of Earth would be unable to sustain themselves. Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. It is thought that the first organisms on Earth were primary producers located on the ocean floor.
Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production. Other organisms, called heterotrophs, take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals, almost all fungi, as well as most bacteria and protozoa – depend on autotrophs, or primary producers, for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed.
Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun. Plants can only use a fraction (approximately 1%) of this energy for photosynthesis. The process of photosynthesis splits a water molecule (H2O), releasing oxygen (O2) into the atmosphere, and reducing carbon dioxide (CO2) to release the hydrogen atoms that fuel the metabolic process of primary production. Plants convert and store the energy of the photon into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates, including other sugars, starch, and cellulose; glucose is also used to make fats and proteins. When autotrophs are eaten by heterotrophs, i.e., consumers such as animals, the carbohydrates, fats, and proteins contained in them become energy sources for the heterotrophs. Proteins can be made using nitrates, sulfates, and phosphates in the soil.
Primary production in tropical streams and rivers
Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is reflected in net primary production, a fundamental ecological process that expresses the amount of carbon synthesized within an ecosystem, carbon that ultimately becomes available to consumers. Measured rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems.
Origin of autotrophs
Researchers believe that the first cellular lifeforms were not heterotrophs, which would have had to rely upon autotrophs, since organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep-sea alkaline hydrothermal vents. Catalytic Fe(Ni)S minerals in these environments are shown to catalyze biomolecules like RNA. This view is supported by phylogenetic evidence, as the physiology and habitat of the last universal common ancestor (LUCA) were inferred to be those of a thermophilic anaerobe with a Wood-Ljungdahl pathway; its biochemistry was replete with FeS clusters and radical reaction mechanisms, and it was dependent upon Fe, H2, and CO2. The high concentration of K+ present within the cytosol of most life forms suggests that early cellular life had Na+/H+ antiporters or possibly symporters. Autotrophs possibly evolved into heterotrophs when they were at low H2 partial pressures, where the first forms of heterotrophy were likely amino acid and clostridial-type purine fermentations, and photosynthesis emerged in the presence of long-wavelength geothermal light emitted by hydrothermal vents. The first photochemically active pigments are inferred to be Zn-tetrapyrroles.
See also
Electrolithoautotroph
Electrotroph
Heterotrophic nutrition
Organotroph
Primary nutritional groups
References
External links
Trophic ecology
Microbial growth and nutrition
Biology terminology
Plant nutrition
Geology
Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science.
Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole; one notable result is the demonstration of the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates.
Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.
Geological material
The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.
Minerals
Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement.
Each mineral has distinct physical properties, and there are many tests to determine them. Minerals are often identified by combining the results of several such tests; a toy identification sketch follows the list below. The specimens can be tested for:
Color: Minerals are grouped by their color, which is often diagnostic, though impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help identify the mineral.
Hardness: The resistance of a mineral to scratching or indentation.
Breakage pattern: A mineral can either show fracture or cleavage, the former being breakage of uneven surfaces, and the latter a breakage along closely spaced parallel planes.
Luster: Quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, dull.
Specific gravity: the ratio of the weight of a mineral to the weight of an equal volume of water.
Effervescence: Involves dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste such as halite (which tastes like table salt).
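To illustrate how several of these diagnostic properties combine in practice, here is a minimal rule-based identification sketch in Python. The reference table is a hypothetical, heavily simplified subset of standard mineralogy values, not data from this article.

```python
# Toy mineral identification from a few diagnostic tests.
# Property values are simplified from standard references; real
# identification weighs many more tests, varieties, and exceptions.
MINERALS = [
    {"name": "quartz",    "hardness": 7.0, "streak": "white", "effervesces": False, "magnetic": False},
    {"name": "calcite",   "hardness": 3.0, "streak": "white", "effervesces": True,  "magnetic": False},
    {"name": "magnetite", "hardness": 6.0, "streak": "black", "effervesces": False, "magnetic": True},
    {"name": "halite",    "hardness": 2.5, "streak": "white", "effervesces": False, "magnetic": False},
]

def identify(hardness, streak, effervesces, magnetic, tol=0.5):
    """Return the reference minerals consistent with the observed tests."""
    return [
        m["name"] for m in MINERALS
        if abs(m["hardness"] - hardness) <= tol
        and m["streak"] == streak
        and m["effervesces"] == effervesces
        and m["magnetic"] == magnetic
    ]

# A specimen with hardness near 3, a white streak, and fizzing in dilute HCl:
print(identify(hardness=3.0, streak="white", effervesces=True, magnetic=False))
# -> ['calcite']
```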
Rock
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them (see diagram).
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. Sedimentary rocks are mainly divided into four categories: sandstone, shale, carbonate, and evaporite. This group of classifications focuses partly on the size of sedimentary particles (sandstone and shale), and partly on mineralogy and formation processes (carbonation and evaporation). Igneous and sedimentary rocks can then be turned into metamorphic rocks by heat and pressure that change its mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify.
Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks.
To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.
Unlithified material
Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock. This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.
Magma
Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.
Whole-Earth structure
Plate tectonics
In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity.
There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic parts of plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.
The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries:
Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.
Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another.
Transform boundaries, such as the San Andreas Fault system, are where plates slide horizontally past each other.
Plate tectonics has provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.
Earth structure
Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.
Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a lithosphere (including crust) on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
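To make the idea concrete, the sketch below computes a one-way vertical P-wave travel time through a crude three-layer Earth. The layer thicknesses and velocities are rough illustrative averages chosen for this example, not figures from the article; comparing predictions of this kind against observed arrival times is, in essence, how seismologists refine models of the interior.

```python
# One-way vertical P-wave travel time through a simplified layered Earth.
# Layer thicknesses (km) and P-wave speeds (km/s) are rough averages.
layers = [
    ("crust",         35.0,  6.5),
    ("upper mantle",  625.0,  9.0),
    ("lower mantle", 2230.0, 12.5),
]

total_s = sum(thickness / speed for _, thickness, speed in layers)
print(f"Vertical travel time to the core-mantle boundary: {total_s:.0f} s")
# Roughly four minutes with these illustrative values.
```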
Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.
Geological time
The geological time scale encompasses the history of the Earth. It is bracketed at the earliest by the date of the first Solar System material at 4.567 Ga (4.567 billion years ago) and the formation of the Earth at 4.54 Ga (4.54 billion years ago), which is the beginning of the Hadean eon, a division of geological time. At the later end of the scale, it is marked by the present day (in the Holocene epoch).
Timescale of the Earth
Important milestones on Earth
4.567 Ga (gigaannum: billion years ago): Solar system formation
4.54 Ga: Accretion, or formation, of Earth
c. 4 Ga: End of Late Heavy Bombardment, the first life
c. 3.5 Ga: Start of photosynthesis
c. 2.3 Ga: Oxygenated atmosphere, first snowball Earth
730–635 Ma (megaannum: million years ago): second snowball Earth
541 ± 0.3 Ma: Cambrian explosion – vast multiplication of hard-bodied life; first abundant fossils; start of the Paleozoic
c. 380 Ma: First vertebrate land animals
250 Ma: Permian-Triassic extinction – 90% of all land animals die; end of Paleozoic and beginning of Mesozoic
66 Ma: Cretaceous–Paleogene extinction – Dinosaurs die; end of Mesozoic and beginning of Cenozoic
c. 7 Ma: First hominins appear
3.9 Ma: First Australopithecus, direct ancestor to modern Homo sapiens, appear
200 ka (kiloannum: thousand years ago): First modern Homo sapiens appear in East Africa
Dating methods
Relative dating
Methods for relative dating were developed when geology first emerged as a natural science. Geologists still use the following principles today as a means to provide information about geological history and the timing of geological events.
The principle of uniformitarianism states that the geological processes observed in operation that modify the Earth's crust at present have worked in much the same way over geological time. A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now."
The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. Different types of intrusions include stocks, laccoliths, batholiths, sills and dikes.
The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault.
The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock that contains them.
The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal).
The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of the vertical timeline, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed.
The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist during the same period throughout the world, their presence or (sometimes) absence provides a relative age of the formations where they appear. Based on principles that William Smith laid out almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils formed globally at the same time.
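Taken together, these principles let geologists order events even when no absolute dates are available. Conceptually this amounts to a topological sort: each observation states that one unit is younger than another, and a consistent ordering is derived from the whole set. The sketch below shows the idea with hypothetical unit names, not an example taken from the article.

```python
# Relative dating as a topological sort over "younger than" observations.
from graphlib import TopologicalSorter

# Each key is younger than every unit in its set, per the principles above.
younger_than = {
    "dike":      {"sandstone", "shale"},  # cross-cutting: the dike cuts both
    "sandstone": {"shale"},               # superposition: sandstone overlies shale
    "shale":     {"basement"},            # inclusions: shale carries basement clasts
}

# static_order() yields predecessors first, i.e. oldest to youngest here.
print(list(TopologicalSorter(younger_than).static_order()))
# -> ['basement', 'shale', 'sandstone', 'dike']
```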
Absolute dating
Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods.
At the beginning of the 20th century, advancement in geological science was facilitated by the ability to obtain accurate absolute dates to geological events using radioactive isotopes and other methods. This changed the understanding of geological time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages.
For many geological applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice. These are used in geochronologic and thermochronologic studies. Common methods include uranium–lead dating, potassium–argon dating, argon–argon dating and uranium–thorium dating. These methods are used for a variety of applications. Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes and calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement.
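The arithmetic behind these decay clocks is compact. Assuming a closed system with no initial daughter isotope, the age follows from the measured daughter-to-parent ratio D/P as t = ln(1 + D/P) / λ, where λ is the decay constant. The sketch below applies this generic relation; the Rb–Sr example values are standard reference numbers, not data from the article.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age in years from t = ln(1 + D/P) / lambda, assuming no initial
    daughter isotope and a closed system since the closure temperature."""
    decay_const = math.log(2) / half_life_years  # lambda, per year
    return math.log(1.0 + daughter_parent_ratio) / decay_const

# Rb-87 -> Sr-87 (half-life ~48.8 billion years): a radiogenic-Sr to
# remaining-Rb ratio of 0.01 corresponds to roughly 700 million years.
print(f"{radiometric_age(0.01, 48.8e9):.2e} years")
```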
Thermochemical techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleo-topography.
Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle.
Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and/or erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon.
Geological development of an area
The geology of an area changes through time as rock units are deposited and inserted, and deformational processes alter their shapes and locations.
Rock units are first emplaced either by deposition onto the surface or intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude.
After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.
When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes the deeper rock to move on top of the shallower rock. Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold instead of faulting. These folds can either be those where the material in the center of the fold buckles upwards, creating "antiforms", or where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms, and synforms.
Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks and creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks.
Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through ductile stretching and thinning. Normal faults drop rock units that were higher down past those that were lower, typically resulting in younger units ending up below older units. Stretching of units can result in their thinning. In fact, at one location within the Maria Fold and Thrust Belt, the entire sedimentary sequence of the Grand Canyon appears over a length of less than a meter. Rocks deep enough to be ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage" because of their visual similarity.
Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely.
The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian shield, or rings of dikes around the lava tube of a volcano.
All of these processes do not necessarily occur in a single environment and do not necessarily occur in a single order. The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited. Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.
Investigative methods
Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and to understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface. Sub-specialities of geology may distinguish endogenous and exogenous geology.
Field methods
Geological field work varies depending on the task at hand. Typical fieldwork could consist of:
Geological mapping
Structural mapping: identifying the locations of major rock units and the faults and folds that led to their placement there.
Stratigraphic mapping: pinpointing the locations of sedimentary facies (lithofacies and biofacies) or the mapping of isopachs of equal thickness of sedimentary rock
Surficial mapping: recording the locations of soils and surficial deposits
Surveying of topographic features
compilation of topographic maps
Work to understand change across landscapes, including:
Patterns of erosion and deposition
River-channel change through migration and avulsion
Hillslope processes
Subsurface mapping through geophysical methods
These methods include:
Shallow seismic surveys
Ground-penetrating radar
Aeromagnetic surveys
Electrical resistivity tomography
They aid in:
Hydrocarbon exploration
Finding groundwater
Locating buried archaeological artifacts
High-resolution stratigraphy
Measuring and describing stratigraphic sections on the surface
Well drilling and logging
Biogeochemistry and geomicrobiology
Collecting samples to:
determine biochemical pathways
identify new species of organisms
identify new chemical compounds
and to use these discoveries to:
understand early life on Earth and how it functioned and metabolized
find important compounds for use in pharmaceuticals
Paleontology: excavation of fossil material
For research into past life and evolution
For museums and education
Collection of samples for geochronology and thermochronology
Glaciology: measurement of characteristics of glaciers and their motion
Petrology
In addition to identifying rocks in the field (lithology), petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are through optical microscopy and by using an electron microprobe. In an optical mineralogy analysis, petrologists analyze thin sections of rock samples using a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens. In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals. Stable and radioactive isotope studies provide insight into the geochemical evolution of rock units.
Petrologists can also use fluid inclusion data and perform high temperature and pressure physical experiments to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks. This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.
Structural geology
Structural geologists use microscopic analysis of oriented thin sections of geological samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings.
The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets. A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geological structures.
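As a worked example of the projection itself, the sketch below plots a linear feature (given as trend and plunge) on a lower-hemisphere equal-angle (Wulff) stereonet using the standard convention; the fold-axis values are hypothetical.

```python
import math

def wulff_point(trend_deg, plunge_deg, radius=1.0):
    """Project a line (trend/plunge) onto a lower-hemisphere equal-angle
    stereonet. Returns (x, y) with north up and east to the right."""
    t = math.radians(trend_deg)
    p = math.radians(plunge_deg)
    # A vertical line (plunge 90) plots at the center; a horizontal line
    # (plunge 0) plots on the primitive circle.
    r = radius * math.tan(math.pi / 4 - p / 2)
    return (r * math.sin(t), r * math.cos(t))

# A fold axis plunging 30 degrees toward azimuth 110:
print(wulff_point(110, 30))  # -> (0.542..., -0.197...)
```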
Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries. In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge. Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt. This helps to show the relationship between erosion and the shape of a mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time.
Stratigraphy
In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores. Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface. Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions. Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth, interpret past environments, and locate areas for water, coal, and hydrocarbon extraction.
In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them. These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition.
Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate.
Planetary geology
With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geological principles to study other bodies of the solar system. This is a major aspect of planetary science, and largely focuses on the terrestrial planets, icy moons, asteroids, comets, and meteorites. However, some planetary geophysicists study the giant planets and exoplanets.
Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialized terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use.
Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.
Applied geology
Economic geology
Economic geology is a branch of geology that deals with aspects of economic minerals that humankind uses to fulfill various needs. Economic minerals are those extracted profitably for various practical uses. Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.
Mining geology
Mining geology consists of the extraction of mineral and ore resources from the Earth. Some resources of economic interest include gemstones, metals such as gold and copper, and many minerals such as asbestos, magnesite, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.
Petroleum geology
Petroleum geologists study the locations of the subsurface of the Earth that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins, they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.
Engineering geology
Engineering geology is the application of geological principles to engineering practice for the purpose of assuring that the geological factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed. Engineering geology is distinct from geological engineering, particularly in North America.
In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical principles of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.
Hydrology
Geology and geological principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geological environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater, which can often provide a ready supply of uncontaminated water and is especially important in arid regions, and to monitor the spread of contaminants in groundwater wells.
Paleoclimatology
Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores and sediment cores are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.
Natural hazards
Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life. Examples of important natural hazards that are pertinent to geology (as opposed to those that are mainly or only pertinent to meteorology) include earthquakes, landslides, tsunamis, and volcanic eruptions.
History
The study of the physical material of the Earth dates back at least to ancient Greece, when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones). During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals then in practical use – even correctly noting the origin of amber. Additionally, in the 4th century BCE Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and theorized that the Earth changes at a rate too slow to be observed during one person's lifetime, one of the first evidence-based concepts connected to the geological realm.
Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Persian geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Drawing from Greek and Indian scientific literature that were not destroyed by the Muslim conquests, the Persian scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science. In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by the erosion of the mountains and by deposition of silt.
Georgius Agricola (1494–1555) published his groundbreaking work De Natura Fossilium in 1546 and is seen as the founder of geology as a scientific discipline.
Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy.
The word geology was first used by Ulisse Aldrovandi in 1603, then by Jean-André Deluc in 1778 and introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth" and λόγος, logos, meaning "speech". But according to another source, the word "geology" comes from a Norwegian, Mikkel Pedersøn Escholt (1600–1669), who was a priest and scholar. Escholt first used the definition in his book titled, Geologia Norvegica (1657).
William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them.
In 1763, Mikhail Lomonosov published his treatise On the Strata of Earth. His work was the first narrative of modern geology, based on the unity of processes in time and explanation of the Earth's past from the present.
James Hutton (1726–1797) is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795.
Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time.
The first geological map of the U.S. was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks.
Sir Charles Lyell (1797–1875) first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time.
Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years. By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet.
Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old.
Fields or related disciplines
Earth system science
Economic geology
Mining geology
Petroleum geology
Engineering geology
Environmental geology
Environmental science
Geoarchaeology
Geochemistry
Biogeochemistry
Isotope geochemistry
Geochronology
Geodetics
Geography
Physical geography
Technical geography
Geological engineering
Geological modelling
Geometallurgy
Geomicrobiology
Geomorphology
Geomythology
Geophysics
Glaciology
Historical geology
Hydrogeology
Meteorology
Mineralogy
Oceanography
Marine geology
Paleoclimatology
Paleontology
Micropaleontology
Palynology
Petrology
Petrophysics
Planetary geology
Plate tectonics
Regional geology
Sedimentology
Seismology
Soil science
Pedology (soil study)
Speleology
Stratigraphy
Biostratigraphy
Chronostratigraphy
Lithostratigraphy
Structural geology
Systems geology
Tectonics
Volcanology
See also
List of individual rocks
References
External links
One Geology: This interactive geological map of the world is an international initiative of the geological surveys around the globe. This groundbreaking project was launched in 2007 and contributed to the 'International Year of Planet Earth', becoming one of their flagship projects.
Earth Science News, Maps, Dictionary, Articles, Jobs
American Geophysical Union
American Geosciences Institute
European Geosciences Union
European Federation of Geologists
Geological Society of America
Geological Society of London
Video-interviews with famous geologists
Geology OpenTextbook
Chronostratigraphy benchmarks
The principles and objects of geology, with special reference to the geology of Egypt (1911), W. F. Hume
Survival skills
Survival skills are techniques used to sustain life in any type of natural environment or built environment. These techniques are meant to provide basic necessities for human life, including water, food, and shelter. Survival skills also support proper knowledge of, and interactions with, animals and plants to promote the sustaining of life over time.
Survival skills are basic ideas and abilities that ancient people invented and passed down for thousands of years. Today, survival skills are often associated with surviving in a disaster situation.
Outdoor activities such as hiking, backpacking, horseback riding, fishing, and hunting all require basic wilderness survival skills, especially to handle emergencies. Individuals who practice survival skills as a type of outdoor recreation or hobby may describe themselves as survivalists. Survival skills are often used by people living off-grid lifestyles such as homesteaders. Bushcraft and primitive living are most often self-implemented but require many of the same skills.
First aid
First aid (wilderness first aid in particular) can help a person survive and function with injuries and illnesses that would otherwise kill or compromise them. Common and dangerous injuries include:
Bites from snakes, spiders, and other wild animals
Bone fractures
Burns
Drowsiness
Headache
Heart attack
Hemorrhage
Hypothermia and hyperthermia
Infection from food, animal contact, or drinking non-potable water
Poisoning from poisonous plants or fungi
Sprains, particularly of the ankle
Vomiting
Wounds, which may become infected
The person may need to apply the contents of a first aid kit or, if possessing the required knowledge, naturally occurring medicinal plants, immobilize injured limbs, or even transport incapacitated comrades.
Shelter
Many people who are forced into survival situations often have an elevated risk of danger because of direct exposure to the elements. Many people in survival situations die of hypothermia or hyperthermia, or animal attacks. An effective shelter can range from a natural shelter, such as a cave, an overhanging rock outcrop, or a fallen-down tree, to an intermediate form of man-made shelter such as a debris hut, tree pit shelter, or snow cave, to a completely man-made structure such as a tarp, tent, or longhouse. Common properties shared by these shelters include:
Location (away from hazards, such as cliffs; and nearby materials, like food sources)
Insulation (from the ground, rain, wind, air, or sun)
Heat Source (either body heat or fire-heated)
Personal or Group Shelter (having multiple individuals)
Fire
Fire is a tool that helps meet many survival needs. A campfire can be used to boil water, rendering it safe to drink, and to cook food. Fire also creates a sense of safety and protection, which can provide an overlooked psychological boost. When temperatures are low, fire can postpone or prevent the risk of hypothermia. In a wilderness survival situation, fire can provide a sense of home in addition to being an essential energy source. Fire may deter wild animals from interfering with an individual, though some wild animals may also be attracted to the light and heat of a fire.
There are numerous methods for starting a fire in a survival situation. Fires are started either with concentrated sunlight, as in the case of the solar spark lighter, or with a spark, as in the case of a flint striker. Fires will often be extinguished if either there is excessive wind, or if the fuel or environment is too wet. Lighting a fire without a lighter or matches, e.g. by using natural flint and steel with tinder, is a frequent subject of both books on survival and survival courses, because it allows an individual to start a fire with few materials in the event of a disaster. There is an emphasis placed on practicing fire-making skills before venturing into the wilderness. Producing fire under adverse conditions has been made much easier by the introduction of tools such as the magnesium striker, solar spark lighter, and the fire piston.
Water
A human being can survive an average of three to five days without water. Since the human body is composed of an average of 60% water, it should be no surprise that water is higher on the list than food. The need for water dictates that unnecessary water loss by perspiration should be avoided in survival situations. Perspiration and the need for water increase with exercise. Although human water intake varies greatly depending on factors like age and gender, the average human should drink about 13 cups or 3 liters per day. Many people in survival situations perish due to dehydration, and/or the debilitating effects of water-borne pathogens from untreated water.
A typical person will lose a minimum of two to four liters of water per day under ordinary conditions, and more in hot, dry, or cold weather. Four to six liters of water or other liquids are generally required each day in the wilderness to avoid dehydration and to keep the body functioning properly. The U.S. Army survival manual does not recommend drinking water only when thirsty, as this leads to inadequate hydration. Instead, water should be consumed at regular intervals. Other groups recommend rationing water through "water discipline."
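As a rough worked example of the arithmetic above, the following minimal Python sketch estimates how much water to plan for on a wilderness trip. The default per-day figure is an assumption taken from the four-to-six-liter range quoted above, not medical guidance:

    # Rough water-planning sketch; the default per-day figure sits in the
    # 4-6 liter wilderness range quoted above and is an assumption.
    def water_needed(days, liters_per_day=5.0):
        """Estimate the total liters of water needed for a trip."""
        return days * liters_per_day

    # Example: a 3-day trip at 5 liters/day calls for about 15 liters.
    print(water_needed(3))  # 15.0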
A lack of water causes dehydration, which may result in lethargy, headaches, dizziness, confusion, and eventually death. Even mild dehydration reduces endurance and impairs concentration, which is dangerous in a survival situation where clear thinking is essential. Dark yellow or brown urine is a diagnostic indicator of dehydration. To avoid dehydration, a high priority is typically assigned to locating a supply of drinking water and making provisions to render that water as safe as possible.
Recent thinking is that boiling or commercial filters are significantly safer than the use of chemicals, with the exception of chlorine dioxide.
Food
Culinary root tubers, fruit, edible mushrooms, edible nuts, edible beans, edible cereals or edible leaves, edible cacti, ants and algae can be gathered and, if needed, prepared (mostly by boiling). With the exception of leaves, these foods are relatively high in calories, providing some energy to the body. Plants are some of the easiest food sources to find in the jungle, forest, or desert because they are stationary and can thus be obtained without exerting much effort. Animal trapping, hunting, and fishing allow a survivalist to acquire high-calorie meat but require certain skills and equipment (such as bows, snares, and nets).
Focusing on survival until rescued, the Boy Scouts of America especially discourages foraging for wild foods on the grounds that the knowledge and skills needed to make a safe decision are unlikely to be possessed by those finding themselves in a wilderness survival situation.
Navigation
When going on a hike or trip in an unfamiliar location, search and rescue experts advise notifying a trusted contact of your destination and your planned return time, and then notifying them when you return. If you do not return within the specified time frame (e.g., within 12 hours of the scheduled return time), your contact can alert the police to begin search and rescue.
Survival situations can often be resolved by finding a way to safety, or a more suitable location to wait for rescue. Types of navigation include:
Celestial navigation, using the sun and the night sky to locate the cardinal directions and to maintain course of travel
Using a map, compass or GPS receiver
Dead reckoning, estimating one's current position from a known starting point using heading and distance travelled (a minimal sketch follows this list)
Natural navigation, using the condition of surrounding natural objects (i.e. moss on a tree, snow on a hill, direction of running water, etc.)
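As referenced in the list above, dead reckoning can be sketched in a few lines of Python. This is an illustration only, not a navigation tool: the flat-ground coordinate frame, headings, and distances are all hypothetical, and real dead reckoning must correct for compass declination, terrain, and accumulated error.

    import math

    def dead_reckon(x, y, heading_deg, distance):
        """Advance an (x, y) position by `distance` along a compass
        heading given in degrees clockwise from north."""
        rad = math.radians(heading_deg)
        return x + distance * math.sin(rad), y + distance * math.cos(rad)

    # Hypothetical leg-by-leg track: 2 km due east, then 1 km due north.
    x, y = dead_reckon(0.0, 0.0, 90.0, 2.0)
    x, y = dead_reckon(x, y, 0.0, 1.0)
    print(round(x, 3), round(y, 3))  # 2.0 1.0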
Mental preparedness
Mental clarity and preparedness are critical to survival. The will to live in a life-and-death situation often separates those who live from those who do not. Even well-trained survival experts may be mentally affected in disaster situations. It is critical to remain calm and focused during a disaster.
To the extent that stress results from testing human limits, the benefits of learning to function under stress and determining those limits may outweigh the downside of stress. There are certain strategies and mental tools that can help people cope better in a survival situation, including focusing on manageable tasks, having a Plan B available, and recognizing denial.
Urban survival
Earthquake
Governments such as the United States and New Zealand advise that in an earthquake, one should "Drop, Cover, and Hold."
New Zealand Civil Defense explains it this way:
DROP down on your hands and knees. This protects you from falling but lets you move if you need to.
COVER your head and neck (or your entire body if possible) under a sturdy table or desk (if it is within a few steps of you). If there is no shelter nearby, cover your head and neck with your arms and hands.
HOLD on to your shelter (or your position to protect your head and neck) until the shaking stops. If the shaking shifts your shelter around, move with it.
The United States Federal Emergency Management Agency (FEMA) adds that in the event of a building collapse, it is advised that you:
Seek protection under a structure like a table
Cover your mouth with your shirt to filter out dust
Don't move until you are confident that something won't topple on you
Use your phone light to signal for help, or call for help
Important survival items
Survivalists often carry a "survival kit." The contents of these kits vary considerably, but generally consist of items that are necessary or useful in potential survival situations, depending on the anticipated needs and location. For wilderness survival, these kits often contain items like a knife, water vessel, fire-starting equipment, first aid equipment, tools to obtain food (such as snare wire, fish hooks, or firearms), a light source, navigational aids, and signaling or communications devices. Multi-purpose tools are often chosen because serving several functions at once lets the user reduce weight and save space.
Preconstructed survival kits may be purchased from various retailers, or individual components may be bought and assembled into a kit.
Controversial survival skills
Some survival books promote the "Universal Edibility Test." Allegedly, it is possible to distinguish edible foods from toxic ones by exposing your skin and mouth to progressively greater amounts of the food in question, with waiting periods and checks for symptoms between these exposures. However, many experts reject this method, stating that even a small amount of some "potential foods" can cause physical discomfort, illness, or even death.
Many mainstream survival experts have recommended the act of drinking urine in times of dehydration and malnutrition. However, the U.S. Army Survival Field Manual (FM 21–76) instructs that this technique is a myth and should never be used. There are several reasons to avoid drinking urine, including the high salt content of urine, potential contaminants, and the risk of bacterial exposure, despite urine often being touted as "sterile."
Many classic western movies, classic survival books, and even some school textbooks suggest that using your mouth to suck the venom out of a venomous snake bite is an appropriate treatment. However, venom that has entered the bloodstream cannot be sucked out, and it may be dangerous for a rescuer to attempt to do so. Similarly, some survivalists promote the belief that when bitten by a venomous snake, drinking your urine provides natural anti-venom. Effective snakebite treatment involves pressure bandages and prompt medical treatment, and may require antivenom.
Seizonjutsu
Seizonjutsu (生存術) comprises survival skills such as gathering, hunting, and tracking used in Ninjutsu, as well as expertise in meteorology and botany and training for the physical strength needed to endure hardships in the outback.
See also
Alone (TV show)
Bicycle touring
Bushcraft
Distress signal
Hazards of outdoor recreation
Mini survival kit
Survivalism
Ten Essentials
Woodcraft
References
Further reading
Mountaineering: The Freedom of the Hills; 8th Ed; Mountaineers Books; 596 pages; 1960 to 2010.
The Knowledge: How to Rebuild Our World from Scratch; Penguin Books; 352 pages; 2014.
External links
Media
Seizonjutsu - Ninja Survival Training Videos
Foraging
Techno-progressivism
Techno-progressivism, or tech-progressivism, is a stance of active support for the convergence of technological change and social change. Techno-progressives argue that technological developments can be profoundly empowering and emancipatory when they are regulated by legitimate democratic and accountable authorities to ensure that their costs, risks and benefits are all fairly shared by the actual stakeholders to those developments. One of the first mentions of techno-progressivism appeared within extropian jargon in 1999 as the removal of "all political, cultural, biological, and psychological limits to self-actualization and self-realization".
Stance
Techno-progressivism maintains that accounts of progress should focus on scientific and technical dimensions, as well as ethical and social ones. For most techno-progressive perspectives, then, the growth of scientific knowledge or the accumulation of technological powers will not represent the achievement of proper progress unless and until it is accompanied by a just distribution of the costs, risks, and benefits of these new knowledges and capacities. At the same time, for most techno-progressive critics and advocates, the achievement of better democracy, greater fairness, less violence, and a wider rights culture are all desirable, but inadequate in themselves to confront the quandaries of contemporary technological societies unless and until they are accompanied by progress in science and technology to support and implement these values.
Strong techno-progressive positions include support for the civil right of a person to either maintain or modify his or her own mind and body, on his or her own terms, through informed, consensual recourse to, or refusal of, available therapeutic or enabling biomedical technology.
During the November 2014 Transvision Conference, many of the leading transhumanist organizations signed the Technoprogressive Declaration. The Declaration stated the values of technoprogressivism.
Contrasting stance
Bioconservatism (a portmanteau word combining "biology" and "conservatism") is a stance of hesitancy about technological development especially if it is perceived to threaten a given social order. Strong bioconservative positions include opposition to genetic modification of food crops, the cloning and genetic engineering of livestock and pets, and, most prominently, rejection of the genetic, prosthetic, and cognitive modification of human beings to overcome what are broadly perceived as current human biological and cultural limitations.
Bioconservatives range in political perspective from right-leaning religious and cultural conservatives to left-leaning environmentalists and technology critics. What unifies bioconservatives is skepticism about medical and other biotechnological transformations of the living world. Typically less sweeping as a critique of technological society than bioluddism, the bioconservative perspective is characterized by its defense of the natural, deployed as a moral category.
Although techno-progressivism is the stance which contrasts with bioconservatism in the biopolitical spectrum, both techno-progressivism and bioconservatism, in their more moderate expressions, share an opposition to unsafe, unfair, undemocratic forms of technological development, and both recognize that such developmental modes can facilitate unacceptable recklessness and exploitation, exacerbate injustice and incubate dangerous social discontent.
List of notable techno-progressive social critics
Technocritic Dale Carrico with his accounts of techno-progressivism
Philosopher Donna Haraway with her accounts of cyborg theory.
Media theorist Douglas Rushkoff with his accounts of open source.
Cultural critic Mark Dery and his accounts of cyberculture.
Science journalist Chris Mooney with his account of the U.S. Republican Party's "war on science".
Futurist Bruce Sterling with his Viridian design movement.
Futurist Alex Steffen and his accounts of bright green environmentalism through the Worldchanging blog.
Science journalist Annalee Newitz with her accounts of biopunk.
Bioethicist James Hughes of the Institute for Ethics and Emerging Technologies with his accounts of democratic transhumanism.
Controversy
Technocritic Dale Carrico, who has used "techno-progressive" as a shorthand to describe progressive politics that emphasize technoscientific issues, has expressed concern that some "transhumanists" are using the term to describe themselves, with the consequence of possibly misleading the public regarding their actual cultural, social and political views, which may or may not be compatible with critical techno-progressivism.
See also
Algocracy
Body modification
Bioethics
Biopolitics
Digital freedom
Free software movement
Frontierism
Fordism
High modernism
Manifest Destiny
New Frontier
Post-scarcity economy
Scientism
Technocentrism
Techno-utopianism
Transhumanist politics
Progress
References
External links
Institute for Ethics and Emerging Technologies
Overview of Biopolitics
Biomedicine
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of the consequences at the in vivo level. These processes are studied with the particular point of view of devising new strategies for diagnosis and therapy.
Depending on the severity of the disease, biomedicine pinpoints a problem within a patient and fixes the problem through medical intervention. Medicine focuses on curing diseases rather than improving one's health.
In social sciences biomedicine is described somewhat differently. Through an anthropological lens biomedicine extends beyond the realm of biology and scientific facts; it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to have no bias due to the evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis and this is because biomedicine reflects the norms and values of its creators.
Molecular biology
Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. Molecular biology employs different techniques, including polymerase chain reaction, gel electrophoresis, and macromolecule blotting, to manipulate DNA.
Polymerase chain reaction is performed by placing a mixture of the desired DNA, DNA polymerase, primers, and nucleotide bases into a machine. The machine cycles through higher and lower temperatures: heating breaks the hydrogen bonds binding the two DNA strands, and cooling allows primers to bind and nucleotide bases to be added onto each of the two separated DNA templates.
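The practical effect of these cycles is exponential amplification: each cycle roughly doubles the number of copies of the target sequence. A minimal Python sketch of that arithmetic, under the idealised assumption of perfect doubling every cycle:

    def pcr_copies(initial_copies, cycles):
        """Idealised PCR: every thermal cycle doubles each DNA template."""
        return initial_copies * 2 ** cycles

    # Example: a single starting template after 30 cycles.
    print(pcr_copies(1, 30))  # 1073741824, about a billion copies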
Gel electrophoresis is a technique used to identify similar DNA between two unknown samples of DNA. This process is done by first preparing an agarose gel. This jelly-like sheet has wells into which DNA is poured. An electric current is applied so that the DNA, which is negatively charged due to its phosphate groups, is attracted to the positive electrode. Different bands of DNA will move at different speeds because some DNA pieces are larger than others. Thus, if two DNA samples show a similar pattern on the gel electrophoresis, one can tell that these DNA samples match.
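The pattern-matching step can be mimicked computationally: if migration distance depends only on fragment size, a lane is characterised by its set of fragment lengths, and two samples match when those sets coincide. A minimal Python sketch (the fragment lengths are hypothetical):

    def band_pattern(fragment_lengths):
        """Model a gel lane as the sorted set of fragment sizes;
        smaller fragments migrate farther toward the positive electrode."""
        return sorted(set(fragment_lengths), reverse=True)

    sample_a = [1200, 800, 800, 350]  # hypothetical fragment sizes in base pairs
    sample_b = [350, 1200, 800]

    # Identical band patterns suggest the two DNA samples match.
    print(band_pattern(sample_a) == band_pattern(sample_b))  # True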
Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container. A sponge is placed into the solution and an agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels. During this process, the DNA denatures in the alkaline solution and is carried upwards to the nitrocellulose paper. The paper is then placed into a plastic bag and filled with a solution containing DNA fragments, called the probe, found in the desired sample of DNA. The probes anneal to the complementary DNA of the bands already found on the nitrocellulose sample. Afterwards, the probes are washed off, and the only ones remaining are those that have annealed to complementary DNA on the paper. Next, the paper is placed onto X-ray film. The radioactivity of the probes creates black bands on the film, called an autoradiograph. As a result, only patterns of DNA similar to that of the probe are present on the film. This allows similar DNA sequences of multiple DNA samples to be compared, resulting in a precise reading of similarities between both similar and different DNA samples.
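The annealing step relies on antiparallel base pairing: a probe binds wherever the blotted strand contains its reverse complement. A minimal Python sketch of that matching rule (the probe and target sequences are hypothetical):

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_complement(seq):
        """Reverse a DNA sequence and swap each base for its pair."""
        return "".join(COMPLEMENT[base] for base in reversed(seq))

    def probe_anneals(probe, blotted_strand):
        """A probe anneals if the blotted strand contains the probe's
        reverse complement (antiparallel base pairing)."""
        return reverse_complement(probe) in blotted_strand

    print(probe_anneals("ATGC", "TTGCATAA"))  # True: reverse complement "GCAT" is present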
Biochemistry
Biochemistry is the science of the chemical processes which takes place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. The simplest carbohydrate, glucose (C6H12O6), is used in cellular respiration to produce ATP, adenosine triphosphate, which supplies cells with energy.
Proteins are chains of amino acids that function, among other things, in contracting skeletal muscle, as catalysts, as transport molecules, and as storage molecules. Protein catalysts can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobins are also proteins, carrying oxygen to an organism's cells.
Lipids, also known as fats, are small molecules derived from biochemical subunits of either the ketoacyl or isoprene groups, forming eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is to store energy over the long term. Due to their unique structure, lipids provide more than twice the amount of energy that carbohydrates do. Lipids can also be used as insulation. Moreover, lipids can be used in hormone production to maintain a healthy hormonal balance and provide structure to cell membranes.
Nucleic acids include DNA, the main genetic information-storing substance, which is oftentimes found in the cell nucleus and controls the metabolic processes of the cell. DNA consists of two complementary antiparallel strands consisting of varying patterns of nucleotides. RNA is a single-stranded nucleic acid that is transcribed from DNA and used in translation, the process of making proteins from RNA sequences.
See also
References
External links
Posthumanism
Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is an idea in continental philosophy and critical theory responding to the presence of anthropocentrism in 21st-century thought. Posthumanization comprises "those processes by which a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."
It encompasses a wide variety of branches, including:
Antihumanism: a branch of theory that is critical of traditional humanism and traditional ideas about the human condition, vitality and agency.
Cultural posthumanism: A branch of cultural theory, critical of the foundational assumptions of humanism and its legacy, that examines and questions the historical notions of "human" and "human nature", often challenging typical notions of human subjectivity and embodiment, and that strives to move beyond "archaic" concepts of "human nature" to develop ones which constantly adapt to contemporary technoscientific knowledge.
Philosophical posthumanism: A philosophical direction that draws on cultural posthumanism, the philosophical strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
Posthuman condition: The deconstruction of the human condition by critical theorists.
Existential posthumanism: A branch that embraces posthumanism as a praxis of existence. Its sources are drawn from non-dualistic global philosophies, such as Advaita Vedanta, Taoism and Zen Buddhism, the philosophies of Yoga, continental existentialism, native epistemologies and Sufism, among others. It examines and challenges hegemonic notions of being "human" by delving into the history of embodied practices of being human and, thus, expanding the reflection on human nature.
Posthuman transhumanism: A transhuman ideology and movement which, drawing from posthumanist philosophy, seeks to develop and make available technologies that enable immortality and greatly enhance human intellectual, physical, and psychological capacities in order to achieve a "posthuman future".
AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers and theorists, including Nick Land, promote the view that humans should embrace and accept their eventual demise as a consequence of a technological singularity. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
Voluntary human extinction: Seeks a "posthuman future" that in this case is a future without humans.
Philosophical posthumanism
Philosopher Theodore Schatzki suggests there are two varieties of posthumanism of the philosophical kind:
One, which he calls "objectivism", tries to counter the overemphasis of the subjective, or intersubjective, that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things, because "Humans and nonhumans, it [objectivism] proclaims, codetermine one another", and also claims "independence of (some) objects from human activity and conceptualization".
A second posthumanist agenda is "the prioritization of practices over individuals (or individual subjects)", which, they say, constitute the individual.
There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it "posthumanism", he made an immanent critique of humanism, and then constructed a philosophy that presupposed neither humanist, nor scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created", Dooyeweerd wrote, "and the nature even of our selfhood". Both human and nonhuman alike function subject to a common law-side, which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.
Emergence of philosophical posthumanism
Ihab Hassan, theorist in the academic study of literature, once stated: "Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism." This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.
Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term posthumanism.
Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent rights to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge is also reduced to a less controlling position, having previously been seen as the defining aspect of the world. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.
Proponents of a posthuman discourse, suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with philosophy of the Enlightenment period. Posthumanistic views were also found in the works of Shakespeare. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond that of the contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish "anthropological universals" that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.
Although Nietzsche's philosophy has been characterized as posthumanist, Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: as humanism sought to establish norms while Enlightenment thought attempted to transcend all that is material, including the boundaries that are constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological and technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.
Contemporary posthuman discourse
Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles, writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend their subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of posthuman, often referred to as "technological posthumanism", visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in a contemporary society is thought to complicate this relationship.
Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).
While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently humanly or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.
Technological versus non-technological
Posthumanism can be divided into non-technological and technological forms.
Non-technological posthumanism
While posthumanization has links with the scholarly methodologies of posthumanism, it is a distinct phenomenon. The rise of explicit posthumanism as a scholarly approach is relatively recent, occurring since the late 1970s; however, some of the processes of posthumanization that it studies are ancient. For example, the dynamics of non-technological posthumanization have existed historically in all societies in which animals were incorporated into families as household pets or in which ghosts, monsters, angels, or semidivine heroes were considered to play some role in the world.
Such non-technological posthumanization has been manifested not only in mythological and literary works but also in the construction of temples, cemeteries, zoos, or other physical structures that were considered to be inhabited or used by quasi- or para-human beings who were not natural, living, biological human beings but who nevertheless played some role within a given society, to the extent that, according to philosopher Francesca Ferrando: "the notion of spirituality dramatically broadens our understanding of the posthuman, allowing us to investigate not only technical technologies (robotics, cybernetics, biotechnology, nanotechnology, among others), but also, technologies of existence."
Technological posthumanism
Some forms of technological posthumanization involve efforts to directly alter the social, psychological, or physical structures and behaviors of the human being through the development and application of technologies relating to genetic engineering or neurocybernetic augmentation; such forms of posthumanization are studied, e.g., by cyborg theory. Other forms of technological posthumanization indirectly "posthumanize" human society through the deployment of social robots or attempts to develop artificial general intelligences, sentient networks, or other entities that can collaborate and interact with human beings as members of posthumanized societies.
The dynamics of technological posthumanization have long been an important element of science fiction; genres such as cyberpunk take them as a central focus. In recent decades, technological posthumanization has also become the subject of increasing attention by scholars and policymakers. The expanding and accelerating forces of technological posthumanization have generated diverse and conflicting responses, with some researchers viewing the processes of posthumanization as opening the door to a more meaningful and advanced transhumanist future for humanity, while other bioconservative critiques warn that such processes may lead to a fragmentation of human society, loss of meaning, and subjugation to the forces of technology.
Common features
Processes of technological and non-technological posthumanization both tend to result in a partial "de-anthropocentrization" of human society, as its circle of membership is expanded to include other types of entities and the position of human beings is decentered. A common theme of posthumanist study is the way in which processes of posthumanization challenge or blur simple binaries, such as those of "human versus non-human", "natural versus artificial", "alive versus non-alive", and "biological versus mechanical".
Relationship with transhumanism
Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.
Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism". Transhumanist thought suggests that humans are not posthuman yet, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)". These contrasting views on the importance of human beings are the main distinctions between the two subjects.
Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture".
Criticism
Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that "the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history":
However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of James's criticism), in part, because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell:
While many modern leaders of thought are accepting of the nature of the ideologies described by posthumanism, some are more skeptical of the term. Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.
Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon, Aime Cesaire, Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or disruption" which posthumanists invite. In other words, given that race in general and blackness in particular constitute the very terms through which human-nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.
See also
Bioconservatism
Cyborg anthropology
Posthuman
Superhuman
Technological change
Technological transitions
Transhumanism
References
Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.
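These four conditions are enough to produce evolution in even a toy model. The following minimal Python sketch is an illustration only: the population size, trait labels, and fitness values are assumptions, and mutation and environment are ignored. Individuals reproduce in proportion to fitness, and the fitter heritable trait becomes more common over generations:

    import random

    random.seed(1)  # make the illustrative run repeatable

    def next_generation(population, fitness):
        """Each offspring inherits its trait from a parent chosen with
        probability proportional to the parent's fitness."""
        weights = [fitness[trait] for trait in population]
        return random.choices(population, weights=weights, k=len(population))

    # Hypothetical variants: 'a' survives and reproduces slightly better than 'b'.
    fitness = {"a": 1.1, "b": 1.0}
    population = ["a"] * 10 + ["b"] * 90

    for _ in range(100):
        population = next_generation(population, fitness)

    print(population.count("a"))  # typically far above the initial 10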
In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.
All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today.
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
Heredity
Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.
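The vocabulary of this section (locus, allele, genotype) maps onto a simple data structure. A minimal Python sketch for a diploid organism (the locus names and alleles are hypothetical, and, as noted above, most real traits are influenced by many genes):

    # A genotype as a mapping from locus to the pair of alleles carried
    # on the two homologous chromosomes of a diploid organism.
    genotype = {
        "eye_colour_locus": ("brown", "blue"),
        "mc1r_locus": ("wild_type", "dominant_black"),
    }

    def alleles_at(genotype, locus):
        """Return the two alleles an individual carries at a locus."""
        return genotype[locus]

    print(alleles_at(genotype, "eye_colour_locus"))  # ('brown', 'blue')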
Sources of variation
Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.
An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.
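Fixation can be watched directly in a toy Wright-Fisher-style simulation, in which each generation's allele count is resampled at random from the previous generation's frequency. A minimal Python sketch (the population size and starting frequency are assumptions, and selection is absent, so only drift operates):

    import random

    random.seed(42)  # make the illustrative run repeatable

    def drift_until_absorbed(pop_size=50, freq=0.5):
        """Neutral drift: resample the allele count each generation until
        the allele is lost (frequency 0.0) or fixed (frequency 1.0)."""
        generations = 0
        while 0.0 < freq < 1.0:
            count = sum(random.random() < freq for _ in range(pop_size))
            freq = count / pop_size
            generations += 1
        return freq, generations

    final_freq, generations = drift_until_absorbed()
    print(final_freq, generations)  # ends at 0.0 or 1.0 after some generations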
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.
About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in these regions confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.
One example of mutation affecting phenotype is coat colour in wild boar piglets. Piglets are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations that disrupt wild-type colour, with different mutations causing dominant black colouring.
Sex and recombination
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual that reproduces sexually can only pass on 50% of its genes to any individual offspring, with the fraction shrinking further in each successive generation. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.
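The second cost can be made concrete with a back-of-the-envelope comparison, assuming for illustration that sexual and asexual females each produce the same number of offspring n:

```latex
% Genome copies a female passes on per generation, assuming equal fecundity n
G_{\text{asexual}} = n \times 1 = n
\qquad
G_{\text{sexual}} = n \times \tfrac{1}{2} = \tfrac{n}{2}
% hence the "two-fold" cost:
G_{\text{asexual}} / G_{\text{sexual}} = 2
```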
Gene flow
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can result from the movement of individuals between separate populations of organisms, as when mice move between inland and coastal populations, or pollen moves between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance: when a bacterium acquires resistance genes, it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
Epigenetics
Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
Evolutionary forces
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.
Natural selection
Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:
Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
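This proportional notion of fitness is usually formalised as relative fitness. The following is the standard textbook bookkeeping for a haploid locus, given here as a simplifying sketch rather than a description of any particular organism above:

```latex
% Relative fitness of type i, and its selection coefficient:
w_i = \frac{\text{expected offspring of type } i}{\text{expected offspring of the fittest type}},
\qquad s_i = 1 - w_i
% One generation of selection on alleles A and a at frequencies p and 1-p:
p' = \frac{p\, w_A}{p\, w_A + (1-p)\, w_a}
```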
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in that allele becoming rarer—it is "selected against."
Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. Nevertheless, re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.
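The three regimes can be pictured as differently shaped fitness functions of a trait value z. The sketch below is purely illustrative; the function forms and parameter values are assumptions chosen for clarity, not measured curves:

```python
import math

# Toy fitness functions over a trait value z (e.g. height, scaled so the
# population mean sits at z = 0). Shapes are illustrative assumptions.

def directional(z):
    # Fitness rises steadily with z: larger trait values are favoured.
    return 1 / (1 + math.exp(-z))

def stabilising(z, optimum=0.0, width=1.0):
    # Fitness peaks at an intermediate optimum; both extremes are penalised.
    return math.exp(-((z - optimum) ** 2) / (2 * width ** 2))

def disruptive(z, optimum=0.0, width=1.0):
    # The mirror image: intermediate values are selected against.
    return 1 - stabilising(z, optimum, width)

for z in (-2, -1, 0, 1, 2):
    print(z, round(directional(z), 2), round(stabilising(z), 2),
          round(disruptive(z), 2))
```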
Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.
Genetic drift
Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
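The sampling error driving drift can be made concrete with the standard Wright–Fisher model, in which every gene copy in the next generation is drawn at random from the current one. A minimal sketch, with arbitrary parameter values:

```python
import random

def wright_fisher(N=100, p=0.5, max_gens=10_000, seed=1):
    """Follow one neutral allele in N diploids (2N gene copies) until it
    is lost (frequency 0) or fixed (frequency 1) by drift alone."""
    rng = random.Random(seed)
    for gen in range(max_gens):
        if p in (0.0, 1.0):
            return gen, p
        # Binomial sampling of the next generation's 2N copies is the
        # only evolutionary force here; there is no selection.
        copies = sum(rng.random() < p for _ in range(2 * N))
        p = copies / (2 * N)
    return max_gens, p

print(wright_fisher())  # (generations elapsed, final frequency 0.0 or 1.0)
```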
According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not simply the number of individuals in a population, but a measure known as the effective population size. The effective population size is usually smaller than the total population, since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
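Two standard diffusion-theory results (due to Kimura) make this size dependence explicit: a neutral allele's chance of eventual fixation equals its current frequency, and a new neutral mutation that does fix takes on the order of 4Ne generations to do so:

```latex
\Pr(\text{fixation of a neutral allele at frequency } p) = p
\quad\Bigl(= \tfrac{1}{2N}\ \text{for a new mutation among } 2N \text{ gene copies}\Bigr)
% Mean time to fixation, conditional on fixing:
\bar{t}_{\text{fix}} \approx 4N_e \ \text{generations}
```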
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
Mutation bias
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.
Genetic hitchhiking
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
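The "compared to expectations" measure has a standard algebraic form. For alleles A and B at two loci, linkage disequilibrium is the excess of the observed haplotype frequency over the product of the individual allele frequencies, often reported in the normalised form r²:

```latex
D = p_{AB} - p_A\, p_B
\qquad
r^2 = \frac{D^2}{p_A(1 - p_A)\, p_B(1 - p_B)}
% D = 0 when A and B co-occur exactly as often as chance predicts.
```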
Sexual selection
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.
Natural outcomes
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation
Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Coevolution
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Cooperation
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
Speciation
Speciation is the process where a species diverges into two or more descendant species.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite this diversity, species concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example, because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but the hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are thought to exist on Earth currently, with only one-thousandth of 1% described.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Applications
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
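A minimal genetic algorithm shows the mutation–selection–recombination loop these methods borrow from biology. The sketch below maximises a toy objective (the number of 1-bits in a string); all parameter choices are illustrative assumptions rather than recommendations:

```python
import random

rng = random.Random(0)
LENGTH = 20  # bits per candidate solution

def fitness(bits):
    # Toy objective ("OneMax"): count the 1-bits; stands in for any score.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ (rng.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point recombination of two parents.
    cut = rng.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

pop = [[rng.randrange(2) for _ in range(LENGTH)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == LENGTH:
        break
    parents = pop[:10]  # truncation selection: keep the fittest third
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(20)]

print(gen, fitness(pop[0]))  # generation reached and best score found
```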
Evolutionary history of life
Origin of life
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
Common descent
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
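The molecular-clock logic reduces to simple arithmetic under the idealised assumption of a constant substitution rate μ per site per year along both diverging lineages:

```latex
% Divergence d accumulates along both lineages, so
d \approx 2\mu t
\quad\Longrightarrow\quad
t \approx \frac{d}{2\mu}
% Illustrative numbers only: a 2\% sequence difference at
% \mu = 10^{-9} substitutions/site/year gives t \approx 10^{7} years.
```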
Evolution of life
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from single-celled to multicellular.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
History of evolutionary thought
Classical antiquity
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura.
Middle Ages
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
Pre-Darwinian
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
Darwinian revolution
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Pangenesis and heredity
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual evolution. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.
The 'modern synthesis'
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations, as well as fossil transitions in palaeontology.
Further syntheses
Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
Social and cultural responses
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
See also
Chronospecies
References
Bibliography
Further reading
Introductory reading
Advanced reading
External links
General information
"History of Evolution in the United States". Salon. Retrieved 2021-08-24.
Experiments
Online lectures
Biology theories
Human variability
Human variability, or human variation, is the range of possible values for any characteristic, physical or mental, of human beings.
Frequently debated areas of variability include cognitive ability, personality, physical appearance (body shape, skin color, etc.) and immunology.
Variability is partly heritable and partly acquired (nature vs. nurture debate).
As the human species exhibits sexual dimorphism, many traits show significant variation not just between populations but also between the sexes.
Sources of human variability
Human variability is attributed to a combination of environmental and genetic sources.
Of the genetic sources, few of the traits characterizing human variability are controlled by simple Mendelian inheritance. Most are polygenic or are determined by a complex combination of genetics and environment.
Many genetic differences (polymorphisms) have little effect on health or reproductive success but help to distinguish one population from another. Such polymorphisms are useful to researchers in the field of population genetics studying ancient migrations and relationships between population groups.
Environmental factors
Climate and disease
Other important environmental factors include climate and disease. Climate helps determine which human variations are better adapted to survive in a given region without undue hardship. For example, people who live where there is intense exposure to sunlight tend to have darker skin. Because UV radiation breaks down folate (folic acid), darker skin with more melanin protects the body's folate stores and thereby helps ensure smooth and successful child development. Conversely, people who live farther from the equator tend to have lighter skin, which increases the absorption of sunlight so the body can produce enough vitamin D for survival.
Blackfoot disease illustrates how disease can affect human variation: caused by arsenic pollution of water and food sources, it gives sufferers black, charcoal-like skin on the lower limbs. Another disease that can affect human variation is syphilis, a sexually transmitted disease. Syphilis does not visibly affect variation until its middle stage, when rashes develop all over the body.
Nutrition
Phenotypic variation is the product of one's genetics and the surrounding environment, and the two interact and influence each other. This means that a significant portion of human variability can be influenced by human behavior. Nutrition and diet play a substantial role in determining phenotype because they are arguably the most controllable environmental factors that create epigenetic changes: they can be changed or altered relatively easily, unlike other environmental factors such as location.
If people are reluctant to change their diets, consuming harmful foods can have chronic negative effects on variability. One such instance occurs when people ingest certain chemicals through their diet or consume carcinogens, which can have adverse effects on individual phenotype. For example, Bisphenol A (BPA) is a known endocrine disruptor that mimics the hormone estradiol and can be found in various plastic products. BPA seeps into food or drinks when the plastic containing it is heated. When these contaminated substances are consumed, especially often and over long periods of time, the risk of diabetes and cardiovascular disease increases. BPA also has the potential to alter "physiological weight control patterns." Examples such as this demonstrate that preserving a healthy phenotype largely rests on nutritional decision-making.
The concept that nutrition and diet affect phenotype extends to what the mother eats during pregnancy, which can have drastic effects on the phenotype of the child. A study by researchers at the MRC International Nutrition Group shows that "methylation machinery can be disrupted by nutrient deficiencies and that this can lead to disease" susceptibility in newborn babies. The reason is that methyl groups can silence certain genes; deficiencies of various nutrients during pregnancy therefore have the potential to permanently change the epigenetics of the baby.
Genetic factors
Genetic variation in humans may mean any variance in phenotype that results from heritable allele expression, mutations, and epigenetic changes. While human phenotypes may seem diverse, individuals actually differ at only about 1 in every 1,000 base pairs, and this variation is primarily the result of inherited genetic differences. Pure consideration of alleles is often referred to as Mendelian genetics, or more properly classical genetics, and involves assessing whether a given trait is dominant or recessive and thus at what rates it will be inherited. Eye color was long believed to follow a pattern of brown-eye dominance, with blue eyes being a recessive characteristic resulting from a past mutation. However, it is now understood that eye color is controlled by various genes and thus may not follow as distinct a pattern as previously believed. The trait is still the result of variance in genetic sequence between individuals arising from inheritance from their parents. Common traits that may be linked to genetic patterns are earlobe attachment, hair color, and hair growth patterns.
In terms of evolution, genetic mutations are the origin of differences in alleles between individuals. Mutations may also occur within a person's lifetime and be passed down from parent to offspring. In some cases, mutations result in genetic diseases, such as cystic fibrosis, which is caused by a mutation in the CFTR gene inherited recessively from both parents. In other cases, mutations may be harmless or phenotypically unnoticeable. Biological traits can be treated as manifestations of either a single locus or multiple loci, and are labeled monogenic or polygenic, respectively. For polygenic traits it may be essential to be mindful of inter-genetic interactions, or epistasis. Although epistasis is a significant genetic source of biological variation, only additive interactions are heritable, as other epistatic interactions involve more recondite inter-genetic relationships. Epistatic interactions themselves vary further with their dependency on the outcomes of recombination and crossing over.
The ability of genes to be expressed may also be a source of variation between individuals and result in changes to phenotype. This may be the result of epigenetics, which is founded upon an organism's phenotypic plasticity; such plasticity may even be heritable. Epigenetic changes may result from methylation of gene sequences, which blocks expression, or from changes to histone protein structure in response to environmental or biological cues. Such alterations influence how genetic material is handled by the cell and to what extent certain DNA sections are expressed, and together they compose the epigenome. The division between what can and cannot be considered a genetic source of biological variation becomes quite arbitrary as we approach aspects of variation such as epigenetics. Indeed, gene-specific expression and inheritance may depend on environmental influences.
Cultural factors
Archaeological findings, such as those indicating that the Middle Stone Age and the Acheulean (each identified as a specific 'cultural phase' of humanity with a number of characteristics) lasted substantially longer in some places or 'ended' at times over 100,000 years apart, highlight significant spatiotemporal variability and complexity in the sociocultural history and evolution of humanity. In some cases cultural factors may be intertwined with genetic and environmental factors.
Measuring variation
Scientific
Measurement of human variation can fall under the purview of several scholarly disciplines, many of which lie at the intersection of biology and statistics. The methods of biostatistics, the application of statistical methods to the analysis of biological data, and bioinformatics, the application of information technologies to the analysis of biological data, are utilized by researchers in these fields to uncover significant patterns of variability. Some fields of scientific research include the following:
Demography is a branch of statistics and sociology concerned with the statistical study of populations, especially humans. A demographic analysis can measure various metrics of a population, most commonly metrics of size and growth, diversity in culture, ethnicity, language, religious belief, political belief, etc. Biodemography is a subfield which specifically integrates biological understanding into demographics analysis.
In the social sciences, social research is conducted and the collected data are analyzed using statistical methods. The methodologies of this research can be divided into qualitative and quantitative designs. Some example subdisciplines include:
Anthropology, the study of human societies. Comparative research in subfields of anthropology may yield results on human variation with respect to the subfield's topic of interest.
Psychology, the study of behavior from a mental perspective. Psychological research relies heavily on experiments and analyses, grouped into quantitative or qualitative research methods.
Sociology, the study of behavior from a social perspective. Sociological research can be conducted in either quantitative or qualitative formats, depending on the nature of data collected and the subfield of sociology under which the research falls. Analysis of this data is subject to quantitative or qualitative methods. Computational sociology is also a method of producing useful data for studies of social behavior.
Anthropometry
Anthropometry is the study of the measurements of different parts of the human body. Common measurements include height, weight, organ size (brain, stomach, penis, vagina), and other bodily metrics such as waist–hip ratio. Each measurement can vary significantly between populations; for instance, the average height of males of European descent is 178 cm ± 7 cm and that of females of European descent is 165 cm ± 7 cm, while the average height of males of the Nilotic Dinka people is 181.3 cm.
Applications of anthropometry include ergonomics, biometrics, and forensics. Knowing the distribution of body measurements enables designers to build better tools for workers, as illustrated in the sketch below. Anthropometry is also used when designing safety equipment such as seat belts. In biometrics, measurements of fingerprints and iris patterns can be used for secure identification purposes.
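As a minimal illustration of how such distributions feed into design, the following Python sketch estimates the stature range covering the central 90% of a population, assuming heights are normally distributed with the European-descent male figures quoted above; the normality assumption and the 5th/95th percentile cut-offs are illustrative choices, not values from any design standard:

```python
from statistics import NormalDist

# Heights assumed normally distributed with the mean and standard deviation
# quoted above (178 cm +/- 7 cm); the figures and percentile cut-offs are
# illustrative assumptions.
male_height = NormalDist(mu=178, sigma=7)

p5 = male_height.inv_cdf(0.05)   # 5th percentile of stature
p95 = male_height.inv_cdf(0.95)  # 95th percentile of stature

print(f"Design for statures between {p5:.1f} cm and {p95:.1f} cm")
# Roughly 166.5 cm to 189.5 cm, covering the central 90% of this population.
```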
Measuring genetic variation
Human genomics and population genetics are the study of the human genome and variome, respectively. Studies in these areas may concern the patterns and trends in human DNA. The Human Genome Project and The Human Variome Project are examples of large scale studies of the entire human population to collect data which can be analyzed to understand genomic and genetic variation in individuals, respectively.
The Human Genome Project is the largest scientific project in the history of biology. At a cost of $3.8 billion in funding and over a period of 13 years, from 1990 to 2003, the project sequenced the approximately 3 billion base pairs and catalogued the 20,000 to 25,000 genes in human DNA. The project made the data available to all scientific researchers and developed analytical tools for processing this information. A particular finding regarding human variability made possible by the Human Genome Project is that any two individuals share 99.9% of their nucleotide sequences.
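As a toy illustration of that "99.9% shared" figure, the sketch below computes the fraction of matching sites between two already-aligned DNA strings. Real genome comparisons involve alignment, structural variants, and billions of bases; this is only a minimal sketch of the per-site identity idea:

```python
# Per-site identity between two aligned DNA sequences (toy example).
def nucleotide_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Two genomes differing at 1 site in every 1,000 would score 0.999.
print(nucleotide_identity("ACGTACGTAC", "ACGTACGTAT"))  # -> 0.9
```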
The Human Variome Project is a similar undertaking with the goal of identification and categorization of the set of human genetic variation, specifically variations which are medically pertinent. This project will also provide a data repository for further research and analysis of disease. The Human Variome Project was launched in 2006 and is being run by an international community of researchers and representatives, including collaborators from the World Health Organization and the United Nations Educational, Scientific, and Cultural Organization.
Genetic drift
Genetic drift is one method by which variability occurs in populations. Unlike natural selection, genetic drift occurs when allele frequencies fluctuate randomly over time rather than as a result of selection bias. Over a long history, this can cause significant shifts in the underlying genetic distribution of a population. We can model genetic drift with the Wright–Fisher model. In a population of N individuals with 2N genes, there are two alleles with frequencies p and q = 1 − p. If the previous generation had an allele with frequency p, then the probability that the next generation has k copies of that allele is the binomial

$$\Pr(k) = \binom{2N}{k} p^k q^{2N-k}.$$
Over time, one allele will be fixed when its frequency reaches 1 and the frequency of the other allele reaches 0. The probability that an allele is eventually fixed is equal to its current frequency: for two alleles with frequencies p and q, the probability that the p allele is fixed is p. The expected number of generations for an allele with frequency p to become fixed, given that it does, is

$$\bar{T}_{\text{fixed}} = -\frac{4N_e\,(1-p)\ln(1-p)}{p},$$

where $N_e$ is the effective population size.
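A minimal Python sketch of this resampling process, assuming pure drift (no selection or mutation) and illustrative parameter values, can check the fixation-probability claim empirically:

```python
import numpy as np

rng = np.random.default_rng(42)

def fixation_fraction(p0: float, n: int, trials: int = 5_000) -> float:
    """Fraction of Wright-Fisher trials in which the focal allele fixes.

    Each generation, the 2N gene copies are binomially resampled from the
    current allele frequency (pure drift: no selection, no mutation).
    """
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:                  # run until loss or fixation
            p = rng.binomial(2 * n, p) / (2 * n)
        fixed += (p == 1.0)
    return fixed / trials

# Theory: a neutral allele fixes with probability equal to its starting
# frequency, so with p0 = 0.2 the estimate should be close to 0.2.
print(fixation_fraction(p0=0.2, n=50))
```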
Single-nucleotide polymorphism
Single-nucleotide polymorphisms, or SNPs, are variations of a single nucleotide. SNPs can occur in coding or non-coding regions of genes and on average occur once every 300 nucleotides. SNPs in coding regions can cause synonymous, missense, and nonsense mutations. SNPs have been shown to be correlated with drug responses and with the risk of diseases such as sickle-cell anemia, Alzheimer's disease, and cystic fibrosis, among others.
DNA fingerprinting
DNA profiling constructs a DNA fingerprint from a DNA sample extracted from body tissue or fluid. The sample is segmented using restriction enzymes, and each segment is marked with probes and exposed on X-ray film. The segments form patterns of black bars: the DNA fingerprint. DNA fingerprints are used in conjunction with other methods in federal programs such as CODIS (the Combined DNA Index System) to help identify individuals, including missing persons.
Mitochondrial DNA
Mitochondrial DNA is passed only from mother to child. The first human population studies based on mitochondrial DNA were performed by restriction enzyme analyses (RFLPs) and revealed differences between four ethnic groups (Caucasian, Amerindian, African, and Asian). Differences in mtDNA patterns have also been shown between communities of different geographic origin within the same ethnic group.
Alloenzymic variation
Alloenzymic variation identifies protein variants of the same gene that arise from amino acid substitutions. After tissue is ground to release the cytoplasm, wicks are used to absorb the resulting extract and are placed in a slit cut into a starch gel. A low current is run across the gel, creating positive and negative ends. Proteins are then separated by charge and size, with smaller and more highly charged molecules moving more quickly across the gel. This technique underestimates true genetic variability: an amino acid substitution that does not change the protein's charge produces no difference in migration, and it is estimated that approximately one third of the true genetic variation is not revealed in this way.
Structural variation
Structural variation, which can include insertions, deletions, duplications, and mutations in DNA. Within the human population, about 13% of the human genome is defined as structurally variant.
Phenotypic variation
Phenotypic variation, which accounts for both genetic and epigenetic factors that affect what characteristics are shown. For applications such as organ donations and matching, phenotypic variation of blood type, tissue type, and organ size are considered.
Civic
Measurement of human variation may also be initiated by governmental parties. A government may conduct a census, the systematic recording of an entire population of a region. The data may be used for calculating metrics of demography such as sex, gender, age, education, employment, etc.; this information is utilized for civic, political, economic, industrial, and environmental assessment and planning.
Commercial
Commercial motivation for understanding variation in human populations arises from the competitive advantage of tailoring products and services for a specific target market. A business may undertake some form of market research in order to collect data on customer preference and behavior and implement changes which align with the results.
Social significance and valuation
Both individuals and entire societies and cultures place values on different aspects of human variability; however, values can change as societies and cultures change. Not all people agree on the values or relative rankings, and neither do all societies and cultures. Nonetheless, nearly all human differences have a social value dimension. Examples of variations which may be given different values in different societies include skin color and/or body structure. Race and sex have a strong value difference, while handedness has a much weaker value difference. The values given to different traits among human variability are often influenced by what phenotypes are more prevalent locally. Local valuation may affect social standing, reproductive opportunities, or even survival.
Differences may vary or be distributed in various ways. Some, like height for a given sex, vary in close to a "normal" or Gaussian distribution. Other characteristics (e.g., skin color) vary continuously in a population, but the continuum may be socially divided into a small number of distinct categories. Then, there are some characteristics that vary bimodally (for example, handedness), with fewer people in intermediate categories.
Classification and evaluation of traits
When an inherited difference of body structure or function is severe enough to cause a significant hindrance in certain perceived abilities, it is termed a genetic disease, but even this categorization has fuzzy edges. There are many instances in which the degree of negative value of a human difference depends completely on the social or physical environment. For example, in a society with a large proportion of deaf people (such as Martha's Vineyard in the 19th century), it was possible to deny that deafness is a disability. Another example of social renegotiation of the value assigned to a difference is reflected in the controversy over management of ambiguous genitalia, especially whether abnormal genital structure has enough negative consequences to warrant surgical correction.
Furthermore, many genetic traits may be advantageous in certain circumstances and disadvantageous in others. Being a heterozygote or carrier of the sickle-cell disease gene confers some protection against malaria, apparently enough to maintain the gene in populations of malarial areas. In a homozygous dose it is a significant disability.
Each trait has its own advantages and disadvantages, but a trait that is found desirable may not be favorable in terms of certain biological factors such as reproductive fitness, and traits that are not highly valued by the majority of people may be favorable in such terms. For example, women tend to have fewer pregnancies on average than before, and worldwide fertility rates are dropping. This makes multiple births favorable in terms of offspring count: when the average number of pregnancies and children was higher, multiple births made only a slight relative difference in the number of children, but with fewer pregnancies, multiple births can make the relative difference large. In one hypothetical scenario, couple 1 has ten children and couple 2 has eight, but in both couples the woman undergoes eight pregnancies; the difference in offspring count is proportionally small. In another, couple 1 has three children (triplets from a single pregnancy) and couple 2 has one child from one pregnancy; here the proportional difference in offspring count is much higher. A trait in women known to greatly increase the chance of multiple births is being tall (presumably the chance is further increased when the woman is very tall relative to both women and men). Yet very tall women are not viewed as a desirable phenotype by the majority of people, and the phenotype of very tall women has not been highly favored in the past. Nevertheless, values placed on traits can change over time.
One such example is homosexuality. In Ancient Greece, what would now be called homosexuality, primarily between a man and a young boy, was not uncommon and was not outlawed. Homosexuality later became widely condemned, and attitudes towards it have eased again in modern times.
Acknowledgement and study of human differences does have a wide range of uses, such as tailoring the size and shape of manufactured items. See Ergonomics.
Controversies of sociocultural and personal implications
Possession of above average amounts of some abilities is valued by most societies. Some of the traits that societies try to measure by perception are intellectual aptitude in the form of ability to learn, artistic prowess, strength, endurance, agility, and resilience.
Each individual's distinctive differences, even the negatively valued or stigmatized ones, are usually considered an essential part of self-identity.
Membership or status in a social group may depend on having specific values for certain attributes. It is not unusual for people to deliberately try to amplify or exaggerate differences, or to conceal or minimize them, for a variety of reasons. Examples of practices designed to minimize differences include tanning, hair straightening, skin bleaching, plastic surgery, orthodontia, and growth hormone treatment for extreme shortness. Conversely, male-female differences are enhanced and exaggerated in most societies.
In some societies, such as the United States, circumcision is practiced on a majority of males, as is sex reassignment on intersex infants, with substantial emphasis on cultural and religious norms. Circumcision is highly controversial: although it offers health benefits, such as a lower chance of urinary tract infections, STDs, and penile cancer, it is a drastic procedure that is not medically mandatory, and many argue that the decision should be left until the child is old enough to decide for himself. Similarly, sex reassignment surgery offers psychiatric health benefits to transgender people but is seen as unethical by some Christians, especially when performed on children.
Much controversy surrounds the assigning or distinguishing of some variations, especially since differences between groups in a society or between societies are often debated as part of either a person's "essential" nature or a socially constructed attribution. For example, there has long been a debate among sex researchers about whether sexual orientation is due to evolution and biology (the "essentialist" position) or a result of mutually reinforcing social perceptions and behavioral choices (the "constructivist" perspective). The essentialist position emphasizes inclusive fitness as the reason homosexuality has not been eradicated by natural selection: gay or lesbian individuals may help the fitness of their siblings and siblings' children, thus increasing their own inclusive fitness and maintaining the trait in the population. Biological theories for same-gender sexual orientation include genetic influences, neuroanatomical factors, and hormone differences, but research so far has not provided conclusive results. In contrast, the social constructivist position argues that sexuality is a result of culture and has originated from language or dialogue about sex. Mating choices are the product of cultural values, such as youth and attractiveness, and homosexuality varies greatly between cultures and societies. In this view, complexities, such as sexual orientation changing during the course of one's lifespan, are accounted for.
Controversy also surrounds the boundaries of "wellness", "wholeness", or "normality". In some cultures, differences in physical appearance, mental ability, and even sex can exclude one from traditions, ceremonies, or other important events, such as religious service. For example, in India, menstruation is not only a taboo subject but is also traditionally considered shameful; depending on beliefs, a woman who is menstruating is not allowed to cook or enter spiritual areas because she is considered "impure" and "cursed". In Western culture, there has been large-scale renegotiation of the social significance of variations that reduce a person's ability to perform one or more functions. Laws have been passed to alleviate the reduction of social opportunity available to those with disabilities. The concept of "differently abled" has been pushed by those persuading society to see limited incapacities as a human difference of less negative value.
Ideologies of superiority and inferiority
The extreme exercise of social valuation of human difference is in the definition of "human." Differences between humans can lead to an individual's "nonhuman" status, in the sense of withholding identification, charity, and social participation. Views of these variations can change enormously between cultures over time. For example, nineteenth-century European and American ideas of race and eugenics culminated in the attempts of the Nazi-led German society of the 1930s to deny not just reproduction, but life itself to a variety of people with "differences" attributed in part to biological characteristics. Hitler and Nazi leaders wanted to create a "master race" consisting of only Aryans, or blue-eyed, blonde-haired, and tall individuals, thus discriminating and attempting to exterminate those who didn't fit into this ideal.
Contemporary controversy continues over "what kind of human" a fetus or child with a significant disability is. On one end are people who would argue that Down syndrome is not a disability but a mere "difference", and on the other are those who consider it such a calamity as to assume that such a child is better off "not born". For example, in India and China, being female is so widely considered a negatively valued human difference that female infanticide occurs at rates that severely affect the proportion of the sexes.
Common human variations
See also
Anthropometry
Human genetic variation
Human physical appearance
Mendelian traits in humans
Quantitative trait locus
Human behaviour genetics
Big Five personality traits
References
Bibliography
Further reading
Books
Humans
Comparisons
Geomorphology
Geomorphology (from Ancient Greek: γῆ (gê) 'earth'; μορφή (morphḗ) 'form'; and λόγος (lógos) 'study') is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics and to predict changes through a combination of field observations, physical experiments and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field.
Overview
Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence, and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere.
The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes.
In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers, how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans. Soils geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets.
Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital for quantitatively describing the form of the Earth's surface and include differential GPS, remotely sensed digital terrain models, and laser scanning, which are used to quantify and study the surface and to generate illustrations and maps.
Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection.
Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of effects of wind, fluvial, glacial, mass wasting, meteor impact, tectonics and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of surfaces of other planets.
History
Other than some notable exceptions in antiquity, geomorphology is a relatively young science, growing along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development.
Ancient geomorphology
The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, Greek philosopher Aristotle speculated that due to sediment transport into the sea, eventually those seas would fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity published in Arabic at Basra during the 10th century also discussed the cyclical changing positions of land and sea with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering.
Another early theory of geomorphology was devised by Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean. Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the pre-historic location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosions of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he promoted the theory of gradual climate change over centuries of time once ancient petrified bamboos were found to be preserved underground in the dry, northern climate zone of Yanzhou, which is now modern day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. Scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time as would hills and valleys. Daoist alchemist Ge Hong (284–364) created a fictional dialogue where the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees.
Early modern geomorphology
The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr in his The Scientific Study of Scenery considered his book as, 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'.
An early popular geomorphic model was the geographical cycle or cycle of erosion model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature.
In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis' dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection.
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions.
During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography later was considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology" in order to suggest an analytical approach to landscapes rather than a descriptive one.
Climatic geomorphology
During the age of New Imperialism in the late 19th century, European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time, these observations were systematized in a search for regional patterns. Climate thus emerged as the prime factor for explaining landform distribution on a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology, which by the mid-20th century was considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe; in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion.
Climatic geomorphology was criticized in a 1969 review article by process geomorphologist D.R. Stoddart. The criticism proved "devastating", sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, for being linked to Davisian geomorphology, and for allegedly neglecting the fact that the physical laws governing processes are the same across the globe. In addition, some conceptions of climatic geomorphology, like the idea that chemical weathering is more rapid in tropical than in cold climates, proved not to be straightforwardly true.
Quantitative and process geomorphology
Geomorphology began to be put on a solid quantitative footing in the middle of the 20th century. Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers, including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve, began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America yet received only a few citations prior to 2000 (they are examples of "sleeping beauties"), when a marked increase in quantitative geomorphology research occurred.
Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition.
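One widely used building block of such landscape evolution models is the stream-power incision rule, in which erosion rate scales with drainage area and channel slope. The Python sketch below evolves a 1-D river profile under uplift and stream-power erosion; the rule itself is standard in the literature, but the parameter values, the Hack's-law-style area scaling, and the grid set-up here are illustrative assumptions rather than a calibrated model:

```python
import numpy as np

# 1-D river-profile evolution with the stream-power incision rule:
#   dz/dt = U - K * A**m * S**n
# U: uplift rate, K: erodibility, A: drainage area, S: downstream slope.
dx = 1000.0                           # node spacing (m)
x = np.arange(1, 101) * dx            # distance downstream of the divide (m)
z = np.linspace(1000.0, 0.0, x.size)  # initial linear profile (m)
A = 1.0 * x**1.8                      # area from a Hack's-law-style scaling (m^2)

U, K, m, n = 1e-3, 1e-5, 0.5, 1.0     # illustrative parameter values
dt = 50.0                             # time step (yr)

for _ in range(2000):
    # Downstream slope at each node; adverse slopes clipped to zero.
    S = np.maximum(-np.diff(z, append=z[-1]) / dx, 0.0)
    erosion = K * A**m * S**n
    z[:-1] += (U - erosion[:-1]) * dt  # last node held fixed as base level
```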
In Sweden Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography".
Contemporary geomorphology
Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally, recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include:
1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form. Instead, dynamic changes of the landscape are now seen as an essential part of their nature.
2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results.
According to Karna Lidmar-Bergström, regional geography has, since the 1990s, no longer been accepted by mainstream scholarship as a basis for geomorphological studies.
Although its importance has diminished, climatic geomorphology continues to exist as a field of study producing relevant research. More recently, concerns over global warming have led to a renewed interest in the field.
Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven. The inherent difficulties of the model have instead led geomorphological research to advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value, respectively.
Processes
Geomorphically relevant processes generally fall into
(1) the production of regolith by weathering and erosion,
(2) the transport of that material, and
(3) its eventual deposition.
Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor or extraterrestrial impact.
Aeolian processes
Aeolian processes pertain to the activity of the winds and more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts.
Biological processes
The interaction of living organisms with landforms, or biogeomorphologic processes, can be of many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence very many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars.
Fluvial processes
Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements.
As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic happens to be the most common, occurring when the underlying stratum is stable (without faulting). Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces.
Glacial processes
Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin.
The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that have been relatively recently glaciated but are no longer may still show elevated landscape change rates compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost.
Hillslope processes
Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus.
Ongoing hillslope processes can change the topography of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas.
On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes.
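Slow hillslope transport such as soil creep is commonly idealized as linear diffusion of elevation. The sketch below degrades an initial scarp with an explicit finite-difference scheme; the diffusivity, geometry, and boundary treatment are illustrative assumptions, not values for any particular hillslope:

```python
import numpy as np

# Soil creep idealized as linear diffusion of elevation: dz/dt = D * d2z/dx2.
D = 0.01                              # hillslope diffusivity (m^2/yr), assumed
dx = 1.0                              # node spacing (m)
x = np.arange(0.0, 101.0, dx)
z = np.where(x < 50.0, 10.0, 0.0)     # initial 10 m scarp

dt = 0.4 * dx**2 / (2 * D)            # explicit scheme stable for dt <= dx^2/(2D)
for _ in range(5000):                 # roughly 100,000 yr of smoothing
    curvature = (np.roll(z, -1) - 2 * z + np.roll(z, 1)) / dx**2
    curvature[0] = curvature[-1] = 0.0  # pin both ends at fixed elevation
    z += D * dt * curvature
```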
Igneous processes
Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenate landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also constitute substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding and then solidifying at depth can cause either uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces.
Tectonic processes
Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric that more or less controls what kind of local morphology tectonics can shape. Earthquakes can, in terms of minutes, submerge large areas of land forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts. Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production.
Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long term (> million year), large scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy as hotter, less dense, mantle rocks displace cooler, denser, mantle rocks at depth in the Earth.
Marine processes
Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology.
Overlap with other fields
There is a considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near surface agents, and is typically studied by soil scientists and environmental chemists, but is an essential component of geomorphology because it is what provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important entities in the high latitudes and meaning that they set the conditions in the headwaters of mountain-born streams; glaciology therefore is important in geomorphology.
See also
Bioerosion
Biogeology
Biogeomorphology
Biorhexistasy
British Society for Geomorphology
Coastal biogeomorphology
Coastal erosion
Concepts and Techniques in Modern Geography
Drainage system (geomorphology)
Erosion prediction
Geologic modelling
Geomorphometry
Geotechnics
Hack's law
Hydrologic modeling, behavioral modeling in hydrology
List of landforms
Orogeny
Physiographic regions of the world
Sediment transport
Soil morphology
Soils retrogression and degradation
Stream capture
Thermochronology
References
Further reading
Ialenti, Vincent. "Envisioning Landscapes of Our Very Distant Future" NPR Cosmos & Culture. 9/2014.
Bierman, P.R.; Montgomery, D.R. Key Concepts in Geomorphology. New York: W. H. Freeman, 2013.
Ritter, D.F.; Kochel, R.C.; Miller, J.R. Process Geomorphology. London: Waveland Pr Inc, 2011.
Hargitai, H.; Page, D.; Canon-Tapia, E.; Rodrigue, C.M. Classification and Characterization of Planetary Landforms. In: Hargitai H., Kereszturi Á., eds. Encyclopedia of Planetary Landforms. Cham: Springer, 2015.
External links
The Geographical Cycle, or the Cycle of Erosion (1899)
Geomorphology from Space (NASA)
British Society for Geomorphology
Earth sciences
Geology
Geological processes
Gravity
Physical geography
Planetary science
Seismology
Topography
Rigour
Rigour (British English) or rigor (American English; see spelling differences) describes a condition of stiffness or strictness. These constraints may be environmentally imposed, such as "the rigours of famine"; logically imposed, such as mathematical proofs which must maintain consistent answers; or socially imposed, such as the process of defining ethics and law.
Etymology
"Rigour" comes to English through old French (13th c., Modern French rigueur) meaning "stiffness", which itself is based on the Latin rigorem (nominative rigor) "numbness, stiffness, hardness, firmness; roughness, rudeness", from the verb rigere "to be stiff". The noun was frequently used to describe a condition of strictness or stiffness, which arises from a situation or constraint either chosen or experienced passively. For example, the title of the book Theologia Moralis Inter Rigorem et Laxitatem Medi roughly translates as "mediating theological morality between rigour and laxness". The book details, for the clergy, situations in which they are obligated to follow church law exactly, and in which situations they can be more forgiving yet still considered moral. Rigor mortis translates directly as the stiffness (rigor) of death (mortis), again describing a condition which arises from a certain constraint (death).
Intellectualism
Intellectual rigour is a process of thought which is consistent, does not contain self-contradiction, and takes into account the entire scope of available knowledge on the topic. It actively avoids logical fallacy. Furthermore, it requires a sceptical assessment of the available knowledge. If a topic or case is dealt with in a rigorous way, it typically means that it is dealt with in a comprehensive, thorough and complete way, leaving no room for inconsistencies.
Scholarly method describes the different approaches or methods which may be taken to apply intellectual rigour on an institutional level to ensure the quality of information published. An example of intellectual rigour assisted by a methodical approach is the scientific method, in which a person will produce a hypothesis based on what they believe to be true, then construct experiments in order to prove that hypothesis wrong. This method, when followed correctly, helps to guard against circular reasoning and other fallacies which frequently plague conclusions within academia. Other disciplines, such as philosophy and mathematics, employ their own structures to ensure intellectual rigour. Each method requires close attention to criteria for logical consistency, as well as to all relevant evidence and possible differences of interpretation. At an institutional level, peer review is used to validate intellectual rigour.
Honesty
Intellectual rigour is a subset of intellectual honesty—a practice of thought in which one's convictions are kept in proportion to valid evidence. Intellectual honesty is an unbiased approach to the acquisition, analysis, and transmission of ideas. A person is being intellectually honest when he or she, knowing the truth, states that truth, regardless of outside social/environmental pressures. It is possible to doubt whether complete intellectual honesty exists—on the grounds that no one can entirely master his or her own presuppositions—without doubting that certain kinds of intellectual rigour are potentially available. The distinction certainly matters greatly in debate, if one wishes to say that an argument is flawed in its premises.
Politics and law
The setting for intellectual rigour does tend to assume a principled position from which to advance or argue. An opportunistic tendency to use any argument at hand is not very rigorous, although very common in politics, for example. Arguing one way one day, and another later, can be defended by casuistry, i.e. by saying the cases are different.
In the legal context, for practical purposes, the facts of cases do always differ. Case law can therefore be at odds with a principled approach; and intellectual rigour can seem to be defeated. This defines a judge's problem with uncodified law. Codified law poses a different problem, of interpretation and adaptation of definite principles without losing the point; here applying the letter of the law, with all due rigour, may on occasion seem to undermine the principled approach.
Mathematics
Mathematical rigour can apply to methods of mathematical proof and to methods of mathematical practice (thus relating to other interpretations of rigour).
Mathematical proof
Mathematical rigour is often cited as a kind of gold standard for mathematical proof. Its history traces back to Greek mathematics, especially to Euclid's Elements.
Until the 19th century, Euclid's Elements was seen as extremely rigorous and profound, but in the late 19th century, Hilbert (among others) realized that the work left certain assumptions implicit—assumptions that could not be proved from Euclid's Axioms (e.g. two circles can intersect in a point, some point is within an angle, and figures can be superimposed on each other). This was contrary to the idea of rigorous proof where all assumptions need to be stated and nothing can be left implicit. New foundations were developed using the axiomatic method to address this gap in rigour found in the Elements (e.g., Hilbert's axioms, Birkhoff's axioms, Tarski's axioms).
During the 19th century, the term "rigorous" began to be used to describe increasing levels of abstraction when dealing with calculus which eventually became known as mathematical analysis. The works of Cauchy added rigour to the older works of Euler and Gauss. The works of Riemann added rigour to the works of Cauchy. The works of Weierstrass added rigour to the works of Riemann, eventually culminating in the arithmetization of analysis. Starting in the 1870s, the term gradually came to be associated with Cantorian set theory.
Mathematical rigour can be modelled as amenability to algorithmic proof checking. Indeed, with the aid of computers, it is possible to check some proofs mechanically. Formal rigour is the introduction of high degrees of completeness by means of a formal language where such proofs can be codified using set theories such as ZFC (see automated theorem proving).
Published mathematical arguments have to conform to a standard of rigour, but are written in a mixture of symbolic and natural language. In this sense, written mathematical discourse is a prototype of formal proof. Often, a written proof is accepted as rigorous although it might not be formalised as yet. The reason often cited by mathematicians for writing informally is that completely formal proofs tend to be longer and more unwieldy, thereby obscuring the line of argument. An argument that appears obvious to human intuition may in fact require fairly long formal derivations from the axioms. A particularly well-known example is how in Principia Mathematica, Whitehead and Russell have to expend a number of lines of rather opaque effort in order to establish that, indeed, it is sensical to say: "1+1=2". In short, comprehensibility is favoured over formality in written discourse.
Still, advocates of automated theorem provers may argue that the formalisation of proof does improve the mathematical rigour by disclosing gaps or flaws in informal written discourse. When the correctness of a proof is disputed, formalisation is a way to settle such a dispute as it helps to reduce misinterpretations or ambiguity.
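To make the contrast concrete: in a modern proof assistant, the statement that Whitehead and Russell laboured over can be stated and machine-checked in a single line. The following is a minimal sketch in Lean 4 (the choice of prover is illustrative, not prescribed by the discussion above); the proof term rfl succeeds because both sides of the equation evaluate to the same natural number.

```lean
-- The proposition Principia Mathematica laboured to establish.
-- `rfl` (reflexivity) closes the goal: `1 + 1` and `2` reduce to
-- the same numeral, so the equality holds by computation.
example : 1 + 1 = 2 := rfl
```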
Physics
The role of mathematical rigour in relation to physics is twofold:
First, there is the general question, sometimes called Wigner's Puzzle, "how it is that mathematics, quite generally, is applicable to nature?" Some scientists believe that its record of successful application to nature justifies the study of mathematical physics.
Second, there is the question regarding the role and status of mathematically rigorous results and relations. This question is particularly vexing in relation to quantum field theory, where computations often produce infinite values for which a variety of non-rigorous work-arounds have been devised.
Both aspects of mathematical rigour in physics have attracted considerable attention in the philosophy of science.
Education
Rigour in the classroom is a hotly debated topic amongst educators. Even the semantic meaning of the word is contested.
Generally speaking, classroom rigour consists of multi-faceted, challenging instruction and correct placement of the student. Students excelling in formal operational thought tend to excel in classes for gifted students. Students who have not reached that final stage of cognitive development, according to developmental psychologist Jean Piaget, can build upon those skills with the help of a properly trained teacher.
Rigour in the classroom is commonly called "rigorous instruction". It is instruction that requires students to construct meaning for themselves, impose structure on information, integrate individual skills into processes, operate within but at the outer edge of their abilities, and apply what they learn in more than one context and to unpredictable situations.
See also
Intellectual honesty
Intellectual dishonesty
Pedant
Scientific method
Self-deception
Sophistry
Cognitive rigor
References
Philosophical logic
Downshifting (lifestyle)
In social behavior, downshifting is a trend in which individuals adopt simpler lives, stepping back from what critics call the "rat race".
The long-term effects of downshifting can include an escape from what has been described as economic materialism, as well as a reduction in the "stress and psychological expense that may accompany economic materialism". This new social trend emphasizes finding an improved balance between leisure and work, while also focusing life goals on personal fulfillment and building personal relationships instead of the all-consuming pursuit of economic success.
Downshifting, as a concept, shares characteristics with simple living. However, it is distinguished as an alternative form by its focus on moderate change and concentration on an individual comfort level and a gradual approach to living. In the 1990s, this new form of simple living began appearing in the mainstream media, and has continually grown in popularity among populations living in industrial societies, especially the United States, the United Kingdom, New Zealand, and Australia, as well as Russia.
Values and motives
"Down-shifters" refers to people who adopt long-term voluntary simplicity in their lives. A few of the main practices of down-shifters include accepting less money for fewer hours worked, while placing an emphasis on consuming less in order to reduce their ecological footprint. One of the main results of these practices is being able to enjoy more leisure time in the company of others, especially loved ones.
The primary motivations for downshifting are gaining leisure time, escaping the work-and-spend cycle, and removing the clutter of unnecessary possessions. The personal goals of downshifting are simple: to reach a holistic self-understanding and a satisfying meaning in life.
Because of its personalized nature and emphasis on many minor changes, rather than complete lifestyle overhaul, downshifting attracts participants from across the socioeconomic spectrum. An intrinsic consequence of downshifting is increased time for non-work-related activities, which, combined with the diverse demographics of downshifters, cultivates higher levels of civic engagement and social interaction.
The scope of participation is limitless, because all members of society—adults, children, businesses, institutions, organizations, and governments—are able to downshift even if many demographic strata do not start "high" enough to "down"-shift.
In practice, down-shifting involves a variety of behavioral and lifestyle changes. The majority of these down-shifts are voluntary choices. Natural life course events, such as the loss of a job, or birth of a child can prompt involuntary down-shifting. There is also a temporal dimension, because a down-shift could be either temporary or permanent.
Methods
Work and income
The most common form of down-shifting is work (or income) down-shifting. Down-shifting is fundamentally based on dissatisfaction with the conditions and consequences of the workplace environment. The philosophy of work-to-live replaces the social ideology of live-to-work. Reorienting economic priorities shifts the work–life balance away from the workplace.
Economically, work downshifts are defined in terms of reductions in either actual or potential income, work hours, and spending levels. Following a path of earnings that is lower than the established market path is a downshift in potential earnings in favor of gaining other non-material benefits.
On an individual level, work downshifting is a voluntary reduction in annual income. Downshifters desire meaning in life outside of work and therefore opt to reduce their working hours, which in turn lowers the amount earned. Simply not working overtime, or taking a half-day a week for leisure, are work downshifts.
Career downshifts are another way of downshifting economically and entail lowering previous aspirations of wealth, a promotion or higher social status. Quitting a job to work locally in the community, from home or to start a business are examples of career downshifts. Although more radical, these changes do not mean stopping work altogether.
Many reasons are cited by workers for this choice and usually center on a personal cost–benefit analysis of current working situations and desired extracurricular activities. High stress, pressure from employers to increase productivity, and long commutes can be factors that contribute to the costs of being employed. If the down-shifter wants more non-material benefits like leisure time, a healthy family life, or personal freedom then switching jobs could be a desirable option.
Work down-shifting may also bring considerable health benefits, including a healthier retirement. People are retiring later in life than previous generations did. Data from the Health and Retirement Study, conducted by the Health and Retirement Study Survey Research Center, suggest that women gain long-term health benefits from down-shifting to part-time hours sustained over many years. Men, by contrast, tend to fare worse in health terms if they work part time from middle age until retirement; men who down-shift to part-time hours at age 60 to 65, however, benefit from continuing part-time work through semi-retirement, even past the age of 70. This illustrates how flexible working policies can support health in retirement.
Spending habits
Another aspect of down-shifting is being a conscious consumer or actively practicing alternative forms of consumption. Proponents of down-shifting point to consumerism as a primary source of stress and dissatisfaction because it creates a society of individualistic consumers who measure both social status and general happiness by an unattainable quantity of material possessions. Instead of buying goods for personal satisfaction, consumption down-shifting, that is, purchasing only the necessities, is a way to focus on quality of life rather than quantity.
This realignment of spending priorities promotes the functional utility of goods over their ability to convey status which is evident in downshifters being generally less brand-conscious. These consumption habits also facilitate the option of working and earning less because annual spending is proportionally lower. Reducing spending is less demanding than more extreme downshifts in other areas, like employment, as it requires only minor lifestyle changes.
Policies that enable downshifting
Unions, business, and governments could implement more flexible working hours, part-time work, and other non-traditional work arrangements that enable people to work less, while still maintaining employment. Small business legislation, reduced filing requirements and reduced tax rates encourage small-scale individual entrepreneurship and therefore help individuals quit their jobs altogether and work for themselves on their own terms.
Environmental consequences
The catch-phrase of International Downshifting Week is "Slow Down and Green Up". Whether intentional or not, the choices and practices of down-shifters generally nurture environmental health, because they reject the fast-paced lifestyle fueled by fossil fuels and adopt more sustainable lifestyles. The latent function of consumption down-shifting is to reduce, to some degree, the carbon footprint of the individual down-shifter. An example is shifting from a corporate, suburban rat-race lifestyle to a small-scale, eco-friendly farming lifestyle.
Down-shifting geographically
Downshifting geographically is a relocation to a smaller, rural, or more slow-paced community. This is often a response to the hectic pace of life and stresses in urban areas. It is a significant change but does not bring total removal from mainstream culture.
Sociopolitical implications
Although downshifting is primarily motivated by personal desire and not by a conscious political stance, it does define societal overconsumption as the source of much personal discontent. By redefining life satisfaction in non-material terms, downshifters assume an alternative lifestyle but continue to coexist in a society and political system preoccupied with the economy. In general, downshifters are politically apathetic because mainstream politicians mobilize voters by proposing governmental solutions to periods of financial hardship and economic recessions. This economic rhetoric is meaningless to downshifters who have forgone worrying about money.
In the United States, the UK, and Australia, a significant minority, approximately 20 to 25 percent, of these countries' citizens identify themselves in some respect as downshifters. Downshifting is not an isolated or unusual choice. Politics still centers around consumerism and unrestricted growth, but downshifting values, such as family priorities and workplace regulation, appear in political debates and campaigns.
Like downshifters, the Cultural Creatives is another social movement whose ideology and practices diverge from mainstream consumerism and according to Paul Ray, are followed by at least a quarter of U.S. citizens.
In his book In Praise of Slowness, Carl Honoré relates followers of downshifting and simple living to the global slow movement.
The significant number and diversity of downshifters are a challenge to economic approaches to improving society. The rise in popularity of downshifting and similar, post-materialist ideologies represents unorganized social movements without political aspirations or motivating grievances. This is a result of their grassroots nature and relatively inconspicuous, non-confrontational subcultures.
See also
Anti-consumerism
Conspicuous consumption
Degrowth
Demotion
Downsizing
Eco-communalism
Ecological economics
Ecovillage
Ethical consumerism
FIRE movement
Frugality
Homesteading
Intentional community
Intentional living
Minimalism / Simple living
Permaculture
Slow living
Sustainable living
Transition towns
Workaholic
References
Further reading
Blanchard, Elisa A. (1994). Beyond Consumer Culture: A Study of Revaluation and Voluntary Action. Unpublished thesis, Tufts University.
Bull, Andy. (1998). Downshifting: The Ultimate Handbook. London: Thorsons
Etzioni, Amitai. (1998). Voluntary simplicity: Characterization, select psychological implications, and societal consequences. Journal of Economic Psychology 19:619–43.
Hamilton, Clive (November 2003). Downshifting in Britain: A sea-change in the pursuit of happiness. The Australia Institute Discussion Paper No. 58. 42p.
Hamilton, C., Mail, E. (January 2003). Downshifting in Australia: A sea-change in the pursuit of happiness. The Australia Institute Discussion Paper No. 50. 12p. ISSN 1322-5421
Juniu, Susana (2000). Downshifting: Regaining the Essence of Leisure, Journal of Leisure Research, 1st Quarter, Vol. 32 Issue 1, p69, 5p.
Levy, Neil (2005). Downshifting and Meaning in Life, Ratio, Vol. 18, Issue 2, 176–89.
J. B. MacKinnon (2021). The Day the World Stops Shopping: How ending consumerism gives us a better life and a greener world, Penguin Random House.
Mazza, P. (1997). Keeping it simple. Reflections 36 (March): 10–12.
Nelson, Michelle R., Paek, Hye-Jin, Rademacher, Mark A. (2007). Downshifting Consumer = Upshifting Citizen?: An Examination of a Local Freecycle Community. The Annals of the American Academy of Political and Social Science, 141–56.
Saltzman, Amy. (1991). Downshifting: Reinventing Success on a Slower Track. New York: Harper Collins.
Schor, Juliet B (1998). Voluntary Downshifting in the 1990s. In E. Houston, J. Stanford, & L. Taylor (Eds.), Power, Employment, and Accumulation: Social Structures in Economic Theory and Practice (pp. 66–79). Armonk, NY: M. E. Sharpe, 2003. Text from University of Chapel Hill Library Collections.
External links
The Homemade Life, a web forum aimed at promoting simple living
Official website for the Slow Movement
How To Be Rich Today – downloadable guide to Downshifting (UK)
Personal finance
Simple living
Subcultures
Waste minimisation
Work–life balance
Production (economics)
Production is the process of combining various inputs, both material (such as metal, wood, glass, or plastics) and immaterial (such as plans or knowledge) in order to create output. Ideally this output will be a good or service which has value and contributes to the utility of individuals. The area of economics that focuses on production is called production theory, and it is closely related to the consumption (or consumer) theory of economics.
The production process and output directly result from productively utilising the original inputs (or factors of production). Known as primary producer goods or services, land, labour, and capital are deemed the three fundamental factors of production. These primary inputs are not significantly altered in the output process, nor do they become a whole component in the product. Under classical economics, materials and energy are categorised as secondary factors as they are byproducts of land, labour and capital. Delving further, primary factors encompass all of the resourcing involved, such as land, which includes the natural resources above and below the soil. However, there is a difference between human capital and labour. In addition to the common factors of production, in different economic schools of thought, entrepreneurship and technology are sometimes considered evolved factors in production. It is common practice that several forms of controllable inputs are used to achieve the output of a product. The production function assesses the relationship between the inputs and the quantity of output.
Economic welfare is created in a production process, meaning all economic activities that aim directly or indirectly to satisfy human wants and needs. The degree to which the needs are satisfied is often accepted as a measure of economic welfare. In production there are two features which explain increasing economic welfare: the improving quality-price ratio of goods and services, together with increasing incomes from growing and more efficient market production, and growth in total production, which raises GDP.
The most important forms of production are:
market production
public production
household production
In order to understand the origin of economic well-being, we must understand these three production processes. All of them produce commodities which have value and contribute to the well-being of individuals.
The satisfaction of needs originates from the use of the commodities which are produced. Need satisfaction increases when the quality-price ratio of the commodities improves and more satisfaction is achieved at less cost. Improving the quality-price ratio of commodities is for a producer an essential way to improve the competitiveness of products, but gains of this kind distributed to customers cannot be measured with production data. Improving product competitiveness often means lower prices and thus lower producer income, to be compensated by higher sales volume.
Economic well-being also increases due to income gains from increasing production. Market production is the only production form that creates and distributes incomes to stakeholders. Public production and household production are financed by the incomes generated in market production. Thus market production has a double role: producing goods and services, and creating income. Because of this double role, market production is the "primus motor" of economic well-being.
Elements of production economics
The underlying assumption of production is that maximisation of profit is the key objective of the producer. The difference between the value of production (the output value) and the costs associated with the factors of production is the calculated profit. Efficiency, technological, pricing, behavioural, consumption and productivity changes are a few of the critical elements that significantly influence production economics.
Efficiency
Within production, efficiency plays a tremendous role in achieving and maintaining full capacity, rather than producing at an inefficient (not optimal) level. Changes in efficiency relate to the positive shift in current inputs, such as technological advancements, relative to the producer's position. Efficiency is calculated as the actual output divided by the maximum potential output. For example, if the applied inputs have the potential to produce 100 units but are producing 60 units, the efficiency of the output is 0.6, or 60%. Furthermore, economies of scale identify the point at which production efficiency (returns) can increase, decrease or remain constant.
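The efficiency calculation above is simple enough to state as a one-line function. The following is a minimal sketch in Python; the function name is ours, and the sample figures are the ones from the example in the text (60 units produced against a 100-unit potential).

```python
def efficiency(actual_output: float, potential_output: float) -> float:
    """Production efficiency: actual output relative to maximum potential output."""
    return actual_output / potential_output

# The example from the text: inputs that could produce 100 units yield only 60.
print(efficiency(60, 100))  # 0.6, i.e. 60% efficient
```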
Technological changes
This element sees the ongoing adaptation of technology at the frontier of the production function. Technological change is a significant determinant in advancing economic production results, as seen throughout economic history, for example in the Industrial Revolution. It is therefore critical to continue to monitor its effects on production and to promote the development of new technologies.
Behaviour, consumption and productivity
There is a strong correlation between the producer's behaviour and the underlying assumption of production: both assume profit-maximising behaviour. Production can increase, decrease or remain constant as a result of consumption, among various other factors. The relationship between production and consumption mirrors the economic theory of supply and demand. Accordingly, when production decreases more than factor consumption, this results in reduced productivity. Conversely, a production increase over consumption is seen as increased productivity.
Pricing
In an economic market, production input and output prices are assumed to be set from external factors as the producer is the price taker. Hence, pricing is an important element in the real-world application of production economics. Should the pricing be too high, the production of the product is simply unviable. There is also a strong link between pricing and consumption, with this influencing the overall production scale.
As a source of economic well-being
In principle there are two main activities in an economy, production and consumption. Similarly, there are two kinds of actors, producers and consumers. Well-being is made possible by efficient production and by the interaction between producers and consumers. In the interaction, consumers can be identified in two roles both of which generate well-being. Consumers can be both customers of the producers and suppliers to the producers. The customers' well-being arises from the commodities they are buying and the suppliers' well-being is related to the income they receive as compensation for the production inputs they have delivered to the producers.
Stakeholders of production
Stakeholders of production are persons, groups or organizations with an interest in a producing company. Economic well-being originates in efficient production and it is distributed through the interaction between the company's stakeholders. The stakeholders of companies are economic actors which have an economic interest in a company. Based on the similarities of their interests, stakeholders can be classified into three groups in order to differentiate their interests and mutual relations. The three groups are as follows:
Customers
The customers of a company are typically consumers, other market producers or producers in the public sector. Each of them has their individual production functions. Due to competition, the price-quality-ratios of commodities tend to improve and this brings the benefits of better productivity to customers. Customers get more for less. In households and the public sector this means that more need satisfaction is achieved at less cost. For this reason, the productivity of customers can increase over time even though their incomes remain unchanged.
Suppliers
The suppliers of companies are typically producers of materials, energy, capital, and services. They all have their individual production functions. The changes in prices or qualities of supplied commodities have an effect on both actors' (company and suppliers) production functions. We come to the conclusion that the production functions of the company and its suppliers are in a state of continuous change.
Producers
Those participating in production, i.e., the labour force, society and owners, are collectively referred to as the producer community or producers. The producer community generates income from developing and growing production.
The well-being gained through commodities stems from the price-quality relations of the commodities. Due to competition and development in the market, the price-quality relations of commodities tend to improve over time. Typically the quality of a commodity goes up and the price goes down over time. This development favourably affects the production functions of customers. Customers get more for less. Consumer customers get more satisfaction at less cost. This type of well-being generation can only partially be calculated from the production data.
The producer community (labour force, society, and owners) earns income as compensation for the inputs they have delivered to the production. When the production grows and becomes more efficient, the income tends to increase. In production this brings about an increased ability to pay salaries, taxes and profits. The growth of production and improved productivity generate additional income for the producing community. Similarly, the high income level achieved in the community is a result of the high volume of production and its good performance. This type of well-being generation, as mentioned earlier, can be reliably calculated from the production data.
Main processes of a producing company
A producing company can be divided into sub-processes in different ways; yet, the following five are identified as main processes, each with a logic, objectives, theory and key figures of its own. It is important to examine each of them individually, yet, as a part of the whole, in order to be able to measure and understand them. The main processes of a company are as follows:
real process
income distribution process
production process
monetary process
market value process
Production output is created in the real process, gains of production are distributed in the income distribution process and these two processes constitute the production process. The production process and its sub-processes, the real process and income distribution process occur simultaneously, and only the production process is identifiable and measurable by the traditional accounting practices. The real process and income distribution process can be identified and measured by extra calculation, and this is why they need to be analyzed separately in order to understand the logic of production and its performance.
Real process generates the production output from input, and it can be described by means of the production function. It refers to a series of events in production in which production inputs of different quality and quantity are combined into products of different quality and quantity. Products can be physical goods, immaterial services and most often combinations of both. The characteristics that the producer builds into the product imply surplus value for the consumer, and on the basis of the market price this value is shared by the consumer and the producer in the marketplace. This is the mechanism through which surplus value originates for the consumer and the producer alike. Surplus values to customers cannot be measured from any production data. Instead, the surplus value to a producer can be measured. It can be expressed both in terms of nominal and real values. The real surplus value to the producer is an outcome of the real process, real income, and measured proportionally it means productivity.
The concept of the "real process", in the sense of the quantitative structure of the production process, was introduced in Finnish management accounting in the 1960s. Since then it has been a cornerstone of Finnish management accounting theory. (Riistama et al. 1971)
Income distribution process of the production refers to a series of events in which the unit prices of constant-quality products and inputs alter causing a change in income distribution among those participating in the exchange. The magnitude of the change in income distribution is directly proportionate to the change in prices of the output and inputs and to their quantities. Productivity gains are distributed, for example, to customers as lower product sales prices or to staff as higher income pay.
The production process consists of the real process and the income distribution process. A result and a criterion of success of the owner is profitability. The profitability of production is the share of the real process result the owner has been able to keep to himself in the income distribution process. Factors describing the production process are the components of profitability, i.e., returns and costs. They differ from the factors of the real process in that the components of profitability are given at nominal prices whereas in the real process the factors are at periodically fixed prices.
Monetary process refers to events related to financing the business. Market value process refers to a series of events in which investors determine the market value of the company in the investment markets.
Production growth and performance
Economic growth may be defined as a production increase of an output of a production process. It is usually expressed as a growth percentage depicting growth of the real production output. The real output is the real value of products produced in a production process and when we subtract the real input from the real output we get the real income. The real output and the real income are generated by the real process of production from the real inputs.
The real process can be described by means of the production function. The production function is a graphical or mathematical expression showing the relationship between the inputs used in production and the output achieved. Both graphical and mathematical expressions are presented and demonstrated. The production function is a simple description of the mechanism of income generation in production process. It consists of two components. These components are a change in production input and a change in productivity.
The figure illustrates an income generation process (exaggerated for clarity). The Value T2 (value at time 2) represents the growth in output from Value T1 (value at time 1). Each time of measurement has its own graph of the production function for that time (the straight lines). The output measured at time 2 is greater than the output measured at time one for both of the components of growth: an increase of inputs and an increase of productivity. The portion of growth caused by the increase in inputs is shown on line 1 and does not change the relation between inputs and outputs. The portion of growth caused by an increase in productivity is shown on line 2 with a steeper slope. So increased productivity represents greater output per unit of input.
The growth of production output does not reveal anything about the performance of the production process. The performance of production measures production's ability to generate income. Because the income from production is generated in the real process, we call it the real income. Similarly, as the production function is an expression of the real process, we could also call it “income generated by the production function”.
The real income generation follows the logic of the production function. Two components can also be distinguished in the income change: the income growth caused by an increase in production input (production volume) and the income growth caused by an increase in productivity. The income growth caused by increased production volume is determined by moving along the production function graph. The income growth corresponding to a shift of the production function is generated by the increase in productivity. The change of real income thus signifies a move from point 1 to point 2 on the production function (above). When we want to maximize the production performance we have to maximize the income generated by the production function.
The sources of productivity growth and production volume growth are explained as follows. Productivity growth is seen as the key economic indicator of innovation. The successful introduction of new products and new or altered processes, organization structures, systems, and business models generates growth of output that exceeds the growth of inputs. This results in growth in productivity or output per unit of input. Income growth can also take place without innovation through replication of established technologies. With only replication and without innovation, output will increase in proportion to inputs. (Jorgenson et al. 2014, 2) This is the case of income growth through production volume growth.
Jorgenson et al. (2014, 2) give an empiric example. They show that the great preponderance of economic growth in the US since 1947 involves the replication of existing technologies through investment in equipment, structures, and software and expansion of the labor force. Further, they show that innovation accounts for only about twenty percent of US economic growth.
In the case of a single production process (described above) the output is defined as an economic value of products and services produced in the process. When we want to examine an entity of many production processes we have to sum up the value-added created in the single processes. This is done in order to avoid the double accounting of intermediate inputs. Value-added is obtained by subtracting the intermediate inputs from the outputs. The most well-known and used measure of value-added is the GDP (Gross Domestic Product). It is widely used as a measure of the economic growth of nations and industries.
Absolute (total) and average income
The production performance can be measured as an average or an absolute income. Expressing performance both in average (avg.) and absolute (abs.) quantities is helpful for understanding the welfare effects of production. For measurement of the average production performance, we use the known productivity ratio
Real output / Real input.
The absolute income of performance is obtained by subtracting the real input from the real output as follows:
Real income (abs.) = Real output – Real input
The growth of the real income is the increase of the economic value that can be distributed between the production stakeholders. With the aid of the production model we can perform the average and absolute accounting in one calculation. Maximizing production performance requires using the absolute measure, i.e. the real income and its derivatives as a criterion of production performance.
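The two performance measures can be written down directly. The following is a minimal Python sketch; the function names are ours, and the figures are hypothetical, assuming real output and real input have already been valued at fixed prices.

```python
def productivity(real_output: float, real_input: float) -> float:
    """Average performance measure: the productivity ratio."""
    return real_output / real_input

def real_income(real_output: float, real_input: float) -> float:
    """Absolute performance measure: real output less real input."""
    return real_output - real_input

# Hypothetical figures at fixed prices, for illustration only.
print(productivity(150.0, 100.0))  # 1.5
print(real_income(150.0, 100.0))   # 50.0
```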
Maximizing productivity also leads to the phenomenon called "jobless growth". This refers to economic growth as a result of productivity growth but without creation of new jobs and new incomes from them. A practical example illustrates the case. When a jobless person obtains a job in market production we may assume it is a low productivity job. As a result, average productivity decreases but the real income per capita increases. Furthermore, the well-being of the society also grows. This example reveals the difficulty of interpreting the total productivity change correctly. The combination of volume increase and total productivity decrease leads in this case to improved performance, because we are in the "diminishing returns" area of the production function. If we are on the "increasing returns" part of the production function, the combination of production volume increase and total productivity increase leads to improved production performance. Unfortunately, we do not know in practice on which part of the production function we are. Therefore, a correct interpretation of a performance change is obtained only by measuring the real income change.
Production function
In the short run, the production function assumes there is at least one fixed factor input. The production function relates the quantity of factor inputs used by a business to the amount of output that results. There are three measures of production and productivity. The first is total output (total product). It is straightforward to measure how much output is being produced in manufacturing industries like motor vehicles. In tertiary industries such as service or knowledge industries, it is harder to measure output since it is less tangible.
The second way of measuring production and efficiency is average output, which measures output per worker employed or output per unit of capital. The third measure of production and efficiency is the marginal product: the change in output from increasing the number of workers used by one, or from adding one more machine to the production process in the short run.
The law of diminishing marginal returns points out that as more units of a variable input are added to fixed amounts of land and capital, the marginal change in total output first rises and then falls.
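A small numerical sketch illustrates the law. In the following Python fragment the output schedule is hypothetical; it is chosen only so that the marginal product first rises and then falls as workers are added to fixed land and capital.

```python
# Hypothetical short-run schedule: total output as workers are added
# to fixed land and capital (figures are illustrative only).
total_product = [0, 10, 24, 39, 50, 56, 58]  # output at 0..6 workers

marginal_product = [total_product[n] - total_product[n - 1]
                    for n in range(1, len(total_product))]
average_product = [total_product[n] / n for n in range(1, len(total_product))]

print(marginal_product)  # [10, 14, 15, 11, 6, 2] -- rises, then diminishes
print(average_product)   # output per worker at each employment level
```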
The length of time required for all the factors of production to become flexible varies from industry to industry. For example, in the nuclear power industry, it takes many years to commission new nuclear plant and capacity.
Real-world examples of firms' short-run production functions may not match the smooth curves of textbook production theory. To improve efficiency and promote the structural transformation of economic growth, it is important to establish industrial development models suited to the industry concerned. At the same time, a shift should be made to models that capture the typical characteristics of the industry, such as specific technological changes and significant differences in the substitutability of inputs before and after investment.
Production models
A production model is a numerical description of the production process and is based on the prices and the quantities of inputs and outputs. There are two main approaches to operationalize the concept of production function. We can use mathematical formulae, which are typically used in macroeconomics (in growth accounting) or arithmetical models, which are typically used in microeconomics and management accounting. We do not present the former approach here but refer to the survey “Growth accounting” by Hulten 2009. Also see an extensive discussion of various production models and their estimations in Sickles and Zelenyuk (2019, Chapter 1-2).
We use here arithmetical models because they are like the models of management accounting, illustrative and easily understood and applied in practice. Furthermore, they are integrated to management accounting, which is a practical advantage. A major advantage of the arithmetical model is its capability to depict production function as a part of production process. Consequently, production function can be understood, measured, and examined as a part of production process.
There are different production models according to different interests. Here we use a production income model and a production analysis model in order to demonstrate production function as a phenomenon and a measurable quantity.
Production income model
Success in a going concern can be measured in many ways, and there are no criteria that are universally applicable to success. Nevertheless, there is one criterion by which we can generalise the rate of success in production: the ability to produce surplus value. As a criterion of profitability, surplus value refers to the difference between returns and costs, taking into consideration the costs of equity in addition to the costs included in the profit and loss statement as usual. Surplus value indicates that the output has more value than the sacrifice made for it; in other words, the output value is higher than the value (production costs) of the used inputs. If the surplus value is positive, the owner's profit expectation has been surpassed.
The table presents a surplus value calculation. We call this set of production data a basic example and we use the data through the article in illustrative production models. The basic example is a simplified profitability calculation used for illustration and modelling. Even as reduced, it comprises all phenomena of a real measuring situation and most importantly the change in the output-input mix between two periods. Hence, the basic example works as an illustrative “scale model” of production without any features of a real measuring situation being lost. In practice, there may be hundreds of products and inputs but the logic of measuring does not differ from that presented in the basic example.
In this context, we define the quality requirements for the production data used in productivity accounting. The most important criterion of good measurement is the homogenous quality of the measurement object. If the object is not homogenous, then the measurement result may include changes in both quantity and quality but their respective shares will remain unclear. In productivity accounting this criterion requires that every item of output and input must appear in accounting as being homogenous. In other words, the inputs and the outputs are not allowed to be aggregated in measuring and accounting. If they are aggregated, they are no longer homogenous and hence the measurement results may be biased.
Both the absolute and relative surplus value have been calculated in the example. Absolute value is the difference of the output and input values and the relative value is their relation, respectively. The surplus value calculation in the example is at a nominal price, calculated at the market price of each period.
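Since the table itself is not reproduced here, the following Python sketch shows only the form of the calculation; the nominal output and input values are hypothetical placeholders, not the figures of the basic example.

```python
def surplus_value(output_value: float, input_value: float) -> tuple[float, float]:
    """Return the absolute (difference) and relative (ratio) surplus value."""
    return output_value - input_value, output_value / input_value

# Hypothetical nominal values for a single period, for illustration only.
absolute, relative = surplus_value(1200.0, 1000.0)
print(absolute, relative)  # 200.0 1.2
```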
Production analysis model
A model used here is a typical production analysis model by help of which it is possible to calculate the outcome of the real process, income distribution process and production process. The starting point is a profitability calculation using surplus value as a criterion of profitability. The surplus value calculation is the only valid measure for understanding the connection between profitability and productivity or understanding the connection between real process and production process. A valid measurement of total productivity necessitates considering all production inputs, and the surplus value calculation is the only calculation to conform to the requirement. If we omit an input in productivity or income accounting, this means that the omitted input can be used unlimitedly in production without any cost impact on accounting results.
Accounting and interpreting
The process of calculating is best understood by applying the term ceteris paribus, i.e. "all other things being the same": only the impact of one changing factor at a time is introduced to the phenomenon being examined. Therefore, the calculation can be presented as a process advancing step by step. First, the impacts of the income distribution process are calculated, and then the impacts of the real process on the profitability of the production.
The first step of the calculation is to separate the impacts of the real process and the income distribution process, respectively, from the change in profitability (285.12 – 266.00 = 19.12). This takes place by simply creating one auxiliary column (4) in which a surplus value calculation is compiled using the quantities of Period 1 and the prices of Period 2. In the resulting profitability calculation, Columns 3 and 4 depict the impact of a change in income distribution process on the profitability and in Columns 4 and 7 the impact of a change in real process on the profitability.
The accounting results are easily interpreted and understood. We see that the real income has increased by 58.12 units, of which 41.12 units come from productivity growth and the remaining 17.00 units from production volume growth. The total increase of real income (58.12) is distributed to the stakeholders of production: in this case, 39.00 units to the customers and the suppliers of inputs, and the remaining 19.12 units to the owners.
Here we can make an important conclusion. Income formation of production is always a balance between income generation and income distribution. The income change created in a real process (i.e. by production function) is always distributed to the stakeholders as economic values within the review period. Accordingly, the changes in real income and income distribution are always equal in terms of economic value.
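The balance between income generation and income distribution can be checked directly from the figures quoted above. The following Python sketch uses exactly the numbers given in the text for the basic example; only the variable names are ours.

```python
# Figures quoted in the text (basic example).
productivity_effect = 41.12     # income growth from productivity
volume_effect = 17.00           # income growth from production volume
to_customers_suppliers = 39.00  # distributed via prices of outputs and inputs
to_owners = 19.12               # change in profitability (285.12 - 266.00)

real_income_change = productivity_effect + volume_effect
distributed = to_customers_suppliers + to_owners

# Income generation and income distribution are always in balance.
assert abs(real_income_change - distributed) < 1e-9
print(real_income_change)  # 58.12
```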
Based on the accounted changes of productivity and production volume values we can explicitly conclude on which part of the production function the production is. The rules of interpretations are the following:
The production is on the part of “increasing returns” on the production function, when
productivity and production volume increase or
productivity and production volume decrease
The production is on the part of “diminishing returns” on the production function, when
productivity decreases and volume increases or
productivity increases and volume decreases.
In the basic example, the combination of volume growth (+17.00) and productivity growth (+41.12) reports explicitly that the production is on the part of “increasing returns” on the production function (Saari 2006 a, 138–144).
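The interpretation rules reduce to a comparison of signs: production is on the increasing-returns part of the production function exactly when the productivity change and the volume change move in the same direction. A minimal Python sketch (the function name is ours):

```python
def returns_region(productivity_change: float, volume_change: float) -> str:
    """Classify the position on the production function, per the rules above."""
    if productivity_change * volume_change > 0:
        return "increasing returns"   # both rise or both fall
    return "diminishing returns"      # the changes move in opposite directions

print(returns_region(41.12, 17.00))  # 'increasing returns', as in the basic example
```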
Another production model (Production Model Saari 1989) also gives details of the income distribution (Saari 2011,14). Because the accounting techniques of the two models are different, they give differing, although complementary, analytical information. The accounting results are, however, identical. We do not present the model here in detail but we only use its detailed data on income distribution, when the objective functions are formulated in the next section.
Objective functions
An efficient way to improve the understanding of production performance is to formulate different objective functions according to the objectives of the different interest groups. Formulating the objective function necessitates defining the variable to be maximized (or minimized). After that other variables are considered as constraints or free variables. The most familiar objective function is profit maximization which is also included in this case. Profit maximization is an objective function that stems from the owner's interest and all other variables are constraints in relation to maximizing of profits in the organization.
The procedure for formulating objective functions
The procedure for formulating different objective functions, in terms of the production model, is introduced next. In the income formation from production the following objective functions can be identified:
Maximizing the real income
Maximizing the producer income
Maximizing the owner income.
These cases are illustrated using the numbers from the basic example. The following symbols are used in the presentation:
The equal sign (=) signifies the starting point of the computation or the result of computing and the plus or minus sign (+ / -) signifies a variable that is to be added or subtracted from the function. A producer means here the producer community, i.e. labour force, society and owners.
Objective function formulations can be expressed in a single calculation which concisely illustrates the logic of the income generation, the income distribution and the variables to be maximized.
The calculation resembles an income statement starting with the income generation and ending with the income distribution. The income generation and the distribution are always in balance so that their amounts are equal. In this case, it is 58.12 units. The income which has been generated in the real process is distributed to the stakeholders during the same period. There are three variables that can be maximized. They are the real income, the producer income and the owner income. Producer income and owner income are practical quantities because they are addable quantities and they can be computed quite easily. Real income is normally not an addable quantity and in many cases it is difficult to calculate.
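The nesting of the three maximands can be sketched numerically. In the following Python fragment, the total real income change (58.12) and the owner income change (19.12) are the figures from the basic example; the split of the remaining 39.00 units between customers and the other members of the producer community (labour force and society) is hypothetical, since the text gives only the aggregate.

```python
# Figures from the basic example in the text.
real_income_change = 58.12    # maximand 1: real income
owner_income_change = 19.12   # maximand 3: owner income

# Hypothetical split of the remaining 39.00 units, for illustration only.
labour_and_society_share = 15.00
customer_share = 39.00 - labour_and_society_share

# Maximand 2: producer income = labour force + society + owners.
producer_income_change = labour_and_society_share + owner_income_change
print(customer_share)           # 24.0 (hypothetical)
print(producer_income_change)   # 34.12 (hypothetical split, real totals)
```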
The dual approach for the formulation
Here we have to add that the change of real income can also be computed from the changes in income distribution. We have to identify the unit price changes of outputs and inputs and calculate their profit impacts (i.e. unit price change x quantity). The change of real income is the sum of these profit impacts and the change of owner income. This approach is called the dual approach because the framework is seen in terms of prices instead of quantities (ONS 3, 23).
The dual approach has long been recognized in growth accounting, but its interpretation has remained unclear. The following question has remained unanswered: "Quantity based estimates of the residual are interpreted as a shift in the production function, but what is the interpretation of the price-based growth estimates?" (Hulten 2009, 18). We have demonstrated above that the real income change is achieved by quantitative changes in production, and that the income distribution change to the stakeholders is its dual. In this case, the duality means that the same accounting result is obtained by accounting the change of the total income generation (real income) and by accounting the change of the total income distribution.
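In symbols, the dual approach described above can be written as follows. This is a sketch of the accounting identity stated in the text, where the price change of each output or input i is multiplied by its quantity, with price increases of inputs entering as negative profit impacts.

```latex
% Dual (price-based) accounting of the real income change:
% the sum of profit impacts of unit price changes plus the
% change in owner income.
\Delta\,\mathrm{RealIncome}
  \;=\; \sum_{i}\Delta p_{i}\,q_{i}
  \;+\; \Delta\,\mathrm{OwnerIncome}
```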
See also
Adaptive strategies
A list of production functions
Assembly line
Johann Heinrich von Thünen
Division of labour
Industrial Revolution
Cost-of-production theory of value
Computer-aided manufacturing
DIRTI 5
Distribution (economics)
Factors of production
Outline of industrial organization
Outline of production
Output (economics)
Price
Prices of production
Pricing strategies
Product (business)
Production function
Production theory basics
Production possibility frontier
Productive and unproductive labour
Productive forces
Productivism
Productivity
Productivity model
Productivity improving technologies (historical)
Microeconomics
Mode of production
Mass production
Second Industrial Revolution
Footnotes
References
Sickles, R., and Zelenyuk, V. (2019). Measurement of Productivity and Efficiency: Theory and Practice. Cambridge: Cambridge University Press.
Further references and external links
Moroney, J. R. (1967) "Cobb-Douglas production functions and returns to scale in US manufacturing industry", Western Economic Journal, vol 6, no 1, December 1967, pp 39–51.
Pearl, D. and Enos, J. (1975) "Engineering production functions and technological progress", The Journal of Industrial Economics, vol 24, September 1975, pp 55–72.
Robinson, J. (1953) "The production function and the theory of capital", Review of Economic Studies, vol XXI, 1953, pp. 81–106
Anwar Shaikh, "Laws of Production and Laws of Algebra: The Humbug Production Function", in The Review of Economics and Statistics, Volume 56(1), February 1974, pp. 115–120. Wayback Machine
Anwar Shaikh, "Laws of Production and Laws of Algebra – Humbug II", in Growth, Profits and Property ed. by Edward J. Nell. Cambridge, Cambridge University Press, 1980. Wayback Machine
Anwar Shaikh, "Nonlinear Dynamics and Pseudo-Production Functions", 2008.
Shephard, R (1970). Theory of cost and production functions, Princeton University Press, Princeton NJ.
Thompson, A. (1981). Economics of the firm, Theory and practice, 3rd edition, Prentice Hall, Englewood Cliffs.
Elmer G. Wiens: Production Functions – Models of the Cobb-Douglas, C.E.S., Trans-Log, and Diewert Production Functions.
Production economics
Intensive animal farming
Intensive animal farming, industrial livestock production, and macro-farms, also known (particularly by opponents) as factory farming, is a type of intensive agriculture, specifically an approach to animal husbandry designed to maximize production while minimizing costs. To achieve this, agribusinesses keep livestock such as cattle, poultry, and fish at high stocking densities, at large scale, and using modern machinery, biotechnology, and global trade. The main products of this industry are meat, milk and eggs for human consumption.
There is a continuing debate over the benefits, risks and ethics of intensive animal farming. The issues include the efficiency of food production, animal welfare, health risks and the environmental impact (e.g. agricultural pollution and climate change). There are also concerns as to whether intensive animal farming is sustainable in the long run, given its costs in resources. Intensive animal farming is more controversial than local farming and meat consumption in general. Advocates of factory farming claim that it has led to the betterment of housing, nutrition, and disease control over the last twenty years; however, these claims have been debunked. It has been shown that factory farming harms wildlife and the environment, creates health risks, abuses animals, exploits workers (in particular undocumented workers), and raises severe ethical issues.
History
Intensive animal farming is a relatively recent development in the history of agriculture, utilizing scientific discoveries and technological advances to enable changes in agricultural methods that increase production. Innovations from the late 19th century generally parallel developments in mass production in other industries in the latter part of the Industrial Revolution. The discovery of vitamins and their role in animal nutrition, in the first two decades of the 20th century, led to vitamin supplements, which allowed chickens to be raised indoors. The discovery of antibiotics and vaccines facilitated raising livestock in larger numbers by reducing disease. Chemicals developed for use in World War II gave rise to synthetic pesticides. Developments in shipping networks and technology have made long-distance distribution of agricultural produce feasible.
Agricultural production across the world doubled four times between 1820 and 1975 (1820 to 1920; 1920 to 1950; 1950 to 1965; and 1965 to 1975), a sixteenfold increase, to feed a global population of one billion human beings in 1800 and 6.5 billion in 2002. During the same period, the number of people involved in farming dropped as the process became more automated. In the 1930s, 24 percent of the American population worked in agriculture compared to 1.5 percent in 2002; in 1940, each farm worker supplied 11 consumers, whereas in 2002, each worker supplied 90 consumers.
The era of factory farming in Britain began in 1947 when a new Agriculture Act granted subsidies to farmers to encourage greater output by introducing new technology, in order to reduce Britain's reliance on imported meat. The United Nations writes that "intensification of animal production was seen as a way of providing food security." In 1966, the United States, United Kingdom and other industrialized nations, commenced factory farming of beef and dairy cattle and domestic pigs. As a result, farming became concentrated on fewer larger farms. For example, in 1967, there were one million pig farms in America; as of 2002, there were 114,000. In 1992, 28% of American pigs were raised on farms selling >5,000 pigs per year; as of 2022 this grew to 94.5%. From its American and West European heartland, intensive animal farming became globalized in the later years of the 20th century and is still expanding and replacing traditional practices of stock rearing in an increasing number of countries. In 1990 intensive animal farming accounted for 30% of world meat production and by 2005, this had risen to 40%.
Process
The aim is to produce large quantities of meat, eggs, or milk at the lowest possible cost. Food is supplied in place. Methods employed to maintain health and improve production may include the use of disinfectants, antimicrobial agents, anthelmintics, hormones and vaccines; protein, mineral and vitamin supplements; frequent health inspections; biosecurity; and climate-controlled facilities. Physical restraints, for example, fences or creeps, are used to control movement or actions regarded as undesirable. Breeding programs are used to produce animals more suited to the confined conditions and able to provide a consistent food product.
Industrial production was estimated to account for 39 percent of global meat production and 50 percent of total egg production. In the US, according to its National Pork Producers Council, 80 million of its 95 million pigs slaughtered each year are reared in industrial settings.
The major concentration of the industry occurs at the slaughter and meat processing phase, with only four companies slaughtering and processing 81 percent of cows, 73 percent of sheep, 57 percent of pigs and 50 percent of chickens. This concentration at the slaughter phase may be in large part due to regulatory barriers that may make it financially difficult for small slaughter plants to be built, maintained or remain in business. Factory farming may be no more beneficial to livestock producers than traditional farming because it appears to contribute to overproduction that drives down prices. Through "forward contracts" and "marketing agreements", meatpackers are able to set the price of livestock long before they are ready for production. These strategies often cause farmers to lose money, as half of all U.S. family farming operations did in 2007.
Many of the nation's livestock producers would like to market livestock directly to consumers, but with limited USDA-inspected slaughter facilities, livestock grown locally typically cannot be slaughtered and processed locally.
Small farmers are often absorbed into factory farm operations, acting as contract growers for the industrial facilities. In the case of poultry contract growers, farmers are required to make costly investments in construction of sheds to house the birds, buy required feed and drugs – often settling for slim profit margins, or even losses.
Research has shown that many immigrant workers in concentrated animal farming operations (CAFOs) in the United States receive little to no job-specific training or safety and health information regarding the hazards associated with these jobs. Workers with limited English proficiency are significantly less likely to receive any work-related training, since it is often only provided in English. As a result, many workers do not perceive their jobs as dangerous. This causes inconsistent personal protective equipment (PPE) use, and can lead to workplace accidents and injuries. Immigrant workers are also less likely to report any workplace hazards and injuries.
Types
Intensive farms hold large numbers of animals, typically cows, pigs, turkeys, geese, or chickens, often indoors and at high densities.
Intensive production of livestock and poultry is widespread in developed nations. For 2002–2003, the United Nations' Food and Agriculture Organization (FAO) estimates of industrial production as a percentage of global production were 7 percent for beef and veal, 0.8 percent for sheep and goat meat, 42 percent for pork, and 67 percent for poultry meat.
Chickens
The major milestone in 20th-century poultry production was the discovery of vitamin D, which made it possible to keep chickens in confinement year-round. Before this, chickens did not thrive during the winter (due to lack of sunlight), and egg production, incubation, and meat production in the off-season were all very difficult, making poultry a seasonal and expensive proposition. Year-round production lowered costs, especially for broilers.
At the same time, egg production was increased by scientific breeding. After a few false starts (such as the Maine Experiment Station's failure at improving egg production), success was shown by Professor Dryden at the Oregon Experiment Station.
Improvements in production and quality were accompanied by lower labor requirements. In the 1930s through the early 1950s, 1,500 hens provided a full-time job for a farm family in America. By the late 1950s, egg prices had fallen so dramatically that farmers typically tripled the number of hens they kept, putting three hens into what had been a single-bird cage or converting their floor-confinement houses from a single deck of roosts to triple-decker roosts. Not long after this, prices fell still further and large numbers of egg farmers left the business. This fall in profitability was accompanied by a general fall in prices to the consumer, and poultry and eggs lost their status as luxury foods.
Robert Plamondon reports that the last family chicken farm in his part of Oregon, Rex Farms, had 30,000 layers and survived into the 1990s. However, the standard laying house of the current operators is around 125,000 hens.
The vertical integration of the egg and poultry industries was a late development, occurring after all the major technological changes had been in place for years (including the development of modern broiler rearing techniques, the adoption of the Cornish Cross broiler, the use of laying cages, etc.).
By the late 1950s, poultry production had changed dramatically. Large farms and packing plants could grow birds by the tens of thousands. Chickens could be sent to slaughterhouses for butchering and processing into prepackaged commercial products to be frozen or shipped fresh to markets or wholesalers. Meat-type chickens currently grow to market weight in six to seven weeks, whereas only fifty years ago it took three times as long. This is due to genetic selection and nutritional modifications (but not the use of growth hormones, which are illegal for use in poultry in the US and many other countries, and which would have no effect in any case). Once consumed only occasionally, chicken has become a common meat product within developed nations thanks to its wide availability and lower cost. Growing concerns over the cholesterol content of red meat in the 1980s and 1990s further increased consumption of chicken.
Today, eggs are produced on large egg ranches on which environmental parameters are well controlled. Chickens are exposed to artificial light cycles to stimulate egg production year-round. In addition, forced molting is commonly practiced in the US, in which manipulation of light and food access triggers molting, in order to increase egg size and production. Forced molting is controversial, and is prohibited in the EU.
On average, a chicken lays one egg a day, but not on every day of the year. This varies with the breed and time of year. In 1900, average egg production was 83 eggs per hen per year. In 2000, it was well over 300. In the United States, laying hens are butchered after their second egg-laying season. In Europe, they are generally butchered after a single season. The laying period begins when the hen is about 18–20 weeks old (depending on breed and season). Males of the egg-type breeds have little commercial value at any age, and all those not used for breeding (roughly fifty percent of all egg-type chickens) are killed soon after hatching. The old hens also have little commercial value. Thus, the main sources of poultry meat 100 years ago (spring chickens and stewing hens) have both been entirely supplanted by meat-type broiler chickens.
Pigs
In America, intensive piggeries (or hog lots) are a type of concentrated animal feeding operation (CAFO), specialized for the raising of domestic pigs up to slaughter weight. In this system, grower pigs are housed indoors in group-housing or straw-lined sheds, whilst pregnant sows are confined in sow stalls (gestation crates) and give birth in farrowing crates.
The use of sow stalls has resulted in lower production costs and concomitant animal welfare concerns. Many of the world's largest producers of pigs (such as U.S. and Canada) use sow stalls, but some nations (such as the UK) and U.S. states (such as Florida and Arizona) have banned them.
Intensive piggeries are generally large warehouse-like buildings. Indoor pig systems allow the pigs' condition to be monitored, ensuring minimum fatalities and increased productivity. Buildings are ventilated and their temperature regulated. Most domestic pig varieties are susceptible to heat stress, and all pigs lack sweat glands and cannot cool themselves. Pigs have a limited tolerance to high temperatures, and heat stress can lead to death. Maintaining a specific temperature within the pig-tolerance range also maximizes growth and the growth-to-feed ratio. In an intensive operation, pigs lack access to a wallow (mud), which is their natural cooling mechanism. Intensive piggeries control temperature through ventilation or drip water systems (dripping water to cool the animals).
Pigs are naturally omnivorous and are generally fed a combination of grains and protein sources (soybeans, or meat and bone meal). Larger intensive pig farms may be surrounded by farmland where feed-grain crops are grown. Alternatively, piggeries are reliant on the grains industry. Pig feed may be bought packaged or mixed on-site. The intensive piggery system, where pigs are confined in individual stalls, allows each pig to be allotted a portion of feed. The individual feeding system also facilitates individual medication of pigs through feed. This has more significance to intensive farming methods, as the close proximity to other animals enables diseases to spread more rapidly. To prevent disease spreading and encourage growth, drug programs such as antibiotics, vitamins, hormones and other supplements are pre-emptively administered.
Indoor systems, especially stalls and pens (i.e. 'dry', not straw-lined systems) allow for the easy collection of waste. In an indoor intensive pig farm, manure can be managed through a lagoon system or other waste-management system. However, odor remains a problem which is difficult to manage.
The way animals are housed in intensive systems varies. Breeding sows spend the bulk of their time in sow stalls during pregnancy or in farrowing crates with their litters, until they are sent to market.
Piglets often receive a range of treatments, including castration, tail docking (to reduce tail biting), teeth clipping (to reduce injury to their mother's nipples and gum disease, and to prevent later tusk growth), and ear notching (to assist identification). Treatments are usually performed without painkillers. Weak runts may be killed shortly after birth.
Piglets also may be weaned and removed from the sows at between two and five weeks old and placed in sheds. However, grower pigs – which comprise the bulk of the herd – are usually housed in alternative indoor housing, such as batch pens. During pregnancy, the use of a stall may be preferred as it facilitates feed-management and growth control. It also prevents pig aggression (e.g. tail biting, ear biting, vulva biting, food stealing). Group pens generally require higher stockmanship skills. Such pens will usually not contain straw or other material. Alternatively, a straw-lined shed may house a larger group (i.e. not batched) in age groups.
Cattle
Cattle are domesticated ungulates, a member of the family Bovidae, in the subfamily Bovinae, and descended from the aurochs (Bos primigenius). They are raised as livestock for their flesh (called beef and veal), dairy products (milk), leather and as draught animals. As of 2009–2010 it is estimated that there are 1.3–1.4 billion head of cattle in the world.
The most common interactions with cattle involve daily feeding, cleaning and milking. Many routine husbandry practices involve ear tagging, dehorning, loading, medical operations, vaccinations and hoof care, as well as training and sorting for agricultural shows and sales.
Once cattle obtain an entry-level weight, they are transferred from the range to a feedlot to be fed a specialized animal feed which consists of corn byproducts (derived from ethanol production), barley, and other grains as well as alfalfa and cottonseed meal. The feed also contains premixes composed of microingredients such as vitamins, minerals, chemical preservatives, antibiotics, fermentation products, and other essential ingredients that are purchased from premix companies, usually in sacked form, for blending into commercial rations. Because of the availability of these products, farmers using their own grain can formulate their own rations and be assured the animals are getting the recommended levels of minerals and vitamins.
There are many potential impacts on human health due to the modern cattle industrial agriculture system. There are concerns surrounding the antibiotics and growth hormones used, increased E. coli contamination, higher saturated fat contents in the meat because of the feed, and also environmental concerns.
As of 2010, in the U.S. 766,350 producers participate in raising beef. The beef industry is segmented, with the bulk of the producers participating in raising beef calves. Beef calves are generally raised in small herds, with over 90% of the herds having fewer than 100 head of cattle. Fewer producers participate in the finishing phase, which often occurs in a feedlot, but nonetheless there are 82,170 feedlots in the United States.
Aquaculture
Integrated multi-trophic aquaculture (IMTA), also called integrated aquaculture, is a practice in which the by-products (wastes) from one species are recycled to become inputs (fertilizers, food) for another, making aquaculture intensive. Fed aquaculture (e.g. fish and shrimp) is combined with inorganic extractive (e.g. seaweed) and organic extractive (e.g. shellfish) aquaculture to create balanced systems for environmental sustainability (biomitigation), economic stability (product diversification and risk reduction) and social acceptability (better management practices).
The system is multi-trophic because it makes use of species from different trophic or nutritional levels, unlike traditional aquaculture.
Ideally, the biological and chemical processes in such a system should balance. This is achieved through the appropriate selection and proportions of different species providing different ecosystem functions. The co-cultured species should not just be biofilters, but harvestable crops of commercial value. A working IMTA system should result in greater production for the overall system, based on mutual benefits to the co-cultured species and improved ecosystem health, even if the individual production of some of the species is lower compared to what could be reached in monoculture practices over a short-term period.
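To make the balancing idea concrete, here is a toy nutrient mass-balance sketch in Python. Every coefficient (feed nitrogen, retention and uptake fractions) is a hypothetical illustration, not data from any real IMTA system.

```python
# Toy nitrogen budget for an IMTA system (all coefficients hypothetical).
# Fed species release dissolved and particulate nutrients; extractive species
# recapture part of them, reducing the net discharge to the environment.

feed_input_kg_n = 100.0       # nitrogen added as fish feed
fish_retention = 0.35         # fraction retained in fish biomass
dissolved_fraction = 0.45     # fraction excreted as dissolved N
particulate_fraction = 0.20   # fraction lost as particulate organic N

seaweed_uptake = 0.50         # share of dissolved N absorbed by seaweed
shellfish_uptake = 0.40       # share of particulate N filtered by shellfish

dissolved_n = feed_input_kg_n * dissolved_fraction
particulate_n = feed_input_kg_n * particulate_fraction

recaptured = dissolved_n * seaweed_uptake + particulate_n * shellfish_uptake
discharged = (dissolved_n + particulate_n) - recaptured

print(f"N recaptured by extractive species: {recaptured:.1f} kg")  # 30.5 kg
print(f"N discharged to the environment:    {discharged:.1f} kg")  # 34.5 kg
```

Choosing species and stocking proportions so that the recaptured share rises is precisely the design problem described above; in a comparable monoculture, the entire 65 kg of waste nitrogen in this toy budget would be discharged.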
Regulation
In various jurisdictions, intensive animal production of some kinds is subject to regulation for environmental protection. In the United States, a Concentrated Animal Feeding Operation (CAFO) that discharges or proposes to discharge waste requires a permit and implementation of a plan for management of manure nutrients, contaminants, wastewater, etc., as applicable, to meet requirements pursuant to the federal Clean Water Act. Some data on regulatory compliance and enforcement are available. In 2000, the US Environmental Protection Agency published 5-year and 1-year data on environmental performance of 32 industries, with data for the livestock industry being derived mostly from inspections of CAFOs. The data pertain to inspections and enforcement mostly under the Clean Water Act, but also under the Clean Air Act and Resource Conservation and Recovery Act. Of the 32 industries, livestock production was among the top seven for environmental performance over the 5-year period, and was one of the top two in the final year of that period, where good environmental performance is indicated by a low ratio of enforcement orders to inspections. The five-year and final-year ratios of enforcement/inspections for the livestock industry were 0.05 and 0.01, respectively. Also in the final year, the livestock industry was one of the two leaders among the 32 industries in terms of having the lowest percentage of facilities with violations. In Canada, intensive livestock operations are subject to provincial regulation, with definitions of regulated entities varying among provinces. Examples include Intensive Livestock Operations (Saskatchewan), Confined Feeding Operations (Alberta), Feedlots (British Columbia), High-density Permanent Outdoor Confinement Areas (Ontario) and Feedlots or Parcs d'Engraissement (Manitoba). In Canada, intensive animal production, like other agricultural sectors, is also subject to various other federal and provincial requirements.
In the United States, farmed animals are excluded from half of all state animal cruelty laws, as well as from the federal Animal Welfare Act. The 28-hour law, enacted in 1873 and amended in 1994, states that when animals are being transported for slaughter, the vehicle must stop every 28 hours and the animals must be let out for exercise, food, and water. The United States Department of Agriculture claims that the law does not apply to birds. The Humane Slaughter Act is similarly limited. Originally passed in 1958, the Act requires that livestock be stunned into unconsciousness prior to slaughter. This Act also excludes birds, which make up more than 90 percent of the animals slaughtered for food, as well as rabbits and fish. Individual states all have their own animal cruelty statutes; however, many states have right-to-farm laws that exempt standard agricultural practices.
In the United States, regulators attempt to oversee farms with limited resources and time, and the most practical way to cover the most animals is to regulate the largest farms. In New York State, many animal feeding operations are not considered CAFOs since they have fewer than 300 cows. These farms are not regulated to the level that CAFOs are, which may lead to unchecked pollution and nutrient leaching. The EPA website illustrates the scale of this problem: in New York State's Bay watershed there are 247 animal feeding operations, and only 68 of them are State Pollutant Discharge Elimination System (SPDES) permitted CAFOs.
In Ohio, animal welfare organizations reached a negotiated settlement with farm organizations, while in California, voters approved Proposition 2, the Standards for Confining Farm Animals initiative, in 2008. Regulations have been enacted in other states, and plans are underway for referendum and lobbying campaigns elsewhere.
An action plan called the Utilization of Manure and Other Agricultural and Industrial Byproducts was proposed by the USDA in February 2009. The program's goal is to protect the environment and human and animal health by using manure in a safe and effective manner. The plan comprises four components:
Improving the Usability of Manure Nutrients through More Effective Animal Nutrition and Management
Maximizing the Value of Manure through Improved Collection, Storage, and Treatment Options
Utilizing Manure in Integrated Farming Systems to Improve Profitability and Protect Soil, Water, and Air Quality
Using Manure and Other Agricultural Byproducts as a Renewable Energy Source
In 2012 Australia's largest supermarket chain, Coles, announced that as of January 1, 2013, it would stop selling company-branded pork and eggs from animals kept in factory farms. The nation's other dominant supermarket chain, Woolworths, had already begun phasing out factory-farmed animal products. All of Woolworths' house-brand eggs are now cage-free, and by mid-2013 all of its pork was to come from farmers who operate stall-free farms.
In June 2021, the European Commission announced the plan of a ban on cages for a number of animals, including egg-laying hens, female breeding pigs, calves raised for veal, rabbits, ducks, and geese, by 2027.
Animal welfare
In the UK, the Farm Animal Welfare Council was set up by the government in 1979 to act as an independent advisor on animal welfare. It expresses its policy as five freedoms: from hunger and thirst; from discomfort; from pain, injury or disease; to express normal behavior; and from fear and distress.
There are differences around the world as to which practices are accepted, and regulations continue to change, with animal welfare being a strong driver for increased regulation. For example, the EU brought in further regulation to set maximum stocking densities for meat chickens by 2010, with the UK Animal Welfare Minister commenting, "The welfare of meat chickens is a major concern to people throughout the European Union. This agreement sends a strong message to the rest of the world that we care about animal welfare."
Factory farming is widely debated throughout Australia, with many people disagreeing with the methods and ways in which the animals in factory farms are treated. Animals are often under stress from being kept in confined spaces and will attack each other. In an effort to prevent injury leading to infection, their beaks, tails and teeth are removed. Many piglets die of shock after having their teeth and tails removed, because painkilling medicines are not used in these operations. Factory farms are a popular way to economize on space, with animals such as chickens being kept in spaces smaller than an A4 page.
For example, in the UK, debeaking of chickens is deprecated, but it is recognized as a method of last resort, seen as better than allowing vicious fighting and, ultimately, cannibalism. Between 60 and 70 percent of the six million breeding sows in the U.S. are confined during pregnancy, and for most of their adult lives, in gestation crates. According to pork producers and many veterinarians, sows will fight if housed in pens. The largest pork producer in the U.S. said in January 2007 that it would phase out gestation crates by 2017. They are being phased out in the European Union, with a ban effective in 2013 after the fourth week of pregnancy. With the evolution of factory farming, there has been growing awareness of the issues among the wider public, not least due to the efforts of animal rights and welfare campaigners. As a result, gestation crates, one of the more contentious practices, are the subject of laws in the U.S., Europe and around the world to phase out their use in favour of less confined practices.
Death rates for sows in the US have been increasing due to prolapse, which has been attributed to intensive breeding practices. Sows produce on average 23 piglets a year.
In the United States alone, over 20 million chickens, 330,000 pigs and 166,000 cattle die during transport to slaughterhouses annually, and some 800,000 pigs are incapable of walking upon arrival. This is often due to being exposed to extreme temperatures and trauma.
Demonstrations
From 2011 to 2014, between 15,000 and 30,000 people gathered each year in Berlin under the slogan "We are fed up!" to protest against industrial livestock production.
Human health impact
According to the U.S. Centers for Disease Control and Prevention (CDC), farms on which animals are intensively reared can cause adverse health reactions in farm workers. Workers may develop acute and chronic lung disease, musculoskeletal injuries, and may catch infections that transmit from animals to human beings (such as tuberculosis).
Pesticides are used to control organisms considered harmful, and they save farmers money by preventing product losses to pests. In the US, about a quarter of pesticides are used in houses, yards, parks, golf courses, and swimming pools, and about 70% are used in agriculture. However, pesticides can make their way into consumers' bodies, where they can cause health problems. One source of this is bioaccumulation in animals raised on factory farms.
"Studies have discovered an increase in respiratory, neurobehavioral, and mental illnesses among the residents of communities next to factory farms."
The CDC writes that chemical, bacterial, and viral compounds from animal waste may travel in the soil and water. Residents near such farms report problems such as unpleasant smell, flies and adverse health effects.
The CDC has identified a number of pollutants associated with the discharge of animal waste into rivers and lakes, and into the air. Antibiotic use in livestock may create antibiotic-resistant pathogens; parasites, bacteria, and viruses may be spread; ammonia, nitrogen, and phosphorus can reduce oxygen in surface waters and contaminate drinking water; pesticides and hormones may cause hormone-related changes in fish; animal feed and feathers may stunt the growth of desirable plants in surface waters and provide nutrients to disease-causing micro-organisms; trace elements such as arsenic and copper, which are harmful to human health, may contaminate surface waters.
Zoonotic diseases such as coronavirus disease 2019 (COVID-19), which caused the COVID-19 pandemic, are increasingly linked to environmental changes associated with intensive animal farming. The disruption of pristine forests driven by logging, mining, road building through remote places, rapid urbanisation and population growth is bringing people into closer contact with animal species they may never have been near before. According to Kate Jones, chair of ecology and biodiversity at University College London, the resulting transmission of disease from wildlife to humans is now "a hidden cost of human economic development".
Intensive farming may make the evolution and spread of harmful diseases easier. Many communicable animal diseases spread rapidly through densely spaced populations of animals, and crowding makes genetic reassortment more likely. However, small family farms are more likely to bring bird diseases, and more frequent contact between birds and people, into the mix, as happened in the 2009 flu pandemic.
In the European Union, growth hormones are banned on the basis that there is no way of determining a safe level. The UK has stated that in the event of the EU lifting the ban at some future date, to comply with a precautionary approach, it would only consider the introduction of specific hormones, proven on a case-by-case basis. In 1998, the EU banned the feeding to animals of antibiotics that were found to be valuable for human health. Furthermore, in 2006 the EU banned all drugs for livestock that were used for growth promotion purposes. As a result of these bans, the levels of antibiotic resistance in animal products and within the human population showed a decrease.
The international trade in animal products increases the risk of global transmission of virulent diseases such as swine fever, BSE, foot and mouth and bird flu.
In the United States, the use of antibiotics in livestock is still prevalent. The FDA reports that 80 percent of all antibiotics sold in 2009 were administered to livestock animals, and that many of these antibiotics are identical or closely related to drugs used for treating illnesses in humans. Consequently, many of these drugs are losing their effectiveness on humans, and the total healthcare costs associated with drug-resistant bacterial infections in the United States are between $16.6 billion and $26 billion annually.
Methicillin-resistant Staphylococcus aureus (MRSA) has been identified in pigs and humans, raising concerns about the role of pigs as reservoirs of MRSA for human infection. One study found that 20% of pig farmers in the United States and Canada in 2007 harbored MRSA. A second study revealed that 81% of Dutch pig farms had pigs with MRSA and that 39% of animals at slaughter carried the bug; all of these infections were resistant to tetracycline, and many were resistant to other antimicrobials. A more recent study found that MRSA ST398 isolates were less susceptible to tiamulin, an antimicrobial used in agriculture, than other MRSA or methicillin-susceptible S. aureus. Cases of MRSA have increased in livestock animals. CC398 is a new clone of MRSA that has emerged in animals and is found in intensively reared production animals (primarily pigs, but also cattle and poultry), where it can be transmitted to humans. Although dangerous to humans, CC398 is often asymptomatic in food-producing animals.
A 2011 nationwide study reported nearly half of the meat and poultry sold in U.S. grocery stores – 47 percent – was contaminated with S. aureus, and more than half of those bacteria – 52 percent – were resistant to at least three classes of antibiotics. Although Staph should be killed with proper cooking, it may still pose a risk to consumers through improper food handling and cross-contamination in the kitchen. The senior author of the study said, "The fact that drug-resistant S. aureus was so prevalent, and likely came from the food animals themselves, is troubling, and demands attention to how antibiotics are used in food-animal production today."
In April 2009, lawmakers in the Mexican state of Veracruz accused large-scale hog and poultry operations of being breeding grounds of a pandemic swine flu, although they did not present scientific evidence to support their claim. The swine flu, which quickly killed more than 100 infected persons in that area, appears to have begun in the vicinity of a Smithfield subsidiary pig CAFO (concentrated animal feeding operation).
Environmental impact
Intensive factory farming has grown to become the biggest threat to the global environment through the loss of ecosystem services and global warming. It is a major driver of global environmental degradation and biodiversity loss. Feed crops grown solely for animal use are often produced using intensive methods that involve significant amounts of fertiliser and pesticides. This sometimes results in the pollution of water, soil and air by agrochemicals and manure waste, and in the use of limited resources such as water and energy at unsustainable rates. Entomophagy is evaluated by many experts as a sustainable alternative to traditional livestock and, if intensively farmed on a large scale, would cause far less environmental damage.
Industrial production of pigs and poultry is an important source of greenhouse gas emissions and is predicted to become more so. On intensive pig farms, the animals are generally kept on concrete with slats or grates for the manure to drain through. The manure is usually stored in slurry form (slurry is a liquid mixture of urine and feces). During storage on farm, slurry emits methane and when manure is spread on fields it emits nitrous oxide and causes nitrogen pollution of land and water. Poultry manure from factory farms emits high levels of nitrous oxide and ammonia.
Large quantities and concentrations of waste are produced. Air quality and groundwater are at risk when animal waste is improperly recycled.
Environmental impacts of factory farming include:
Deforestation for animal feed production
Unsustainable pressure on land for production of high-protein/high-energy animal feed
Pesticide, herbicide and fertilizer manufacture and use for feed production
Unsustainable use of water for feed-crops, including groundwater extraction
Pollution of soil, water and air by nitrogen and phosphorus from fertiliser used for feed-crops and from manure
Land degradation (reduced fertility, soil compaction, increased salinity, desertification)
Loss of biodiversity due to eutrophication, acidification, pesticides and herbicides
Worldwide reduction of genetic diversity of livestock and loss of traditional breeds
Species extinctions due to livestock-related habitat destruction (especially feed-cropping)
See also
Animal–industrial complex
Animal rights
Animal rights movement
Animal welfare
Battery cage
Cattle Health Initiative
Cattle ranching
Controlled-atmosphere killing
Cultured meat
Dominion (2018 film)
Environmental vegetarianism
Environmental issues with soy
Farm Sanctuary
Factory farming divestment
Food systems
Fodder
Gestation crate
Golden Triangle of Meat-packing
Humane Slaughter Act
List of foodborne illness outbreaks
List of United States foodborne illness outbreaks
Meat Atlas
Mercy for Animals
Organic farming
Slash-and-burn
Small-scale agriculture
Veganism
References
External links
Animals Used for Food, Animal Ethics
National Agriculture Law Center – Animal Feeding Operations
Calls to reform food system: 'Factory farming belongs in a museum'. The Guardian. May 24, 2017.
Cruelty to animals
Ethically disputed business practices towards animals
Animals
Livestock
Meat industry
Poultry farming
Paleobiology
Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth.
Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees.
An investigator in this field is known as a paleobiologist.
Important research areas
Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology.
Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology.
Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology.
Paleovirology examines the evolutionary history of viruses on paleobiological timescales.
Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic.
Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life.
Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism.
Paleoichnology analyzes the tracks, borings, trails, burrows, impressions, and other trace fossils left by ancient organisms in order to gain insight into their behavior and ecology.
Stratigraphic paleobiology studies long-term secular changes, as well as the (short-term) bed-by-bed sequence of changes, in organismal characteristics and behaviors. See also stratification, sedimentary rocks and the geologic time scale.
Evolutionary developmental paleobiology examines the evolutionary aspects of the modes and trajectories of growth and development in the evolution of life – clades both extinct and extant. See also adaptive radiation, cladistics, evolutionary biology, developmental biology and phylogenetic tree.
Paleobiologists
The founder or "father" of modern paleobiology was Baron Franz Nopcsa (1877 to 1933), a Hungarian scientist trained at the University of Vienna. He initially termed the discipline "paleophysiology".
However, credit for coining the word paleobiology itself should go to Professor Charles Schuchert. He proposed the term in 1904 so as to initiate "a broad new science" joining "traditional paleontology with the evidence and insights of geology and isotopic chemistry."
On the other hand, Charles Doolittle Walcott, a Smithsonian adventurer, has been cited as the "founder of Precambrian paleobiology". Although best known as the discoverer of the mid-Cambrian Burgess shale animal fossils, in 1883 this American curator found the "first Precambrian fossil cells known to science" – a stromatolite reef then known as Cryptozoon algae. In 1899 he discovered the first acritarch fossil cells, a Precambrian algal phytoplankton he named Chuaria. Lastly, in 1914, Walcott reported "minute cells and chains of cell-like bodies" belonging to Precambrian purple bacteria.
Later 20th-century paleobiologists have also figured prominently in finding Archaean and Proterozoic eon microfossils: In 1954, Stanley A. Tyler and Elso S. Barghoorn described 2.1 billion-year-old cyanobacteria and fungi-like microflora at their Gunflint Chert fossil site. Eleven years later, Barghoorn and J. William Schopf reported finely-preserved Precambrian microflora at their Bitter Springs site of the Amadeus Basin, Central Australia.
In 1993, Schopf discovered O2-producing blue-green bacteria at his 3.5 billion-year-old Apex Chert site in Pilbara Craton, Marble Bar, in the northwestern part of Western Australia. So paleobiologists were at last homing in on the origins of the Precambrian "Oxygen catastrophe".
During the early part of the 21st century, two paleobiologists, Anjali Goswami and Thomas Halliday, studied the evolution of mammaliaforms during the Mesozoic and Cenozoic eras (between 299 million and 12,000 years ago). Additionally, they uncovered and studied the morphological disparity and rapid evolutionary rates of living organisms near the end of the Cretaceous period (145 million to 66 million years ago) and in the aftermath of its mass extinction.
Paleobiologic journals
Acta Palaeontologica Polonica
Biology and Geology
Historical Biology
PALAIOS
Palaeogeography, Palaeoclimatology, Palaeoecology
Paleobiology (journal)
Paleoceanography
Paleobiology in the general press
Books written for the general public on this topic include the following:
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds written by Thomas Halliday
Introduction to Paleobiology and the Fossil Record (2020) written by Michael J. Benton and David A. T. Harper
See also
History of biology
History of paleontology
History of invertebrate paleozoology
Molecular paleontology
Taxonomy of commonly fossilised invertebrates
Treatise on Invertebrate Paleontology
Footnotes
Derek E. G. Briggs and Peter R. Crowther, eds. (2003). Palaeobiology II. Malden, Massachusetts: Blackwell Publishing. The second edition of an acclaimed British textbook.
Robert L. Carroll (1998). Patterns and Processes of Vertebrate Evolution. Cambridge Paleobiology Series. Cambridge, England: Cambridge University Press. Applies paleobiology to the adaptive radiation of fishes and quadrupeds.
Matthew T. Carrano, Timothy Gaudin, Richard Blob, and John Wible, eds. (2006). Amniote Paleobiology: Perspectives on the Evolution of Mammals, Birds and Reptiles. Chicago: University of Chicago Press. Describes paleobiological research into land vertebrates of the Mesozoic and Cenozoic eras.
Robert B. Eckhardt (2000). Human Paleobiology. Cambridge Studies in Biology and Evolutionary Anthropology. Cambridge, England: Cambridge University Press. Connects paleoanthropology and archeology to the field of paleobiology.
Douglas H. Erwin (2006). Extinction: How Life on Earth Nearly Ended 250 Million Years Ago. Princeton: Princeton University Press. An investigation by a paleobiologist into the many theories as to what happened during the catastrophic Permian-Triassic transition.
Brian Keith Hall and Wendy M. Olson, eds. (2003). Keywords and Concepts in Evolutionary Biology. Cambridge, Massachusetts: Harvard University Press.
David Jablonski, Douglas H. Erwin, and Jere H. Lipps (1996). Evolutionary Paleobiology. Chicago: University of Chicago Press, 492 pages. A fine American textbook.
Masatoshi Nei and Sudhir Kumar (2000). Molecular Evolution and Phylogenetics. Oxford, England: Oxford University Press. Links DNA/RNA analysis to the evolutionary "tree of life" in paleobiology.
Donald R. Prothero (2004). Bringing Fossils to Life: An Introduction to Paleobiology. New York: McGraw Hill. An acclaimed book for the novice fossil-hunter and young adults.
Mark Ridley, ed. (2004). Evolution. Oxford, England: Oxford University Press. An anthology of analytical studies in paleobiology.
Raymond Rogers, David Eberth, and Tony Fiorillo (2007). Bonebeds: Genesis, Analysis and Paleobiological Significance. Chicago: University of Chicago Press. Concerns the fossils of vertebrates, especially land-dwelling tetrapods of the Mesozoic and Cenozoic eras.
Thomas J. M. Schopf, ed. (1972). Models in Paleobiology. San Francisco: Freeman, Cooper. A much-cited, seminal classic in the field discussing methodology and quantitative analysis.
Thomas J. M. Schopf (1980). Paleoceanography. Cambridge, Massachusetts: Harvard University Press. A later book by the noted paleobiologist, discussing ancient marine ecology.
J. William Schopf (2001). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press. On the use of biochemical and ultramicroscopic analysis to analyze microfossils of bacteria and archaea.
Paul Selden and John Nudds (2005). Evolution of Fossil Ecosystems. Chicago: University of Chicago Press. An analysis and discussion of paleoecology.
David Sepkoski (2012). Rereading the Fossil Record: The Growth of Paleobiology as an Evolutionary Discipline. Chicago: University of Chicago Press, 432 pages. A history since the mid-19th century, with a focus on the "revolutionary" era of the 1970s and early 1980s and the work of Stephen Jay Gould and David Raup.
Paul Tasch (1980). Paleobiology of the Invertebrates. New York: John Wiley & Sons. Applies statistics to the evolution of sponges, cnidarians, worms, brachiopods, bryozoa, mollusks, and arthropods.
Shuhai Xiao and Alan J. Kaufman, eds. (2006). Neoproterozoic Geobiology and Paleobiology. New York: Springer Science+Business Media. Describes research into the fossils of the earliest multicellular animals and plants, especially the Ediacaran period invertebrates and algae.
Bernard Ziegler and R. O. Muir (1983). Introduction to Palaeobiology. Chichester, England: E. Horwood. A classic, British introductory textbook.
External links
Paleobiology website of the National Museum of Natural History (Smithsonian) in Washington, D.C. (archived 11 March 2007)
The Paleobiology Database
Developmental biology
Evolutionary biology
Subfields of paleontology
Circular economy
A circular economy (also referred to as circularity or CE) is a model of resource production and consumption in any economy that involves sharing, leasing, reusing, repairing, refurbishing, and recycling existing materials and products for as long as possible. The concept aims to tackle global challenges such as climate change, biodiversity loss, waste, and pollution by emphasizing the design-based implementation of the three base principles of the model. The main three principles required for the transformation to a circular economy are: designing out waste and pollution, keeping products and materials in use, and regenerating natural systems. CE is defined in contradistinction to the traditional linear economy.
The idea and concepts of a circular economy have been studied extensively in academia, business, and government over the past ten years. It has been gaining popularity because it can help to minimize carbon emissions and the consumption of raw materials, open up new market prospects, and, principally, increase the sustainability of consumption. At a government level, a circular economy is viewed as a method of combating global warming, as well as a facilitator of long-term growth. CE may geographically connect actors and resources to close material loops at the regional level. In its core principle, the European Parliament defines CE as "a model of production and consumption that involves sharing, leasing, reusing, repairing, refurbishing, and recycling existing materials and products as long as possible. In this way, the life cycle of products is extended." Global implementation of the circular economy could reduce global emissions by 22.8 billion tons, or 39% of global emissions in the year 2019. Implementing circular economy strategies in five sectors alone (cement, aluminum, steel, plastics, and food) could cut 9.3 billion metric tons of CO2 equivalent, equal to all current emissions from transportation.

In a circular economy, business models play a crucial role in enabling the shift from linear to circular processes. Various business models have been identified that support circularity, including product-as-a-service, sharing platforms, and product life extension models, among others. These models aim to optimize resource utilization, reduce waste, and create value for businesses and customers alike, while contributing to the overall goals of the circular economy.
Businesses can also make the transition to the circular economy, where holistic adaptations in firms' business models are needed. The implementation of circular economy principles often requires new visions and strategies and a fundamental redesign of product concepts, service offerings, and channels towards long-life solutions, resulting in the so-called 'circular business models'.
Definition
There are many definitions of the circular economy. For example, in China, CE is promoted as a top-down national political objective, while in other areas, such as the European Union, Japan, and the USA, it is a tool for designing bottom-up environmental and waste management policies. The ultimate goal of promoting CE is the decoupling of environmental pressure from economic growth. A comprehensive definition could be: "Circular economy is an economic system that targets zero waste and pollution throughout materials lifecycles, from environment extraction to industrial transformation, and final consumers, applying to all involved ecosystems. Upon its lifetime end, materials return to either an industrial process or, in the case of a treated organic residual, safely back to the environment as in a natural regenerating cycle. It operates by creating value at the macro, meso, and micro levels and exploiting to the fullest the sustainability nested concept. Used energy sources are clean and renewable. Resource use and consumption are efficient. Government agencies and responsible consumers play an active role in ensuring the correct system long-term operation."

More generally, circular development is a model of economic, social, and environmental production and consumption that aims to build an autonomous and sustainable society in tune with the sustainable management of environmental resources. The circular economy aims to transform our economy into one that is regenerative: an economy that innovates to reduce waste and the ecological and environmental impact of industries before they arise, rather than waiting to address their consequences. This is done by designing new processes and solutions for the optimization of resources and by decoupling economic activity from reliance on finite resources.
The circular economy is a framework of three principles, driven by design: eliminating waste and pollution, keeping products and materials in use, and regenerating natural systems. It is based increasingly on renewable energy and materials, and it is accelerated by digital innovation. It is a resilient, distributed, diverse, and inclusive economic model. The circular economy is an economic concept often linked to sustainable development, provision of the Sustainable Development Goals (Global Development Goals), and an extension of a green economy.
Other definitions and precise thresholds that separate linear from circular activity have also been developed in the economic literature.
In a linear economy, natural resources are turned into products that are ultimately destined to become waste because of the way they have been designed and manufactured. This process is often summarized as "take, make, waste." By contrast, a circular economy aims to transition from a 'take-make-waste' approach to a more restorative and regenerative system. It employs reuse, sharing, repair, refurbishment, remanufacturing and recycling to create a closed-loop system, reducing the use of resource inputs and the creation of waste, pollution, and carbon emissions. The circular economy aims to keep products, materials, equipment, and infrastructure in use for longer, thus improving the productivity of these resources. Waste materials and energy should become input for other processes through waste valorization: either as a component for another industrial process or as regenerative resources for nature (e.g., compost). The Ellen MacArthur Foundation (EMF) defines the circular economy as an industrial economy that is restorative or regenerative by value and design.
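A back-of-the-envelope way to see why keeping materials in use matters: if a fraction r of material is recovered after each use cycle, one unit of virgin input supports 1/(1 − r) units of cumulative product material (a geometric series). The Python sketch below uses hypothetical recovery rates and ignores quality losses in recycling.

```python
# Closed-loop illustration: cumulative material use per unit of virgin input
# when a fraction `recovery_rate` is recirculated each cycle (hypothetical).

def cumulative_service(virgin_input: float, recovery_rate: float) -> float:
    """Sum of the geometric series v + v*r + v*r^2 + ... = v / (1 - r)."""
    if not 0.0 <= recovery_rate < 1.0:
        raise ValueError("recovery rate must be in [0, 1)")
    return virgin_input / (1.0 - recovery_rate)

for r in (0.0, 0.5, 0.9):   # linear economy, half recovery, high circularity
    print(f"recovery {r:.0%}: {cumulative_service(1.0, r):.1f} units of use")
# recovery 0%: 1.0, recovery 50%: 2.0, recovery 90%: 10.0
```

The jump from 50% to 90% recovery quintuples the service obtained per unit of virgin material, which is why circular design targets high recovery rates rather than marginal recycling improvements.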
Circular economy strategies can be applied at various scales, from individual products and services to entire industries and cities. For example, industrial symbiosis is a strategy where waste from one industry becomes an input for another, creating a network of resource exchange and reducing waste, pollution, and resource consumption. Similarly, circular cities aim to integrate circular principles into urban planning and development, foster local resource loops, and promote sustainable lifestyles among their citizens. Less than 10% of economic activity worldwide in 2022 and 2023 is circular. Every year, the global population uses approximately 100 billion tonnes of materials, with more than 90% of them being wasted. The circular economy seeks to address this by eliminating waste entirely.
History and aims
The concept of a circular economy cannot be traced back to one single date or author, rather to different schools of thought.
The concept of a circular economy can be linked to various schools of thought, including industrial ecology, biomimicry, and cradle-to-cradle design principles. Industrial ecology is the study of material and energy flows through industrial systems, which forms the basis of the circular economy. Biomimicry involves emulating nature's time-tested patterns and strategies in designing human systems. Cradle-to-cradle design is a holistic approach to designing products and systems that considers their entire life cycle, from raw material extraction to end-of-life disposal, and seeks to minimize waste and maximize resource efficiency. These interrelated concepts contribute to the development and implementation of the circular economy.
General systems theory, founded by the biologist Ludwig von Bertalanffy, considers growth and energy for open and closed state systems. This theory was then applied to other areas, such as, in the case of the circular economy, economics. Economist Kenneth E. Boulding, in his paper "The Economics of the Coming Spaceship Earth," argued that a circular economic system is a prerequisite for the maintenance of the sustainability of human life on Earth. Boulding describes the so-called "cowboy economy" as an open system in which the natural environment is typically perceived as limitless: no limit exists on the capacity of the outside to supply or receive energy and material flows.
Walter R. Stahel and Geneviève Reday-Mulvey, in their book "The Potential for Substituting Manpower for Energy," laid the foundation for the principles of the circular economy by describing how increasing labour may reduce energy-intensive activities.
Simple economic models have ignored economy-environment interrelationships. Allan Kneese, in "The Economics of Natural Resources" (1988), indicates how resources are not endlessly renewable and is among the first to use the term "circular economy" explicitly.
In their book Economics of Natural Resources and the Environment, Pearce and Turner explain the shift from the traditional linear or open-ended economic system to the circular economic system (Pearce and Turner, 1990). They describe an economic system where waste at extraction, production, and consumption stages is turned into inputs.
In the early 2000s, China integrated the notion into its industrial and environmental policies, orienting them around resources, production, use, waste, and the product life cycle. The Ellen MacArthur Foundation was instrumental in the diffusion of the concept in Europe and the Americas.
In 2010, the concept of circular economy started to become popular internationally after the publication of several reports. The European Union introduced its vision of the circular economy in 2014, with a New Circular Economy Action Plan launched in 2020 that "shows the way to a climate-neutral, competitive economy of empowered consumers".
The original diffusion of the notion benefited from three major events: the explosion of raw material prices between 2000 and 2010, the Chinese control of rare earth materials, and the 2008 economic crisis. Today, the climate emergency and environmental challenges induce companies and individuals to rethink their production and consumption patterns. The circular economy is framed as one of the answers to these challenges. Key macro-arguments in favour of the circular economy are that it could enable economic growth that does not add to the burden on natural resource extraction but decouples resource use from the development of economic welfare for a growing population, reduces foreign dependence on critical materials, lowers CO2 emissions, reduces waste production, and introduces new modes of production and consumption able to create further value. Corporate arguments in favour of the circular economy are that it could secure the supply of raw materials, reduce the price volatility of inputs and control costs, reduce spills and waste, extend the life cycle of products, serve new segments of customers, and generate long-term shareholder value. A key idea behind circular business models is to create loops that recapture value that would otherwise be lost.
Of particular concern is the irrevocable loss of raw materials due to their increase in entropy in the linear business model. Starting with the production of waste in manufacturing, the entropy increases further by mixing and diluting materials in their manufacturing assembly, followed by corrosion and wear and tear during the usage period. At the end of the life cycle, there is an exponential increase in disorder arising from the mixing of materials in landfills. As a result of this directionality of the entropy law, the world's resources are effectively "lost forever".
Circular development is directly linked to the circular economy and aims to build a sustainable society based on recyclable and renewable resources, to protect society from waste, and to form a model that no longer considers resources as infinite. This new model of economic development focuses on the production of goods and services, taking into account environmental and social costs. Circular development therefore supports the circular economy in creating new societies in line with new waste management and sustainability objectives that meet the needs of citizens. It is about enabling economies and societies, in general, to become more sustainable.
However, critics suggest that proponents may overstate the circular economy's potential benefits. They argue that the circular economy has too many definitions to be delimited, making it an umbrella concept that, although exciting and appealing, is hard to understand and assess. Critics also contend that the literature ignores much established knowledge: in particular, it neglects the thermodynamic principle that matter can be neither created nor destroyed, so a future where waste no longer exists, where material loops are closed, and where products are recycled indefinitely is, in any practical sense, impossible. They point out that the lack of inclusion of indigenous discourses from the Global South means the conversation is less eco-centric than it depicts itself. There is also a lack of clarity, owing to the concept's diffuse contours, as to whether the circular economy is more sustainable than the linear economy and what its social benefits might be. Other issues include the increased risk of cascading failures, a feature of highly interdependent systems, with potential harm to the general public. When implemented in bad faith, touted "circular economy" activities can be little more than reputation and impression management for public relations purposes by large corporations and other vested interests, constituting a new form of greenwashing. The circular economy may thus not be the panacea many had hoped for.
Sustainability
Intuitively, the circular economy appears more sustainable than the current linear economic system: reducing the resources used and the waste and leakage created conserves resources and helps to reduce environmental pollution. However, some argue that these assumptions are simplistic and disregard the complexity of existing systems and their potential trade-offs. For example, the social dimension of sustainability is only marginally addressed in many publications on the circular economy, and some cases might require different or additional strategies, such as purchasing new, more energy-efficient equipment. By reviewing the literature, a team of researchers from Cambridge and TU Delft showed that there are at least eight different relationship types between sustainability and the circular economy. In addition, it is important to underline the innovation aspect at the heart of sustained development based on circular economy components.
Scope
The circular economy can have a broad scope. Researchers have focused on different areas, such as industrial applications (both product-oriented and involving natural resources and services), practices and policies to better understand the limitations the CE currently faces, strategic management for the details of the circular economy, and outcomes such as potential re-use applications and waste management.
The circular economy includes products, infrastructure, equipment, services, and buildings and applies to every industry sector. It includes 'technical' resources (metals, minerals, fossil resources) and 'biological' resources (food, fibres, timber, etc.). Most schools of thought advocate a shift from fossil fuels to the use of renewable energy, and emphasize the role of diversity as a characteristic of resilient and sustainable systems. The circular economy includes a discussion of the role of money and finance as part of the wider debate, and some of its pioneers have called for a revamp of economic performance measurement tools. One study points out how modularization could become a cornerstone of enabling a circular economy and enhancing the sustainability of energy infrastructure. One example of a circular economy model is the implementation of renting models in traditional ownership areas (e.g., electronics, clothes, furniture, transportation). By renting the same product to several clients, manufacturers can increase revenue per unit, thus decreasing the need to produce more to increase revenue. Recycling initiatives are often described as circular economy measures and are likely the most widespread models.
According to a report by the organization Circle Economy, global implementation of the circular economy could reduce global emissions by 22.8 billion tonnes, equal to 39% of global emissions in 2019. By 2050, 9.3 billion tonnes of CO2 equivalent, or almost half of the global greenhouse gas emissions from the production of goods, might be avoided by implementing circular economy strategies in just five significant industries: cement, aluminum, steel, plastics, and food. That would be equivalent to eliminating all current emissions from transportation.
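As a rough consistency check on these figures, the implied baselines can be back-calculated from the percentages quoted above (a minimal illustrative sketch; the derived totals are inferred from the quoted percentages, not taken from the underlying reports):

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).

circular_reduction_gt = 22.8    # Gt, reduction attributed to circular economy implementation
share_of_2019_emissions = 0.39  # stated as 39% of 2019 global emissions

# Implied 2019 global emissions baseline:
implied_2019_total = circular_reduction_gt / share_of_2019_emissions
print(f"Implied 2019 global emissions: {implied_2019_total:.1f} Gt")  # ~58.5 Gt

# The five-industry figure: 9.3 Gt CO2e is described as "almost half" of
# emissions from the production of goods, implying a goods-production
# baseline of roughly 9.3 / 0.5 = ~18.6 Gt CO2e.
five_industry_reduction_gt = 9.3
implied_goods_baseline = five_industry_reduction_gt / 0.5
print(f"Implied goods-production emissions: {implied_goods_baseline:.1f} Gt CO2e")
```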
Background
As early as 1966, Kenneth Boulding raised awareness of an "open economy" with unlimited input resources and output sinks, in contrast with a "closed economy," in which resources and sinks are tied together and remain part of the economy for as long as possible. Boulding's essay "The Economics of the Coming Spaceship Earth" is often cited as the first expression of the "circular economy," although Boulding does not use that phrase.
The circular economy is grounded in the study of feedback-rich (non-linear) systems, particularly living systems. The contemporary understanding of the circular economy and its practical applications to economic systems has evolved, incorporating different features and contributions from a variety of concepts sharing the idea of closed loops. Some of the relevant theoretical influences are cradle to cradle, laws of ecology, looped and performance economy (Walter R. Stahel), regenerative design, industrial ecology, biomimicry, and blue economy (see section "Related concepts").
The circular economy was further modelled by British environmental economists David W. Pearce and R. Kerry Turner in 1989. In Economics of Natural Resources and the Environment, they pointed out that a traditional open-ended economy was developed with no built-in tendency to recycle, which was reflected by treating the environment as a waste reservoir.
In the early 1990s, Tim Jackson began to create the scientific basis for this new approach to industrial production in his edited collection Clean Production Strategies, including chapters from preeminent writers in the field such as Walter R. Stahel, Bill Rees, and Robert Costanza. The approach, at the time still called 'preventive environmental management', was synthesized in his follow-on book Material Concerns: Pollution, Profit and Quality of Life into a manifesto for change, moving industrial production away from an extractive linear system towards a more circular economy.
Emergence of the idea
In their 1976 research report to the European Commission, "The Potential for Substituting Manpower for Energy," Walter Stahel and Genevieve Reday sketched the vision of an economy in loops (or a circular economy) and its impact on job creation, economic competitiveness, resource savings and waste prevention. The report was published in 1982 as the book Jobs for Tomorrow: The Potential for Substituting Manpower for Energy.
In 1982, Walter Stahel was awarded third prize in the Mitchell Prize competition on sustainable business models with his paper, The Product-Life Factor. The first prize went to the then US Secretary of Agriculture, the second prize to Amory and Hunter Lovins, and fourth prize to Peter Senge.
Stahel's institute, considered one of the first pragmatic and credible sustainability think tanks, has as its main goals extending the working life of products, making goods last longer, reusing existing goods, and ultimately preventing waste. This model emphasizes the importance of selling services rather than products, an idea referred to as the "functional service economy" and sometimes put under the wider notion of "performance economy." This model also advocates "more localization of economic activity".
Promoting a circular economy was identified as a national policy in China's 11th five-year plan starting in 2006. The Ellen MacArthur Foundation has more recently outlined the economic opportunity of a circular economy, bringing together complementary schools of thought in an attempt to create a coherent framework, thus giving the concept a wide exposure and appeal.
The circular economy is most frequently described as a framework for thinking, and its supporters claim it is a coherent model that has value as part of a response to the end of the era of cheap oil and materials and, moreover, contributes to the transition to a low-carbon economy. In line with this, a circular economy can contribute to meeting the COP 21 Paris Agreement. The emissions reduction commitments made by 195 countries at COP 21 are not sufficient to limit global warming to 1.5 °C. To reach the 1.5 °C ambition, it is estimated that additional emissions reductions of 15 billion tonnes of CO2 per year need to be achieved by 2030. Circle Economy and Ecofys estimated that circular economy strategies may deliver emissions reductions that could bridge the gap by half.
Moving away from the linear model
Linear "take, make, dispose" industrial processes, and the lifestyles dependent on them, use up finite reserves to create products with a finite lifespan, which end up in landfills or in incinerators. The circular approach, by contrast, takes insights from living systems. It considers that our systems should work like organisms, processing nutrients that can be fed back into the cycle—whether biological or technical—hence the "closed loop" or "regenerative" terms usually associated with it. The generic circular economy label can be applied to or claimed by several different schools of thought, but all of them gravitate around the same basic principles.
One prominent thinker on the topic is Walter R. Stahel, an architect, economist, and founding father of industrial sustainability, credited with having coined the expression "Cradle to Cradle" (in contrast with "Cradle to Grave," illustrating our "resource to waste" way of functioning). In the late 1970s, Stahel worked on developing a "closed loop" approach to production processes, co-founding the Product-Life Institute in Geneva. In the UK, Steve D. Parker researched waste as a resource in the UK agricultural sector in 1982, developing novel closed-loop production systems that mimicked and worked with the biological ecosystems they exploited.
Cradle to Cradle
The circular economy often refers to quantities of recycled materials or reduced waste; Cradle to Cradle design, however, focuses on the quality of products, including safety for human and environmental health. Popularized by the book Cradle to Cradle: Remaking The Way We Make Things, Cradle to Cradle design has been widely implemented by architect William McDonough, who was introduced as the "father of the circular economy" while receiving the 2017 Fortune Award for Circular Economy Leadership in Davos during the World Economic Forum.
Levels of circularity ("R" models)
In the 2010s, several models of a circular economy were developed that employed a set of steps, or levels of circularity, typically using English verbs or nouns starting with the letter "r". The first such model, known as the "Three R principle", was "Reduce, Reuse, Recycle", which can be traced back as early as the 1970s. According to Breteler (2022), the 'most comprehensive and extensive' of four compared models was the "10R principle", developed by sustainable entrepreneurship professor and former Dutch Environment Minister Jacqueline Cramer.
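These "R" models can be read as priority orderings, in which strategies nearer the top of the list retain more of a product's value than those below. The sketch below encodes only the "Three R principle" named above; the function, ordering logic, and example data are hypothetical illustrations, not part of any published model:

```python
# Hypothetical sketch: treating an "R" model as an ordered hierarchy,
# where earlier strategies preserve more product value than later ones.
# Only the classic "Three R principle" from the text is encoded here;
# richer models (e.g., the 10R principle) extend the same idea.

R_HIERARCHY = ["reduce", "reuse", "recycle"]  # highest to lowest priority

def best_strategy(applicable: set[str]) -> str | None:
    """Return the highest-priority circularity strategy that applies."""
    for strategy in R_HIERARCHY:
        if strategy in applicable:
            return strategy
    return None  # no circular option: material exits the loop as waste

# Example: a product that cannot be reused but can be recycled.
print(best_strategy({"recycle"}))           # -> "recycle"
print(best_strategy({"reuse", "recycle"}))  # -> "reuse"
```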
Towards the circular economy
In 2013, a report was released entitled Towards the Circular Economy: Economic and Business Rationale for an Accelerated Transition. The report, commissioned by the Ellen MacArthur Foundation and developed by McKinsey & Company, was the first volume of its kind to consider the economic and business opportunity of the transition to a restorative, circular model. Using product case studies and economy-wide analysis, the report details the potential for significant benefits across the EU. It argues that a subset of the EU manufacturing sector could realize net materials cost savings worth up to $630 billion annually towards 2025, stimulating economic activity in the areas of product development, remanufacturing, and refurbishment. Towards the Circular Economy also identified the key building blocks of the transition to a circular economy, namely skills in circular design and production, new business models, skills in building cascades and reverse cycles, and cross-cycle/cross-sector collaboration. This is supported by a case study from the automotive industry, highlighting the importance of integrating a circular model holistically within the entire value chain of a company, taking into account the interdependencies between the product, process, and system levels.
Another report by WRAP and the Green Alliance, "Employment and the circular economy: job creation in a more resource efficient Britain," published in 2015, examined different public policy scenarios to 2030. It estimated that, with no policy change, 200,000 new jobs would be created, reducing unemployment by 54,000, while a more aggressive policy scenario could create 500,000 new jobs and permanently reduce unemployment by 102,000. The International Labour Organization predicts that implementing a circular economy by 2030 might result in an additional 7–8 million jobs being created globally. However, other research has found that the adoption of circular economy principles may lead to job losses in emerging economies.
The implementation of a circular economy in the United States has been analyzed by Ranta et al., who examined the institutional drivers of and barriers to the circular economy in different regions worldwide, following the framework developed by Scott. The study selected environment-friendly institutions worldwide and chose two types of processes for analysis: (1) product-oriented manufacturing and (2) waste management. In the U.S., the product-oriented case was Dell, a manufacturer of computer technology that was the first company to offer free recycling to customers and to launch a computer made from recycled materials from a verified third-party source. The waste management case, covering stages such as collection, disposal, and recycling, was Republic Services, the second-largest waste management company in the US. To define the drivers and barriers, the authors first identified indicators for their cases and then categorized each indicator as a driver when it favored the circular economy model or as a barrier when it did not.
On 2 March 2022 in Nairobi, representatives of 175 countries pledged to create a legally binding agreement to end plastic pollution by the end of the year 2024. The agreement should address the full lifecycle of plastic and propose alternatives including reusability. The agreement is expected to facilitate the transition to a circular economy that will reduce GHG emissions by 25 percent, according to the published statement.
Circular product design and standards
Product designs that optimize durability, ease of maintenance and repair, upgradability, re-manufacturability, separability, disassembly, and reassembly are considered key elements for the transition toward circularity of products. Standardization can facilitate related "innovative, sustainable and competitive advantages for European businesses and consumers". Design for standardization and compatibility would make "product parts and interfaces suitable for other products and aims at multi-functionality and modularity". A "Product Family Approach" has been proposed to establish "commonality, compatibility, standardization, or modularization among different products or product lines".
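As a toy illustration of how such design criteria might be operationalized in a screening tool, consider the following sketch. The attribute names follow the list above, but the equal-weight scoring scheme, function name, and example product are invented for illustration:

```python
# Hypothetical screening sketch: scoring a product design against the
# circularity-relevant attributes listed above. The equal weighting is
# an illustrative assumption, not part of any standard.

CIRCULAR_DESIGN_ATTRIBUTES = [
    "durability", "ease_of_maintenance", "repairability", "upgradability",
    "remanufacturability", "separability", "ease_of_disassembly",
]

def circularity_design_score(design: dict[str, bool]) -> float:
    """Fraction of the listed attributes a design satisfies (0.0 to 1.0)."""
    met = sum(design.get(attr, False) for attr in CIRCULAR_DESIGN_ATTRIBUTES)
    return met / len(CIRCULAR_DESIGN_ATTRIBUTES)

# Placeholder example: a laptop design meeting 3 of the 7 attributes.
laptop = {"durability": True, "repairability": True, "upgradability": True}
print(f"Score: {circularity_design_score(laptop):.2f}")
```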
It has been argued that emerging technologies should be designed with circular economy principles from the start, including solar panels.
Design of circularity processes
For sustainability and health, the design of circularity processes may be of crucial importance. Large amounts of electronic waste are already recycled, but often far from where they were consumed, with low efficiency and with substantial negative effects on human health and the environment in the recipient countries.
Recycling should therefore "reduce environmental impacts of the overall product/service provision system assessed based on the life-cycle assessment approach".
One study suggests that "a mandatory certification scheme for recyclers of electronic waste, in or out of Europe, would help to incentivize high-quality treatment processes and efficient material recovery".
Digitalization may enable more efficient corporate processes and minimize waste.
Circular business models
While the initial focus of academic, industry, and policy activities was mainly on the development of re-X (recycling, remanufacturing, reuse, etc.) technology, it soon became clear that technological capabilities increasingly exceed their implementation. To leverage this technology for the transition toward a circular economy, various stakeholders have to work together. This shifted attention towards business-model innovation as a key leverage point for 'circular' technology adoption. Rheaply, a platform that aims to scale reuse within and between organizations, is an example of a technology that focuses on asset management and disposition to support organizations transitioning to circular business models.

Circular business models can be defined as business models that are closing, narrowing, slowing, intensifying, and dematerializing loops, to minimize the resource inputs into, and the waste and emission leakage out of, the organizational system. This comprises recycling measures (closing), efficiency improvements (narrowing), use-phase extensions (slowing), a more intense use phase (intensifying), and the substitution of products by service and software solutions (dematerializing). These strategies can be achieved through the purposeful design of material recovery processes and related circular supply chains, and the five approaches to resource loops can also be seen as generic strategies or archetypes of circular business model innovation. The development of circular products, circular business models, and, more generally, the circular economy is conditioned on the affordances of the materials involved, that is, the enablement and constraints these materials afford to someone engaging with them for circular purposes.
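The five loop strategies can be summarized schematically. The sketch below simply encodes the strategy-to-measure pairing given in the paragraph above; the data structure itself is an illustrative assumption, not a published specification:

```python
# Illustrative encoding of the five generic circular business model
# strategies described above, each paired with the example measure
# the text associates with it.

CIRCULAR_LOOP_STRATEGIES = {
    "closing":         "recycling measures",
    "narrowing":       "efficiency improvements",
    "slowing":         "use-phase extensions",
    "intensifying":    "a more intense use phase (e.g., sharing)",
    "dematerializing": "substituting products with service/software solutions",
}

for strategy, measure in CIRCULAR_LOOP_STRATEGIES.items():
    print(f"{strategy:>15}: {measure}")
```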
Circular business models, like the economic model more broadly, can have different emphases and objectives, for example: extending the life of materials and products, where possible over multiple 'use cycles'; using a 'waste = food' approach to help recover materials, and ensuring that biological materials returned to earth are benign, not toxic; retaining the embedded energy, water, and other process inputs in the product and the material for as long as possible; using systems-thinking approaches in designing solutions; regenerating, or at least conserving, nature and living systems; and pushing for policies, taxes, and market mechanisms that encourage product stewardship, for example 'polluter pays' regulations.
Circular business models are enabled by circular supply chains. In practice, collaboration for circular supply chains can enable the creation, transfer, and/or capture of value stemming from circular business solutions. Collaboration in supply chains can extend to downstream and upstream partners and include both existing and new collaborations. Similarly, circular supply chain collaboration allows innovation in the circular business model, focusing on its processes, products, or services.
Digital circular economy
Building on circular business model innovation, digitalization and digital technologies (e.g., internet of things, big data, artificial intelligence, blockchain) are seen as a key enabler for upscaling the circular economy. Also referred to as the data economy, the central role of digital technologies for accelerating the circular economy transition is emphasized within the Circular Economy Action Plan of the European Green Deal. The smart circular economy framework illustrates this by establishing a link between digital technologies and sustainable resource management. This allows assessment of different digital circular economy strategies with their associated level of maturity, providing guidance on how to leverage data and analytics to maximize circularity (i.e., optimizing functionality and resource intensity). Supporting this, a Strategic Research and Innovation Agenda for the circular economy was published in the framework of the Horizon 2020 project CICERONE, which puts digital technologies at the core of many key innovation fields (waste management, industrial symbiosis, product traceability). Some researchers have emphasised that several requirements must be met when implementing blockchain technology for the circular economy to become a reality.
Platform for Accelerating the Circular Economy (PACE)
In 2018, the World Economic Forum, World Resources Institute, Philips, Ellen MacArthur Foundation, United Nations Environment Programme, and over 40 other partners launched the Platform for Accelerating the Circular Economy (PACE). PACE follows on the legacy of the WEF's CEO-led initiative, Project MainStream, which sought to scale up circular economy innovations. PACE's original intent covered three focal areas:
developing models of blended finance for circular economy projects, especially in developing and emerging economies;
creating policy frameworks to address specific barriers to advancing the circular economy; and
promoting public–private partnership for these purposes.
In 2020, PACE released a report with partner Circle Economy claiming that the world is 8.6% circular and arguing that all countries are "developing countries," given the unsustainable levels of consumption in countries with higher levels of human development.
PACE is a coalition of CEOs and Ministers—including the leaders of global corporations like IKEA, Coca-Cola, Alphabet Inc., and DSM, governmental partners and development institutions from Denmark, The Netherlands, Finland, Rwanda, UAE, China, and beyond. Initiatives currently managed under PACE include the Capital Equipment Coalition with Philips and numerous other partners and the Global Battery Alliance with over 70 partners. In January 2019, PACE released a report entitled "A New Circular Vision for Electronics: Time for a Global Reboot" (in support of the United Nations E-waste Coalition).
The coalition is hosted by a Secretariat headed by David B. McGinty, former leader of the Human Development Innovation Fund and Palladium International, and board member of BoardSource. Board Members include Inger Andersen, Frans van Houten, Ellen MacArthur, Lisa P. Jackson, and Stientje van Veldhoven.
Circular economy standard BS 8001:2017
To provide authoritative guidance to organizations implementing circular economy (CE) strategies, in 2017, the British Standards Institution (BSI) developed and launched the first circular economy standard "BS 8001:2017 Framework for implementing the principles of the circular economy in organizations". The circular economy standard BS 8001:2017 tries to align the far-reaching ambitions of the CE with established business routines at the organizational level. It contains a comprehensive list of CE terms and definitions, describes the core CE principles, and presents a flexible management framework for implementing CE strategies in organizations. Little concrete guidance on circular economy monitoring and assessment is given, however, as there is no consensus yet on a set of central circular economy performance indicators applicable to organizations and individual products.
Development of ISO/TC 323 circular economy standard
In 2018, the International Organization for Standardization (ISO) established a technical committee, TC 323, in the field of circular economy to develop frameworks, guidance, supporting tools, and requirements for the implementation of activities of all involved organizations, to maximize the contribution to Sustainable Development. Four new ISO standards are under development and in the direct responsibility of the committee (consisting of 70 participating members and 11 observing members).
Strategic management in a circular economy
The CE does not aim to change the profit-maximization paradigm of businesses. Rather, it suggests an alternative way of thinking about how to attain a sustained competitive advantage (SCA) while concurrently addressing the environmental and socio-economic concerns of the 21st century. Indeed, stepping away from linear forms of production most often leads to the development of new core competencies along the value chain and ultimately to superior performance that cuts costs, improves efficiency, promotes brand names, mitigates risks, develops new products, and meets advanced government regulations and the expectations of green consumers. But despite the multiple examples of companies successfully embracing circular solutions across industries, and notwithstanding the wealth of opportunities that exist when a firm has clarity over which circular actions fit its unique profile and goals, CE decision-making remains a highly complex exercise with no one-size-fits-all solution. The intricacy and fuzziness of the topic is still felt by most companies (especially SMEs), which perceive circular strategies as something not applicable to them or as too costly and risky to implement. This concern is confirmed by the results of ongoing monitoring studies such as the Circular Readiness Assessment.
Strategic management allows companies to carefully evaluate CE-inspired ideas and to take a firm apart to investigate if, how, and where seeds of circularity can be found or implanted. Prior research has identified strategic development for circularity as a challenging process for companies, demanding multiple iterative strategic cycles. The book Strategic Management and the Circular Economy defined for the first time a CE strategic decision-making process, covering the phases of analysis, formulation, and planning. Each phase is supported by frameworks and concepts popular in management consulting (idea tree, value chain, VRIE, Porter's five forces, PEST, SWOT, strategic clock, the internationalization matrix), all adapted through a CE lens, hence revealing new sets of questions and considerations. Although yet to be verified, it is argued that all standard tools of strategic management can and should be calibrated and applied to a CE. A specific argument has already been made for the product-vs-market strategy direction matrix, the 3 × 3 GE-McKinsey matrix assessing business strength against industry attractiveness, the BCG matrix of market share vs industry growth rate, and Kraljic's portfolio matrix.
Engineering the circular lifecycle
The engineering lifecycle is a well-established approach in the design and systems engineering of complex and certified systems. It refers to the series of stages that a complex engineered product passes through, from initial concept and design through production, use, and end-of-life management. The approach is commonly used in heavy manufacturing and heavily regulated industries (for example aviation).
Complex and certified engineering systems, however, include many of the smaller products encountered on a daily basis, for example bicycles and household appliances. Implementing the principles of circularity requires all engineering design teams to take a lifecycle approach to the product.
The Circular Lifecycle for Complex Engineering Systems
Building on both the engineering lifecycle and the principles of the circular economy, a newly established framework, the "Circular Lifecycle for Complex Engineering Systems," forms the core of this approach. This framework advocates a reassessment of recognized engineering disciplines with an emphasis on integrating less familiar circular principles. It particularly focuses on designing to meet user needs, the application of established engineering disciplines to achieve product longevity, engineering for the transition to renewable energy sources, and maximizing value generation from waste.
As with the traditional engineering lifecycle, this approach can be applied to all engineering systems, with the depth of activity tailored to the complexity of the product, which may require extensive planning, substantial resource consumption, and prolonged service lifetimes.
Lifecycle-Value Stream Matrix
The key to implementing the circular lifecycle for complex engineering systems is ensuring that the engineering design team has a solid understanding of the product's ecosystem. The Lifecycle-Value Stream Matrix for complex and certified circular systems assists engineers and product design teams in visualizing the product's ecosystem more effectively. It enables engineers to map the intricate ecosystem surrounding their products, leading to the identification of potential strategic partners and novel opportunities for technology and service innovation.
The matrix captures the value stream for various suppliers, providing increasing levels of complexity in products and services. It is important to note that these suppliers will change throughout the life cycle. In the design phase of the complex engineering system, traditionally, the system-level suppliers would only be those suppliers who are integrating the engineering system itself. Later in the life cycle, the initial systems-level suppliers will be joined by other suppliers operating at a systems level, who may deliver products and services that facilitate the operation and usage of the initial engineering system.
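A minimal sketch of how such a matrix might be represented follows. All phase names, supplier tiers, and entries are hypothetical placeholders intended only to show the shape of the mapping described above, including how the set of system-level suppliers changes between design and operation:

```python
# Hypothetical sketch of a Lifecycle-Value Stream Matrix: rows are
# lifecycle phases, columns are supplier tiers, and cells list the kinds
# of suppliers active in that phase. Note how the system tier gains
# service providers after the design phase, as described above.

LIFECYCLE_VALUE_STREAM = {
    # phase:        (material tier,      component tier,       system tier)
    "design":      (["raw materials"],   ["parts suppliers"],  ["system integrators"]),
    "operation":   (["consumables"],     ["spares suppliers"], ["system integrators",
                                                                "service providers"]),
    "end_of_life": (["recyclers"],       ["remanufacturers"],  ["decommissioning firms"]),
}

for phase, (materials, components, systems) in LIFECYCLE_VALUE_STREAM.items():
    print(f"{phase:12} | {materials} | {components} | {systems}")
```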
Implementation challenges and opportunities
Adopting an engineering circular lifecycle approach undeniably brings a considerable set of challenges. Complex engineering systems, especially those with extended lifecycles and intricate safety and certification governance frameworks, may encounter difficulties while transitioning to renewable energy sources. However, the circular lifecycle concept is adaptable to a broad range of manufactured and engineered products, affirming its universal applicability.
The primary challenge within organizations will be a mindset shift and establishment of these innovative methodologies. Despite these hurdles, the implementation of this engineering lifecycle approach holds enormous potential for both consumers and businesses. This is especially true when a collaborative, through-life service approach is applied, highlighting the vast economic opportunities that can arise from embracing circularity in engineering lifecycles.
Adoption and applications by industry
Textile industry
A circular economy within the textiles industry refers to the practice of clothes and fibers continually being recycled, to re-enter the economy as much as possible rather than ending up as waste.
A circular textile economy is a response to the current linear model of the fashion industry, "in which raw materials are extracted, manufactured into commercial goods, and then bought, used, and eventually discarded by consumers" (Business of Fashion, 2017). 'Fast fashion' companies have fueled high rates of consumption, which further magnify the issues of a linear system. "The take-make-dispose model not only leads to an economic value loss of over $500 billion per year but also has numerous negative environmental and societal impacts" (Business of Fashion, 2018). Such environmental effects include tons of clothing ending up in landfills and incineration, while the societal effects put human rights at risk. A documentary about the world of fashion, The True Cost (2015), explained that in fast fashion, "wages, unsafe conditions, and factory disasters are all excused because of the needed jobs they create for people with no alternatives." Running on a linear system, fast fashion thus harms the planet in more ways than one.
It is argued that by following a circular economy, the textile industry can be transformed into a sustainable business. A 2017 report, "A New Textiles Economy," stated the four key ambitions needed to establish a circular economy: "phasing out substances of concern and microfiber release; transforming the way clothes are designed, sold, and used to break free from their increasingly disposable nature; radically improving recycling by transforming clothing design, collection, and reprocessing; and making effective use of resources and moving to renewable input." While it may sound like a simple task, only a handful of designers in the fashion industry have taken charge, including Patagonia, Eileen Fisher, Nathalia JMag, and Stella McCartney. An example of a circular economy within a fashion brand is Eileen Fisher's Tiny Factory, in which customers are encouraged to bring their worn clothing to be manufactured and resold. In a 2018 interview, Fisher explained, "A big part of the problem with fashion is overconsumption. We need to make less and sell less. You get to use your creativity but you also get to sell more but not create more stuff."
Circular initiatives, such as clothing rental start-ups, are also gaining more and more attention in the EU as well as in the US. Operating with a circular business model, rental services offer everyday fashion, baby wear, and maternity wear for rent. The companies either offer flexible pricing in a 'pay as you rent' model, as Palanta does, or fixed monthly subscriptions, such as Rent The Runway or Le Tote.
Both China and Europe have taken the lead in pushing a circular economy. McDowall et al. 2017 stated that the "Chinese perspective on the circular economy is broad, incorporating pollution and other issues alongside waste and resource concerns, [while] Europe's conception of the circular economy has a narrower environmental scope, focusing on waste and resources and opportunities for business".
Construction industry
The construction sector is one of the world's largest waste generators. The circular economy appears as a helpful solution to diminish the environmental impact of the industry.
Construction is very important to the economy of the European Union and its member states. It provides 18 million direct jobs and contributes about 9% of the EU's GDP. The main causes of the industry's environmental impact are the consumption of non-renewable resources and the generation of contaminant residues, both of which are increasing at an accelerating pace. In the European Union alone, people and companies generate more than 2 billion tonnes of waste per year, or 4.8 tonnes per person, mostly from the building, mining, and manufacturing sectors. Each individual in Europe generates half a tonne of municipal waste annually, less than half of which gets recycled.
Cement production accounts for 2.4% of worldwide CO2 emissions from industrial and energy sources.
Decision making about the circular economy can be performed on the operational (connected with particular parts of the production process), tactical (connected with whole processes) and strategic (connected with the whole organization) levels. It may concern both construction companies as well as construction projects (where a construction company is one of the stakeholders).
End-of-life buildings can be deconstructed, thereby creating new construction elements that can be used in new buildings and freeing up space for new development.
Modular construction systems can be useful to create new buildings in the future, and have the advantage of allowing easier deconstruction and reuse of the components afterwards (end-of-life buildings).
Another example of the circular economy in the construction sector at the operational level is the use of walnut husks, which are hard, light, natural abrasives used, for example, in cleaning brick surfaces. Abrasive grains are produced from crushed, cleaned, and selected walnut shells and are classified as reusable abrasives. A first attempt to measure the success of circular economy implementation was made in a construction company. The circular economy can contribute to creating new jobs and economic growth. According to Gorecki, one such post may be the circular economy manager employed on construction projects.
Automotive industry
The circular economy is beginning to catch on in the automotive industry. A case study within the heavy-duty and off-road industry analyses the implementation of circular practices in a lean manufacturing context, the currently dominant production strategy in automotive. Lean has consistently been shown to increase efficiency by eliminating waste and focusing on customer value, contributing to eco-efficiency by narrowing resource loops. However, other measures are needed to slow down and close the resource loops altogether and reach eco-effectiveness. The study finds significant potential in combining the lean and circular approaches, focusing not only on the product and process levels (eco-efficiency) but also on the system perspective (eco-effectiveness). There are also incentives for carmakers: a 2016 report by Accenture stated that the circular economy could redefine competitiveness in the automotive sector in terms of price, quality, and convenience, could double revenue by 2030, and could lower the cost base by up to fourteen percent.

So far, circularity in automotive has typically translated into using parts made from recycled materials, remanufacturing car parts, and rethinking the design of new cars. Remanufacturing is currently limited to providing spare parts; a common use is remanufacturing gearboxes, which can reduce global warming potential (CO2-eq) by 36% compared with a newly manufactured one. With the vehicle recycling industry (in the EU) able to recycle only 75% of the vehicle, the remaining 25% may end up in landfills, leaving much room for improvement. In the electric vehicle industry, disassembly robots are used to help disassemble vehicles. The EU's ETN-Demeter project (European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles) is looking at the sustainable design issue, for example by designing electric motors from which the magnets can be easily removed for recycling the rare-earth metals.
Some car manufacturers such as Volvo are also looking at alternative ownership models (leasing from the automotive company; "Care by Volvo").
Logistics industry
The logistics industry plays an important role in the Dutch economy because the Netherlands is located in an area where the transit of commodities takes place on a daily basis. The Netherlands is an example of an EU country that has increasingly moved towards incorporating a circular economy, given the vulnerability of the Dutch economy (like other EU economies) to being highly dependent on raw material imports from countries such as China, which makes the country susceptible to unpredictable import costs for such primary goods.
Research related to Dutch industry shows that 25% of Dutch companies are knowledgeable about and interested in a circular economy; this number increases to 57% for companies with more than 500 employees. Among the interested areas are the chemical industry, wholesale trade, industry, and agriculture, forestry, and fisheries, which see a potential reduction of costs in reusing, recycling, and reducing raw material imports. In addition, logistics companies can enable a connection to a circular economy by providing customers with incentives to reduce costs through shipment and route optimization, as well as by offering services such as prepaid shipping labels, smart packaging, and take-back options. The shift from linear flows of packaging to the circular flows encouraged by the circular economy is critical for the sustainable performance and reputation of the packaging industry. The government-wide programme for a circular economy is aimed at developing a circular economy in the Netherlands by 2050.
Several statistics indicate that freight transport will increase worldwide, raising global-warming impacts and posing a challenge to the logistics industry. However, the Dutch Council for the Environment and Infrastructure (Dutch acronym: Rli) provided a new framework suggesting that the logistics industry can add value to different activities in the Dutch economy in other ways. Examples of adding value in innovative ways are the exchange of resources (whether waste or water flows) between industries for production, and changing the transit-port concept to a transit-hub concept. The Rli studied the potential of the logistics industry for three sectors: agriculture and food, the chemical industry, and high-tech industries.
Agriculture
Circular economic models have been widely adopted in agriculture, which is essential to global food security and helps mitigate climate change; however, there are also potential risks to human and environmental health from contaminants remaining in recycled water or organic material.
These risks can be mitigated by addressing three specific issues, which will also depend on the local context: contaminant monitoring; collection, transport, and treatment; and regulation and policy.
The Netherlands, aiming to have a completely circular economy by 2050, intends a shift to circular agriculture as part of this plan, targeting a "sustainable and strong agriculture" as early as 2030. Changes in Dutch laws and regulations will be introduced. Key points in this plan include:
closing the fodder-manure cycle
reusing as much waste streams as possible (a team Reststromen will be appointed)
reducing the use of artificial fertilizers in favor of natural manure
providing the chance for farms within experimentation areas to deviate from law and regulations
implementing uniform methods to measure the soil quality
providing the opportunity to agricultural entrepreneurs to sign an agreement with the Staatsbosbeheer ("State forest management") to have it use the lands they lease for natuurinclusieve landbouw ("nature-inclusive management")
providing initiatives to increase the earnings of farmers
Furniture industry
When it comes to the furniture industry, most of the products are passive durable products, and accordingly implementing strategies and business models that extend the lifetime of the products (like repairing and remanufacturing) would usually have lower environmental impacts and lower costs. Companies such as GGMS are supporting a circular approach to furniture by refurbishing and reupholstering items for reuse.
The EU has seen a huge potential for implementing a circular economy in the furniture sector. Currently, out of 10,000,000 tonnes of annually discarded furniture in the EU, most of it ends up in landfills or is incinerated. There is a potential increase of €4.9 billion in Gross Value Added by switching to a circular model by 2030, and 163,300 jobs could be created.
A study about the status of Danish furniture companies' efforts on a circular economy states that 44% of the companies included maintenance in their business models, 22% had take-back schemes, and 56% designed furniture for recycling. The authors of the study concluded that although a circular furniture economy in Denmark is gaining momentum, furniture companies lack knowledge on how to effectively transition, and the need to change the business model could be another barrier.
Another report in the UK saw a huge potential for reuse and recycling in the furniture sector. The study concluded that around 42% of the bulk waste sent to landfills annually (1.6 million tonnes) is furniture. They also found that 80% of the raw material in the production phase is waste.
Oil and gas industry
Between 2020 and about 2050, the oil and gas sector will have to decommission 600 installations in the UK alone. Over the next decade, around 840,000 tonnes of materials will have to be recovered at an estimated cost of £25bn. In 2017, North Sea oil and gas decommissioning became a net drain on the public purse. With UK taxpayers covering 50–70% of the bill, identifying the most economically, socially, and environmentally beneficial decommissioning solutions may lead to financial benefits for the general public.
Organizations such as Zero Waste Scotland have conducted studies to identify areas with reuse potential, allowing equipment to continue life in other industries, or to be redeployed for oil and gas.
Renewable energy industry
Oil and gas energy resources are incompatible with the idea of a circular economy, since exploiting them meets the needs of the present while compromising the ability of future generations to meet their own needs. A sustainable circular economy can only be powered by renewable energies, such as wind, solar, hydropower, and geothermal.
Entities can achieve 'net zero' carbon emissions by offsetting their fossil fuel consumption through removing carbon from the atmosphere. While this is a necessary first step, the global smart grid technologist Steve Hoy argues that creating a circular economy requires adopting the concept of 'true zero' rather than 'net zero': eliminating fossil fuel consumption entirely, so that all energy is produced from renewable sources.
Current growth projections in the renewable energy industry expect a significant amount of energy and raw materials to manufacture and maintain these renewable systems. "Due to the emissions attributed to fossil-fuel electricity generation, the overall carbon footprint of renewable energy technologies is significantly lower than for fossil-fuel generation over the respective systems lifespan." However, there are still linear trajectories when establishing renewable energy systems that should be assessed in order to fully transition to a circular economy.
Education industry
In 2018, the Ellen MacArthur Foundation identified 138 institutions with circular economy course offerings. Since then, CE topics have been incorporated into teaching at a steadily increasing pace, with plans for adoption at the university, city, and country level. Zero Waste Scotland is an example of a country-wide programme that plans to implement CE in the Scottish education system through the "YES Circular Economy Challenge," which advocates that "every learning environment should have a whole-environment approach to learning for sustainability that is robust, demonstrable, evaluated and supported by leadership at all levels". A 2021 report by the EMF compares London and New York CE course offerings and finds that there is no "whole-environment" representation across CE topics: of the 80 analyzed circular economy courses, 90% covered an element of the technical CE cycle, while only 50% covered an element of the biological cycle. The EMF looks critically at the distribution of CE courses, and researchers at Utrecht University, Julian Kirchherr and Laura Piscicelli, analyze the success of their introductory CE course in "Towards an Education for the Circular Economy (ECE): Five Teaching Principles and a Case Study". With 114 published definitions of the circular economy, the kind of synthesis and collaboration exemplified above could benefit and popularize CE application in higher education.
Plastic waste management
Rare-earth elements recovery
One study suggests that by 2050, 40 to 75% of the EU's clean energy metal needs could come from local recycling.
A study estimating losses of 61 metals shows that the use spans of these often scarce, tech-critical metals are short. A study using Project Drawdown's modeling framework indicates that, even without considering the costs or bottlenecks of expanding renewable energy generation, metal recycling can deliver significant climate change mitigation.
Chemistry
Researchers have developed recycling routes for 200 industrial waste chemicals into important drugs and agrochemicals, enabling productive reuse that reduces disposal costs and hazards to the environment. A study has called for new molecules and materials for products with open-environment applications, such as pesticides, which can be neither circulated nor recycled, and provides a set of guidelines on how to integrate chemistry into a circular economy.
Circular developments around the world
Overview
Since 2006, the European Union has addressed environmental transition issues through directives and regulations. Three important laws can be mentioned in this regard:
The Ecodesign Framework Directive
The Waste Framework Directive
The Registration, Evaluation, Authorisation and Restriction of Chemicals Regulation
On 17 December 2012, the European Commission published a document entitled "Manifesto for a Resource Efficient Europe".
In July 2014, a zero-waste programme for Europe was put in place, aiming at the circular economy. Since then, several documents on this subject have been published, including various European reports and legislation on the circular economy developed between 2014 and 2018.
In addition to the above legislation, the EU has amended the Ecodesign Working Plan to add circularity criteria and has enacted eco-design regulations with circular economy components for seven product types, among them refrigerators, dishwashers, electronic displays, washing machines, welding equipment, and servers and data storage products. These eco-design regulations aim to increase the reparability of products by improving the availability of spare parts and manuals. At the same time, the European research budget related to the circular economy has increased considerably in recent years, reaching 964 million euros between 2018 and 2020. In total, the European Union invested 10 billion euros in circular economy projects between 2016 and 2019.
One waste atlas aggregates some data about the waste management of countries and cities, albeit with very limited coverage.
The "Circularity Gap Report" indicates that "out of all the minerals, biomass, fossil fuels and metals that enter the world's economy, only 8.6 percent are reused".
The European Commission's Circular Economy Action Plan has resulted in a wide range of projects, with an emphasis on waste and material sustainability as well as the circularity of consumer items. Despite a huge number of EU legislative measures, the European Union's circularity rate was 11.5% in 2022, and progress is currently slowing.
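For context, Eurostat's circular material use (CMU) rate, the indicator behind figures such as the 11.5% above, is broadly the share of overall material use met by recovered materials. The sketch below assumes the simplified reading CMU ≈ U / (DMC + U); the real methodology includes trade corrections omitted here, and the example figures are placeholders:

```python
# Simplified circularity-rate calculation in the spirit of Eurostat's
# circular material use (CMU) rate: recovered materials as a share of
# overall material use. The real methodology adds trade corrections,
# which are omitted here; all figures below are placeholder values.

def circular_material_use_rate(recovered_tonnes: float,
                               domestic_material_consumption_tonnes: float) -> float:
    """CMU ~= U / (DMC + U), expressed as a percentage."""
    u, dmc = recovered_tonnes, domestic_material_consumption_tonnes
    return 100 * u / (dmc + u)

# Placeholder example: 0.8 Gt recovered against 6.2 Gt of consumption.
print(f"CMU rate: {circular_material_use_rate(0.8, 6.2):.1f}%")  # ~11.4%
```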
Programs
The "Manifesto for a Resource Efficient Europe" of 2012 clearly stated that "In a world with growing pressures on resources and the environment, the EU has no choice but to go for the transition to a resource-efficient and ultimately regenerative circular economy." Furthermore, the document highlighted the importance of "a systemic change in the use and recovery of resources in the economy" in ensuring future jobs and competitiveness, and outlined potential pathways to a circular economy, in innovation and investment, regulation, tackling harmful subsidies, increasing opportunities for new business models, and setting clear targets.
European environmental research and innovation policy aims at supporting the transition to a circular economy in Europe, defining and driving the implementation of a transformative agenda to green the economy and society as a whole and to achieve truly sustainable development. Research and innovation in Europe are financially supported by the programme Horizon 2020, which is also open to participation worldwide. The circular economy is found to play an important role in the economic growth of European countries, highlighting the crucial role of sustainability, innovation, and investment in no-waste initiatives to promote wealth.
The European Union's plans for a circular economy are spearheaded by its 2018 Circular Economy Package. Historically, the policy debate in Brussels mainly focused on waste management, which is the second half of the cycle, and very little was said about the first half: eco-design. To draw the attention of policymakers and other stakeholders to this gap, the Ecothis.EU campaign was launched, raising awareness about the economic and environmental consequences of not including eco-design in the circular economy package.
In 2020, the European Union released its Circular Economy Action Plan.
"Closing the loop" (December 2015 – 2018)
This first circular economy Action Plan consisted of 54 measures to strengthen Europe's global competitiveness, promote sustainable economic growth and create more jobs. Among these 54 measures is, for example, optimizing the use of raw materials, products and waste in order to create energy savings and reduce greenhouse gas emissions. The main goal in this respect was to develop a framework conducive to the circular economy. In addition, the Action Plan was intended to enable the development of a new market for secondary raw materials. Concretely, the principal areas covered by the Action Plan were:
Production
Consumption
Waste Management
Boosting markets for secondary materials
Innovation, investment and 'horizontal' measures
Monitoring progress
The Action Plan also served to integrate existing policies and legal instruments into a single policy framework, notably through several amendments. Its implementation was supported by the European Economic and Social Committee (EESC), including through in-depth consultation.
Circular Economy Action Plan of 2020
This new action plan was adopted by the European Commission in March 2020. A total of 574 out of 751 MEPs voted in favour of it. It focuses on better management of resource-intensive industries, waste reduction, decarbonization, and the standardization of sustainable products in Europe. It was preceded by the Green Deal of 2019, which set out ecological and environmental ambitions to make Europe a carbon-neutral continent. On 10 February 2021, the European Parliament submitted its proposals on the Commission's Circular Economy Action Plan (CEAP), highlighting five major areas in particular:
Batteries
Construction and Buildings
ICT
Plastics
Textiles
Two additional sectors on which the CEAP focuses could be added to this list: packaging, and food and water.
Countries ranking
The European leaders in terms of circular economy are designated mostly by their current efforts towards the shift to a circular economy, but also by their objectives and the means deployed for that shift. It remains difficult to rank countries precisely, given the many principles and aspects of the circular economy and how differently a single country can score on each of them; still, some tendencies do appear in the average score when the principles are combined.
The Netherlands: the government aims to reuse 50% of all materials by 2030 and to convert waste into reusable materials wherever possible. The next goal is to shift the country towards a 100% waste-free economy by 2050. These objectives were set between 2016 and 2019 in a series of programs for a governmental circular economy, raw materials agreements and transition agendas focusing on the five most important sectors for waste: biomass and food, plastics, the manufacturing industry, construction and consumer goods.
Germany: Germany is a leader in some aspects of circular economy, like waste management and recycling.
France is also adopting texts and measures for a better circular economy, such as its 2018 roadmap for the circular economy, consisting of 50 measures for a successful transition.
Belgium is also a significant actor in the field. It scored second in the circular material use rate, ahead of France but behind the Netherlands. On the other principles of circular economy, it usually scores in the top 5.
Other notable countries are Italy, the United Kingdom, Austria, Slovenia, and Denmark.
Outside the EU, countries such as Brazil, China, Canada, the US and especially Japan are working on the shift towards a circular economy.
Most of the countries leading in the field of circular economy are European, meaning that Europe in general is in the lead group at the moment. The reasons are numerous. First, the circular economy is at present most advanced in developed countries, thanks, among other things, to technology. The efforts of the European Commission are also considerable, with documents such as the Commission staff working document "Leading the way to a global circular economy: state of play and outlook" and the new action plan for circular economy in Europe, one of the main building blocks of the Green Deal.
Even if Europe as a whole is a good actor in the field, some European countries are still struggling to speed up the shift. These are mostly eastern European countries (Romania, Hungary, Bulgaria, Slovakia, etc.), but in some areas also Portugal, Greece, Croatia and even Germany.
In 2018, the newspaper Politico ranked the (then) 28 EU countries by aggregating the Commission's seven key metrics for each country. The advantage is that this gives a general view of how countries work towards circular development and how they compare to each other; the main drawback is that, as the article itself notes, the seven metrics all carry equal weight and importance in Politico's calculations, which is not the case in real life. Indeed, the same article observes that the countries scoring highest on circular economy are not necessarily the greenest according to the Environmental Performance Index. For example, Germany, which scores 1st in the Politico ranking, only scores 13th worldwide in the EPI and is behind 10 European countries.
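To make such an equal-weight aggregation concrete, here is a minimal sketch in Python; the countries, metric names, and values are invented for illustration and are not Politico's actual data.

    # Hypothetical equal-weight aggregation of circularity metrics.
    # Countries, metric names, and values are invented for the example;
    # real metrics on different scales would first need normalization.
    metrics = {
        "Country A": {"recycling_rate": 0.66, "circular_material_use": 0.12, "eco_innovation": 0.55},
        "Country B": {"recycling_rate": 0.48, "circular_material_use": 0.29, "eco_innovation": 0.61},
        "Country C": {"recycling_rate": 0.35, "circular_material_use": 0.08, "eco_innovation": 0.40},
    }

    def equal_weight_score(values):
        # Every metric carries the same weight, as in Politico's calculation.
        return sum(values.values()) / len(values)

    ranking = sorted(metrics, key=lambda c: equal_weight_score(metrics[c]), reverse=True)
    for rank, country in enumerate(ranking, start=1):
        print(rank, country, round(equal_weight_score(metrics[country]), 3))

Weighting all metrics equally is exactly the simplification criticized above: changing the weights can reorder the ranking.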
China
Beginning in the early 2000s, China started passing a series of laws and regulations to promote the circular economy. Policymakers' views expanded from a focus on recycling to broad efforts to promote efficiency and closed-loop flows of materials at all stages, from production to distribution to consumption. As part of its efforts to enhance the circular economy, China is attempting to decrease its reliance on mining for its mineral supply. Academic Jing Vivian Zhan writes that promoting the circular economy helps China to avoid the resource curse and helps to alleviate overreliance on extractive industries.
Calendar
Europe
In 2015 the European Commission adopted a first plan concerning the circular economy, comprising 54 actions, together with four legislative proposals aimed at legal change:
a) the framework directive on waste
b) the directive on the landfill of waste
c) the directive on packaging and packaging waste
d) the directive on batteries and accumulators and their waste
During the 2018 negotiations between the Parliament and the Council, various elements were adopted across these four directives. The main objectives in the European framework are the following:
Minimum 65% of municipal waste to be recycled by 2035
Minimum 70% of all packaging waste to be recycled by 2030
Maximum 10% of municipal waste to be landfilled by 2035
Certain types of single-use plastics to be prohibited from being placed on the market as of July 2021
Minimum 32% of the Union's gross final consumption of energy to originate from renewable sources by 2030
Since 2020, Europe's new Green Deal plan has focused on "design and production from the perspective of the circular economy"; its main objective is to ensure that the European economy keeps resources in use for as long as possible.
The action plan for this circular development is based on several objectives:
"To make sustainable products the norm in the EU.
To empower consumers to choose.
Focusing on the most resource-intensive sectors with a high potential to contribute to the circular economy.
Ensure less waste."
Europe's Green Deal, which came into being in 2019, aims at a climate-neutral circular economy. For this, economic growth must be decoupled from resource use. "A circular economy reduces the pressure on natural resources and is an indispensable prerequisite for achieving the goal of climate neutrality by 2050 and halting biodiversity loss."
From 2019 to 2023, the European Investment Bank provided €3.83 billion to co-finance 132 circular economy initiatives across many industries. Circular economy initiatives with a higher risk profile have secured finance through risk-sharing instruments and EU guarantees.
Benelux
Belgium
In 2014, Belgium adopted a circular economy strategy comprising 21 measures. The three Belgian regions (Flanders, Brussels and Wallonia) have each set their own objectives. Flanders has put in place a strategy called Vision 2050. Wallonia has a plan following the declaration of regional policy for Wallonia from 2019 to 2024, and on 23 January 2020 it adopted a new strategy including three governance bodies: a steering committee, an intra-administration platform and an orientation committee.
For Brussels, a plan was adopted in 2016 to develop the circular economy in its region. This plan will be in place for a period of 10 years.
The Netherlands
The Netherlands set out a plan of action for the circular economy in 2016 and has been making additional efforts towards a transition to a 100% circular economy by 2050 (and 50% by 2030). The Netherlands Organization for Applied Scientific Research estimates that a full shift towards a circular economy will, in the long term, generate no less than €7.3 billion and 540,000 new jobs in the sector. The work is organized around the five pillars mentioned above: plastics, biomass and food, the construction sector, the manufacturing industry, and consumer goods. The government has also put a fund in place to facilitate and accelerate the shift, as part of the €300 million it spends annually on climate-related decisions and actions. This envelope is complemented by the ministry of infrastructure, which allocated €40 million for circular economy-related actions in 2019 and 2020. Other actions, such as subsidies for enterprises that make the change or invest in the field, have also been taken. Initiatives at the subnational level are encouraged as well, and regions such as Groningen, Friesland and the Northern Netherlands are taking action not only to reduce their environmental impact but also to accelerate and deepen their moves towards a circular economy.
Luxembourg
The circular economy is one of the major priorities of the 2018-2023 Luxembourg government.
In 2019, Luxembourg added the circular economy to its data-driven innovation strategy, now considering it a crucial field for innovation in the coming years. It is present in most sectors of the country's development plan, even if it is still only at the beginning of its development.
Further initiatives are emerging, however, to develop the field:
The 2019 "Circular economy strategy Luxembourg", a document setting out the efforts made and still to be made, and the ambition to transform the Grand Duchy into an example in the field;
Holistic strategic studies such as the "strategic group for circular economy";
Insertion of circular economy as a subject to be discussed by all six main pillars of the "third industrial revolution";
Creation of the Fit4Circularity program to allocate funds to innovative businesses in the field;
Participation in Circular economy-related events such as "Financing the circular economy" (2015) at the European Investment Bank or the "Circular economy hotspot" (2017);
Work on educational tools in the field;
Collaboration with municipalities, at the subnational level, to encourage them to become more circular;
The establishment of value chains for local materials such as wood and a better management of raw materials in general;
A cooperation between the public and the private sector;
The 'Product Circularity Data Sheet' (PCDS) launched in 2019 by the government to study and determine the circular potential of products and materials;
An implementation of tools and methods such as a regulatory framework (laws), a financial framework (financial incentives and sanctions), the creation, management and sharing of knowledge on the subject, etc.;
A coordination of the Luxembourg goals with the SDGs and the 2030 agenda.
United Kingdom
In 2020, the UK government published its Circular Economy Package policy statement in coordination with the Welsh and Scottish governments.
England
From 1 October 2023, certain single-use plastic items have been placed under bans or restrictions in England.
Scotland
In 2021, the Scottish Parliament banned businesses from providing certain single-use plastics.
In 2024, the Scottish Parliament passed the Circular Economy (Scotland) Act 2024, which requires targets to be set and the strategy for achieving them to be updated at least every five years.
Wales
In 2021, the Welsh Government published its Circular Economy strategy.
In 2023, the Senedd banned certain single-use plastics.
Northern Ireland
In 2022, the Northern Ireland Executive held a Circular Economy consultation.
Circular bioeconomy
The bio-economy, especially the circular bio-economy, decreases dependency on natural resources by encouraging the sustainable production of food, materials, and energy from renewable biological resources (such as lupins). According to the European Commission's EU Science Center, the circular bioeconomy produces €1.5 trillion in value added, accounting for 11% of EU GDP. The European Investment Bank invests between €6 billion and €9 billion in the bio-economy per year.
The European Circular Bioeconomy Fund
Eligibility requirements and core terms of reference for an equity and mezzanine debt fund were established by the European Investment Bank and the European Commission's directorates-general for agriculture and for research and innovation. As a result, an investment adviser was chosen and the European Circular Bioeconomy Fund was created. As of 2023, the EIB had invested €65 million in the fund.
The European Circular Bioeconomy Fund invests in early-stage companies with developed innovations that are searching for funds to broaden their activities and reach new markets. It specifically invests in:
circular/bio-economy technologies,
biomass/feedstock production that boosts agricultural productivity while lowering environmental impact,
biomass/feedstock technologies that result in higher-value, green goods,
bio-based chemicals and materials, and
biological alternatives in fields such as cosmetics.
Circular Carbon Economy
During the 2019 COP25 in Madrid, William McDonough and marine ecologist Carlos Duarte presented the Circular Carbon Economy at an event with the BBVA Foundation. The Circular Carbon Economy is based on McDonough's ideas from Carbon Is Not The Enemy and aims to serve as the framework for developing and organizing effective systems for carbon management. McDonough used the Circular Carbon Economy to frame discussions at the G20 workshops in March 2020 before the framework's formal acceptance by the G20 Leaders in November 2020.
Critiques of circular economy models
There is some criticism of the idea of the circular economy. As Corvellec (2015) put it, the circular economy privileges continued economic growth with soft "anti-programs", and is far from the most radical "anti-program". Corvellec (2019) raised a multi-species issue: "Scatolic engagement draws on Reno's analogy of waste as scats and of scats as signs for enabling interspecies communication. This analogy stresses the impossibility for waste producers to dissociate themselves from their waste and emphasizes the contingent, multiple, and transient value of waste."
Corvellec and Stål (2019) are mildly critical of circular economy take-back systems in apparel manufacturing, seeing them as ways to anticipate and head off more severe waste reduction programs.
Research by Zink and Geyer (2017: 593) questioned the circular economy's engineering-centric assumptions: "However, proponents of the circular economy have tended to look at the world purely as an engineering system and have overlooked the economic part of the circular economy. Recent research has started to question the core of the circular economy—namely, whether closing material and product loops do, in fact, prevent primary production."
There are other critiques of the circular economy (CE). For example, Allwood (2014) discussed the limits of CE "material circularity" and questioned the desirability of a CE in a reality with growing demand: do CE secondary production activities (reuse, repair, and remaking) actually reduce primary production (natural resource extraction), or do they merely displace it? The problem the CE overlooks, its untold story, is that displacement is governed mainly by market forces, according to McMillan et al. (2012). Zink and Geyer (2017) call this the tired old narrative that the invisible hand of market forces will conspire to create full displacement of virgin material of the same kind. Korhonen, Nuur, Feldmann, and Birkie (2018) argued that "the basic assumptions concerning the values, societal structures, cultures, underlying world-views and the paradigmatic potential of CE remain largely unexplored".
It is also often pointed out that there are fundamental limits to the concept, based among other things on the laws of thermodynamics. According to the second law of thermodynamics, all spontaneous processes are irreversible and associated with an increase in entropy. It follows that any real implementation of the concept must either deviate from perfect reversibility, generating an entropy increase by producing waste, which ultimately amounts to parts of the economy still following a linear scheme, or require enormous amounts of energy, a significant part of which would be dissipated so that total entropy still increases. In its comment on the concept of the circular economy, the European Academies' Science Advisory Council (EASAC) came to a similar conclusion.
In addition to this, the circular economy has been criticized for lacking a strong social justice component. Indeed, most circular economy visions, projects and policies do not address key social questions regarding how circular economy technologies and solutions will be controlled and how their benefits and costs will be distributed. To respond to these limitations some academics and social movements prefer to speak of a circular society rather than a circular economy. They thereby advocate for a circular society where knowledge, political power, wealth, and resources are sustainably circulated in fundamentally democratic and redistributive manners, rather than just improving resource efficiency as most circular economy proposals do.
Moreover, it has been argued that a post-growth approach should be adopted for the circular economy where material loops are put (directly) at the service of wellbeing, instead of attempting to reconcile the circular economy with GDP growth. For example, efficiency improvements at the level of individual products could be offset by a growth in total or per-capita consumption, which only beyond-circularity measures like choice editing and rationing unsustainable products or emissions may be able to address.
Related concepts
The various approaches to 'circular' business and economic models share several common principles with other conceptual frameworks:
Biomimicry
Janine Benyus, author of Biomimicry: Innovation Inspired by Nature, defined biomimicry as "a new discipline that studies nature's best ideas and then imitates these designs and processes to solve human problems. Studying a leaf to invent a better solar cell is an example. I think of it as 'innovation' inspired by nature".
Blue economy
Initiated by former Ecover CEO and Belgian entrepreneur Gunter Pauli, and derived from the study of natural biological production processes, the blue economy's official manifesto states: "using the resources available ... the waste of one product becomes the input to create a new cash flow".
Cradle to cradle
Created by Walter R. Stahel and similar theorists, in which industry adopts the reuse and service-life extension of goods as a strategy of waste prevention, regional job creation, and resource efficiency in order to decouple wealth from resource consumption.
Industrial ecology
Industrial ecology is the study of material and energy flows through industrial systems. Focusing on connections between operators within the "industrial ecosystem", this approach aims at creating closed-loop processes in which waste is seen as input, thus eliminating the notion of undesirable by-product.
Resource recovery
Resource recovery uses waste as an input material to create valuable new products. The aim is to reduce the amount of waste generated, thereby reducing the need for landfill space while extracting maximum value from waste.
Sound Material-Cycle Society
A similar concept used in Japan.
Systems thinking
The ability to understand how things influence one another within a whole. Elements are considered as 'fitting in' their infrastructure, environment and social context.
See also
References
Financial systems
Bright green environmentalism
Economic ideologies
Environmental economics
Products and the environment
Nomothetic and idiographic
Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each corresponding to a different intellectual tendency and to a different branch of academia. To say that Windelband supported that last dichotomy, however, misreads his own thought: for him, any branch of science and any discipline can be handled by both methods, as they offer two integrating points of view.
Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general.
Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena.
Use in the social sciences
The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subject are unique individuals (idiographic perspective), but who have certain general properties or behave according to general rules (nomothetic perspective).
Often, nomothetic approaches are quantitative and idiographic approaches qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. the Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the repertory grid, when used with elicited constructs and perhaps elicited elements. Personal cognition (D. A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within a situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M. T. Conner 1986, R. P. J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data.
In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899.
Theodore Millon stated that when spotting and diagnosing personality disorders, clinicians first start with the nomothetic perspective and look for various general scientific laws; then, when they believe they have identified a disorder, they switch to the idiographic perspective to focus on the specific individual and his or her unique traits.
In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?).
In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context.
See also
Nomological
References
Further reading
Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford.
Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
Concepts in epistemology
Evolutionary medicine
Evolutionary medicine or Darwinian medicine is the application of modern evolutionary theory to understanding health and disease. Modern biomedical research and practice have focused on the molecular and physiological mechanisms underlying health and disease, while evolutionary medicine focuses on the question of why evolution has shaped these mechanisms in ways that may leave us susceptible to disease. The evolutionary approach has driven important advances in the understanding of cancer, autoimmune disease, and anatomy. Medical schools have been slower to integrate evolutionary approaches because of limitations on what can be added to existing medical curricula. The International Society for Evolution, Medicine and Public Health coordinates efforts to develop the field. It owns the Oxford University Press journal Evolution, Medicine and Public Health and The Evolution and Medicine Review.
Core principles
Utilizing the Delphi method, 56 experts from a variety of disciplines, including anthropology, medicine, nursing, and biology, agreed upon 14 core principles intrinsic to the education and practice of evolutionary medicine. These 14 principles can be further grouped into five general categories: question framing, evolution I and II (with II involving a higher level of complexity), evolutionary trade-offs, reasons for vulnerability, and culture.
Human adaptations
Adaptation works within constraints, makes compromises and trade-offs, and occurs in the context of different forms of competition.
Constraints
Adaptations can only occur if they are evolvable. Some adaptations which would prevent ill health are therefore not possible.
DNA cannot be totally prevented from undergoing somatic replication corruption; this has meant that cancer, which is caused by somatic mutations, has not (so far) been eliminated by natural selection.
Humans cannot biosynthesize vitamin C, and so risk scurvy, vitamin C deficiency disease, if dietary intake of the vitamin is insufficient.
Retinal neurons and their axon output have evolved to be inside the layer of retinal pigment cells. This creates a constraint on the evolution of the visual system such that the optic nerve is forced to exit the retina through a point called the optic disc. This, in turn, creates a blind spot. More importantly, it makes vision vulnerable to increased pressure within the eye (glaucoma) since this cups and damages the optic nerve at this point, resulting in impaired vision.
Other constraints occur as the byproduct of adaptive innovations.
Trade-offs and conflicts
One constraint upon selection is that different adaptations can conflict, which requires a compromise between them to ensure an optimal cost-benefit tradeoff.
Running efficiency in women, and birth canal size
Encephalization, and gut size
Skin pigmentation protection from UV, and the skin synthesis of vitamin D
Speech and its use of a descended larynx, and increased risk of choking
Competition effects
Different forms of competition exist and these can shape the processes of genetic change.
mate choice and disease susceptibility
genomic conflict between mother and fetus that results in pre-eclampsia
Lifestyle
Humans evolved to live as simple hunter-gatherers in small tribal bands, while contemporary humans have a more complex life. This change may make present-day humans susceptible to lifestyle diseases.
Diet
In contrast to the diet of early hunter-gatherers, the modern Western diet often contains high quantities of fat, salt, and simple carbohydrates, such as refined sugars and flours.
Trans fat health risks
Dental caries
High GI foods
Modern diet based on "common wisdom" regarding diets in the paleolithic era
Among different countries, the incidence of colon cancer varies widely, and the extent of exposure to a Western pattern diet may be a factor in cancer incidence.
Life expectancy
Examples of aging-associated diseases are atherosclerosis and cardiovascular disease, cancer, arthritis, cataracts, osteoporosis, type 2 diabetes, hypertension and Alzheimer's disease. The incidence of all of these diseases increases rapidly with aging (increases exponentially with age, in the case of cancer).
Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%.
Exercise
Many contemporary humans engage in little physical exercise compared to the physically active lifestyles of ancestral hunter-gatherers. Prolonged periods of inactivity may have occurred in early humans only following illness or injury, so a modern sedentary lifestyle may continuously cue the body to trigger life-preserving metabolic and stress-related responses such as inflammation, and some theorize that this causes chronic diseases.
Cleanliness
Contemporary humans in developed countries are mostly free of parasites, particularly intestinal ones. This is largely due to frequent washing of clothing and the body, and improved sanitation. Although such hygiene can be very important for maintaining good health, it can be problematic for the proper development of the immune system. The hygiene hypothesis is that humans evolved to be dependent on certain microorganisms that help establish the immune system, and modern hygiene practices can prevent necessary exposure to these microorganisms. "Microorganisms and macroorganisms such as helminths from mud, animals, and feces play a critical role in driving immunoregulation" (Rook, 2012). Essential microorganisms play a crucial role in building and training immune functions that fight off and repel some diseases, and protect against excessive inflammation, which has been implicated in several diseases. For instance, recent studies have found evidence supporting inflammation as a contributing factor in Alzheimer's disease.
Specific explanations
This is a partial list: all links here go to a section describing or debating its evolutionary origin.
Life stage related
Adipose tissue in human infants
Arthritis and other chronic inflammatory diseases
Ageing
Alzheimer disease
Childhood
Menarche
Menopause
Menstruation
Morning sickness
Other
Atherosclerosis
Arthritis and other chronic inflammatory diseases
Cough
Cystic fibrosis
Dental occlusion
Diabetes Type II
Diarrhea
Essential hypertension
Fever
Gestational hypertension
Gout
Iron deficiency (paradoxical benefits)
Obesity
Phenylketonuria
Placebos
Osteoporosis
Red blood cell polymorphism disorders
Sickle cell anemia
Sickness behavior
Women's reproductive cancers
Evolutionary psychology
Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies with evolutionary perspectives on medicine and physiological dysfunctions (see, in particular, Randy Nesse and George C. Williams' book Why We Get Sick).
Evolutionary psychiatrists and psychologists suggest that some mental disorders likely have multiple causes.
See several topic areas, and the associated references, below.
Agoraphobia
Anxiety
Depression
Drug abuse
Schizophrenia
Unhappiness
History
Charles Darwin did not discuss the implications of his work for medicine, though biologists quickly appreciated the germ theory of disease and its implications for understanding the evolution of pathogens, as well as an organism's need to defend against them.
Medicine, in turn, ignored evolution, and instead focused (as done in the hard sciences) upon proximate mechanical causes.
George C. Williams was the first to apply evolutionary theory to health in the context of senescence. Also in the 1950s, John Bowlby approached the problem of disturbed child development from an evolutionary perspective upon attachment.
An important theoretical development was Nikolaas Tinbergen's distinction made originally in ethology between evolutionary and proximate mechanisms.
Randolph M. Nesse has summarized its relevance to medicine.
The paper of Paul Ewald in 1980, "Evolutionary Biology and the Treatment of Signs and Symptoms of Infectious Disease", and that of Williams and Nesse in 1991, "The Dawn of Darwinian Medicine", were key developments. The latter paper drew a favorable reception and led to a book, Why We Get Sick (published as Evolution and Healing in the UK). In 2008, an online journal started: Evolution and Medicine Review.
In 2000, Paul Sherman hypothesised that morning sickness could be an adaptation that protects the developing fetus from foodborne illnesses, some of which can cause miscarriage or birth defects, such as listeriosis and toxoplasmosis.
See also
Evolutionary therapy
Evolutionary psychiatry
Evolutionary physiology
Evolutionary psychology
Evolutionary developmental psychopathology
Evolutionary approaches to depression
Illness
Paleolithic lifestyle
Universal Darwinism
References
Further reading
Books
Online articles
External links
Evolution and Medicine Network
Special Issue of Evolutionary Applications on Evolutionary Medicine
Evolutionary biology
Clinical medicine
Bottom–up and top–down design
Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.
A top–down approach (also known as stepwise design and stepwise refinement, and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms, or may not be detailed enough to realistically validate the model. A top–down approach starts with the big picture and then breaks it down into smaller segments.
A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.
Product design and development
During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, takes a more top–down approach, with almost everything custom designed.
Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top–down and bottom–up approaches play a key role.
Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top–down approaches are implemented by attaching stubs in place of not-yet-written modules, but this delays testing of the ultimate functional units of a system until significant design is complete.
Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach.
Top–down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming assisted in demonstrating the idea that both aspects of top-down and bottom-up programming could be used.
Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor.
Programming
Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. A sketch of this style follows.
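The following minimal Python sketch illustrates the idea; the report-generation task and all function names are invented for illustration, not drawn from any particular source. The main procedure is written first in terms of the major functions it needs, and each of those is then refined into simpler pieces.

    # Top-down style: the main procedure is written first, naming the major
    # functions it needs; each named function is then refined in turn.

    def generate_report(path):
        records = load_records(path)   # named here before being written
        summary = summarize(records)
        return format_report(summary)

    # First refinement: how records are loaded (one "name,value" pair per line).
    def load_records(path):
        with open(path, encoding="utf-8") as f:
            return [parse_line(line) for line in f if line.strip()]

    # Second refinement: how a single line is parsed.
    def parse_line(line):
        name, value = line.strip().split(",")
        return {"name": name, "value": float(value)}

    def summarize(records):
        return {"count": len(records), "total": sum(r["value"] for r in records)}

    def format_report(summary):
        return "{count} records, total {total:.2f}".format(**summary)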
Bottom–up programming, conversely, begins with the base elements, which are specified in detail, linked into larger subsystems, and so on until a complete top-level system is formed, as described above. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as pieces that are not part of the whole and later add those pieces together to form assemblies, like building with Lego. Engineers call this "piece part design". A complementary sketch follows.
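A complementary Python sketch of the bottom–up style, again with invented names: small, independently testable base elements are written first and then composed into a larger routine.

    # Bottom-up style: small, self-contained pieces are written and tested
    # first, then composed into a larger subsystem.

    def celsius_to_kelvin(c):
        return c + 273.15

    def mean(xs):
        return sum(xs) / len(xs)

    # A larger routine assembled from the already-working pieces above:
    def mean_kelvin(celsius_readings):
        return mean([celsius_to_kelvin(c) for c in celsius_readings])

    assert round(mean_kelvin([0.0, 100.0]), 2) == 323.15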
Parsing
Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.
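Parsers themselves come in top–down and bottom–up varieties. As a minimal, purely illustrative example, the recursive-descent parser below (a classic top–down technique) determines the structure of simple arithmetic expressions by starting from the grammar's top rule; a bottom–up (e.g. shift-reduce) parser would instead assemble the structure from the tokens upward. The toy grammar and code are assumptions for the sketch, not from any cited source.

    # Recursive-descent (top-down) parser for the toy grammar:
    #   expr   -> term (('+' | '-') term)*
    #   term   -> factor (('*' | '/') factor)*
    #   factor -> NUMBER | '(' expr ')'
    # It evaluates the expression while parsing it.
    import re

    def parse(text):
        tokens = re.findall(r"\d+\.?\d*|[()+\-*/]", text)
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat(expected=None):
            nonlocal pos
            tok = tokens[pos]
            if expected is not None and tok != expected:
                raise SyntaxError("expected %r, got %r" % (expected, tok))
            pos += 1
            return tok

        def expr():
            value = term()
            while peek() in ("+", "-"):
                value = value + term() if eat() == "+" else value - term()
            return value

        def term():
            value = factor()
            while peek() in ("*", "/"):
                value = value * factor() if eat() == "*" else value / factor()
            return value

        def factor():
            if peek() == "(":
                eat("(")
                value = expr()
                eat(")")
                return value
            return float(eat())

        result = expr()
        if pos != len(tokens):
            raise SyntaxError("unexpected trailing input")
        return result

    assert parse("2*(3+4)") == 14.0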
Nanotechnology
Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.
A top–down approach often uses the traditional workshop or microfabrication methods, where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top–down secondary approach to engineering nanostructures.
Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition (see also supramolecular chemistry). Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top–down methods, but they could potentially be overwhelmed as the size and complexity of the desired assembly increases.
Neuroscience and psychology
These terms are also employed in cognitive sciences including neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19).
According to college teaching notes written by Charles Ramskov, Irvin Rock, Neisser, and Richard Gregory claim that the top–down approach involves perception that is an active and constructive process. Additionally, in this view perception is not directly given by stimulus input, but results from the interaction of the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."
Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom–up approach, Gibson, claims that visual perception is a process that relies on the information available in the proximal stimulus produced by the distal stimulus. Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough."
Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom–up connections. Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top–down influence.
The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion; your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this with a situation in which you are looking for a flower. You have a representation of what you are looking for, and when you see the object you are looking for, it is salient. This is an example of the use of top–down information.
In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.
Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily concerns attention-driven factors, such as task repetition, whereas bottom–up processing concerns item-based learning, such as finding the same object over and over again (Schneider, 2015). These findings have implications for understanding attentional control of response selection in conflict situations.
A similar distinction applies to how information interfaces for procedural learning are structured. Zacks and Tversky (2003) found that although top–down principles were effective in guiding interface design, they were not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces.
Schooling
Undergraduate (or bachelor's) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, working through the four main parts of the processing from a learning perspective. In brief, bottom–up processing is determined directly by environmental stimuli, whereas top–down processing is shaped by the individual's knowledge and expectations (Koch, 2022).
Management and organization
In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented.
A "top–down" approach is one in which an executive decision maker or other top person makes decisions about how something should be done. These decisions are disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the change down to the frontline staff.
A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".
Positive aspects of top–down approaches include their efficiency and the superb overview they give of higher levels, and the fact that external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third, combined approach to change.
Public health
Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.
Architecture
Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.
By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design).
Ecology
In ecology, top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey influence lower trophic levels, and changes at the top trophic level have an inverse effect on the levels below. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is kelp forest ecosystems, in which sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by the productivity of the kelp but rather by a top predator. The inverse effect of top–down control is visible here: when the population of otters decreased, the population of urchins increased.
Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.
Philosophy and ethics
Top–down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top–down and bottom–up reasoning until both are in harmony; that is to say, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs when cognitive dissonance arises as reasoners try to reconcile top–down with bottom–up reasoning; they then adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements.
See also
The Cathedral and the Bazaar
Pseudocode
References cited
https://philpapers.org/rec/COHTNO
Citations and notes
Further reading
Corpeño, E (2021). "The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning".
Goldstein, E.B. (2010). Sensation and Perception. USA: Wadsworth.
Galotti, K. (2008). Cognitive Psychology: In and out of the laboratory. USA: Wadsworth.
Dubois, Hans F.W. 2002. Harmonization of the European vaccination policy and the role TQM and reengineering could play. Quality Management in Health Care 10(2): 47–57.
J. A. Estes, M. T. Tinker, T. M. Williams, D. F. Doak, "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998: Vol. 282, no. 5388, pp. 473–476.
Luiz Carlos Bresser-Pereira, José María Maravall, and Adam Przeworski, 1993. Economic reforms in new democracies. Cambridge: Cambridge University Press.
External links
"Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971)
Integrated Parallel Bottom-up and Top-down Approach. In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998).
Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003.
K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989.
Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Dichotomies
Information science
Neuropsychology
Software design
Hierarchy
Human genetic enhancement
Human genetic enhancement or human genetic engineering refers to human enhancement by means of a genetic modification. This could be done in order to cure diseases (gene therapy), prevent the possibility of getting a particular disease (similarly to vaccines), to improve athlete performance in sporting events (gene doping), or to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence.
These genetic enhancements may or may not be done in such a way that the change is heritable (which has raised concerns within the scientific community).
Ethics
Genetics is the study of genes and inherited traits, and while ongoing advancements in this field have improved healthcare at multiple levels, ethical considerations have become increasingly crucial alongside them. Genetic engineering has always been a topic of moral debate among bioethicists. Even though the technological advancements in this field present exciting prospects for biomedical improvement, they also prompt the need for ethical, societal, and practical assessments to understand their impact on human biology, evolution, and the environment. Genetic testing, genetic engineering, and stem cell research are often discussed together due to the interrelated moral arguments surrounding these topics. The distinction between repairing genes and enhancing genes is a central idea in many moral debates surrounding genetic enhancement, because some argue that repairing genes is morally permissible but that genetic enhancement is not, due to its potential to lead to social injustice through discriminatory eugenics initiatives.
Moral questions related to genetic testing are often related to duty to warn family members if an inherited disorder is discovered, how physicians should navigate patient autonomy and confidentiality with regard to genetic testing, the ethics of genetic discrimination, and the moral permissibility of using genetic testing to avoid causing seriously disabled persons to exist, such as through selective abortion.
The responsibility of public health professionals is to determine potential exposures and suggest testing for communicable diseases that require reporting. Public health professionals may encounter disclosure concerns if the extension of obligatory screening results in genetic abnormalities being classified as reportable conditions.
Genetic data is personal and closely linked to a person's identity. Confidentiality concerns not only work, health care, and insurance coverage; a family's whole genetic test results can be affected. A person's parents, children, siblings, and even extended relatives may also be affected if the condition is genetically dominant or carried by them. Moreover, a person's decisions could change their entire life depending on the outcome of a genetic test, and results of genetic testing may need to be disclosed in all facets of a person's life.
Non-invasive prenatal testing (NIPT) can accurately determine the sex of the fetus at an early stage of gestation, raising concerns about the potential facilitation of sex-selective termination of pregnancy (TOP) due to its ease, timing, and precision. Although ultrasound technology has the capacity to do the same, NIPT has recently drawn attention because it can accurately identify the fetus's sex early in the pregnancy, with increasing precision from as early as 7 weeks' gestation. This timeframe precedes the typical timing of other sex determination techniques, such as ultrasound or chorionic villus sampling (CVS). The high early accuracy of NIPT reduces the uncertainty associated with such methods, leading to more informed decisions and eliminating the risk of inaccurate results that could influence decision-making regarding sex-selective TOP. Additionally, NIPT enables sex-selective TOP in the first trimester, which is more practical and allows pregnant women to postpone maternal-fetal bonding. These considerations may significantly facilitate the pursuit of sex-selective TOP when NIPT is utilized. It is therefore crucial to examine these ethical concerns within the framework of NIPT adoption.
Ethical issues related to gene therapy and human genetic enhancement concern the medical risks and benefits of the therapy, the duty to use the procedures to prevent suffering, reproductive freedom in genetic choices, and the morality of practicing positive genetics, which includes attempts to improve normal functions.
In every genetics-based study conducted on humans, research must be carried out in accordance with an ethics committee approval statement, ethical and legal norms, and human morality. CAR T cell therapy, an emerging treatment, aims to change the genetics of T cells so that immune system cells that do not recognize cancer are transformed into cells that recognize and fight it; the T cells can be engineered using CRISPR (clustered regularly interspaced short palindromic repeats).
All research involving human subjects in healthcare settings must be registered in a public database before recruitment of the first participant. The informed consent statement should include adequate information about possible conflicts of interest, the expected benefits of the study, its potential risks, and other issues related to any discomfort it may involve.
Technological advancements play an integral role in enabling new forms of human enhancement. While phenotypic and somatic interventions for human enhancement raise noteworthy ethical and sociological dilemmas, germline heritable genetic intervention necessitates even more comprehensive deliberation at the individual and societal levels.
Moral judgments are empirically based and entail evaluating prospective risk-benefit ratios particularly in the field of biomedicine. The technology of CRISPR genome editing raises ethical questions for several reasons. To be more specific, concerns exist regarding the capabilities and technological constraints of CRISPR technology. Furthermore, the long-term effects of the altered organisms and the possibility of the edited genes being passed down to succeeding generations and having unanticipated effects are two further issues to be concerned about. Decision-making on morality becomes more difficult when uncertainty from these circumstances prevents appropriate risk/benefit assessments.
The potential benefits of tools like CRISPR are considerable. For example, because it can be applied directly in the embryo, CRISPR/Cas9 reduces the time required to modify target genes compared to gene targeting technologies that rely on embryonic stem (ES) cells. Bioinformatics tools developed to identify the optimal sequences for designing guide RNAs, together with optimization of experimental conditions, have provided robust procedures that help ensure the successful introduction of the desired mutation. Proponents argue that major benefits are likely to develop from the use of safe and effective human germline genetic modification (HGGM), making a blanket precautionary stance against HGGM unethical.
Going forward, many people support the establishment of an organization that would provide guidance on how best to control the ethical complexities mentioned above. Recently, a group of scientists founded the Association for Responsible Research and Innovation in Genome Editing (ARRIGE) to study and provide guidance on the ethical use of genome editing.
In addition, Jasanoff and Hurlbut have recently advocated for the establishment and international development of an interdisciplinary "global observatory for gene editing".
Researchers proposed that debates in gene editing should not be controlled by the scientific community. The network is envisioned to focus on gathering information from dispersed sources, bringing to the fore perspectives that are often overlooked, and fostering exchange across disciplinary and cultural divides.
The interventions aimed at enhancing human traits from a genetic perspective are emphasized to be contingent upon the understanding of genetic engineering, and comprehending the outcomes of these interventions requires an understanding of the interactions between humans and other living beings. Therefore, the regulation of genetic engineering underscores the significance of examining the knowledge between humans and the environment.
To cope with the ethical challenges and uncertainties arising from genetic advancements, it has been emphasized that the development of comprehensive guidelines based on universal principles is essential. The importance of adopting a cautious approach to safeguard fundamental values such as autonomy, global well-being, and individual dignity has been elucidated when overcoming these challenges.
When contemplating genetic enhancement, genetic technologies should be approached from a broad perspective, using a definition that encompasses not only direct genetic manipulation but also indirect technologies such as biosynthetic drugs. It has been emphasized that attention should be given to expectations that can shape the marketing and availability of these technologies, anticipating the allure of new treatments. These expectations have been noted to potentially signify the encouragement of appropriate public policies and effective professional regulations.
Clinical stem cell research must be conducted in accordance with ethical values. This entails a full respect for ethical principles, including the accurate assessment of the balance between risks and benefits, as well as obtaining informed and voluntary participant consent. The design of research should be strengthened, scientific and ethical reviews should be effectively coordinated, assurance should be provided that participants understand the fundamental features of the research, and full compliance with additional ethical requirements for disclosing negative findings has been addressed.
Clinicians have been emphasized to understand the role of genomic medicine in accurately diagnosing patients and guiding treatment decisions. It has been highlighted that detailed clinical information and expert opinions are crucial for the accurate interpretation of genetic variants. While personalized medicine applications are exciting, it has been noted that the impact and evidence base of each intervention should be carefully evaluated. The human genome contains millions of genetic variants, so caution should be exercised and expert opinions sought when analyzing genomic results.
Disease prevention
With the discovery of various types of immune-related disorders, there is a need for diversification in prevention and treatment. Developments in gene therapy are being studied for inclusion in this scope of treatment, but more research is needed to increase the positive results and minimize the negative effects of gene therapy applications.
The CRISPR/Cas9 system has also been designed as a gene editing technology for the treatment of HIV-1/AIDS. CRISPR/Cas9 is the latest gene editing technique that allows the insertion, deletion and modification of DNA sequences, and it provides advantages in disrupting the latent HIV-1 virus. However, the production of some vectors for HIV-1-infected cells is still limited, and further studies are needed.
Being an HIV carrier also plays an important role in the incidence of cervical cancer. While there are many personal and biological factors that contribute to the development of cervical cancer, HIV carriage is correlated with its occurrence. However, long-term research on the effectiveness of preventive treatment is still ongoing. Early education, accessible worldwide, will play an important role in prevention.
When medications and treatment methods are consistently adhered to, safe sexual practices are maintained and healthy lifestyle changes are implemented, the risk of transmission is reduced in most people living with HIV. Consistently implemented proactive prevention strategies can significantly reduce the incidence of HIV infections. Education on safe sex practices and risk-reducing changes for everyone, whether they are HIV carriers or not, is critical to preventing the disease.
However, controlling the HIV epidemic and eliminating the stigma associated with the disease may not be possible only through a general AIDS awareness campaign. It is observed that HIV awareness, especially among individuals in low socio-economic regions, is considerably lower than the general population. Although there is no clear-cut solution to prevent the transmission of HIV and the spread of the disease through sexual transmission, a combination of preventive measures can help to control the spread of HIV. Increasing knowledge and awareness plays an important role in preventing the spread of HIV by contributing to the improvement of behavioral decisions with high risk perception.
Genetics plays a pivotal role in disease prevention, offering insights into an individual's predisposition to certain conditions and paving the way for personalized strategies to mitigate disease risk. The burgeoning field of genetic testing and analysis has provided valuable tools for identifying genetic markers associated with various diseases, allowing proactive measures to be taken in disease prevention. Genetic testing can unveil an individual's genetic susceptibility to certain diseases, enabling early detection and intervention, which can be crucial in diseases such as heritable breast and ovarian cancers. Genetic information can also inform the development of precision medicine approaches and targeted therapies for disease prevention in general. By identifying genetic factors contributing to disease susceptibility, such as specific gene mutations associated with autoimmune disorders, researchers can develop targeted therapies to modulate the immune response and prevent the onset or progression of these conditions.
There are many types of neurodegenerative diseases. Alzheimer's disease is one of the most common and affects millions of people worldwide. CRISPR-Cas9 techniques may be used to help prevent Alzheimer's disease: for example, they have the potential to correct autosomal dominant mutations in problematic neurons, restore the associated electrophysiological deficits, and decrease Aβ peptides. Amyotrophic lateral sclerosis (ALS) is another highly lethal neurodegenerative disease, and CRISPR-Cas9 technology offers a simple and effective way of correcting specific point mutations associated with ALS. Using this technology, Chen and colleagues also found important alterations in major indicators of ALS, such as reductions in RNA foci, aberrant polypeptides, and haploinsufficiency.
Some individuals experience immunocompromise, a condition in which their immune systems are weakened and less effective at defending against various diseases, including but not limited to influenza. This susceptibility to infections can be attributed to a range of factors, including genetic flaws and genetic diseases such as severe combined immunodeficiency (SCID). Some gene therapies have already been developed, or are being developed, to correct these genetic flaws and diseases, thereby making these people less susceptible to catching additional diseases such as influenza. These genetic flaws and diseases can significantly impact the body's ability to mount an effective immune response, leaving individuals vulnerable to a wide array of pathogens. Advancements in gene therapy research and development have shown promising potential in addressing these genetic deficiencies, although not without associated challenges.
CRISPR technology is a promising tool not only for genetic disease corrections but also for the prevention of viral and bacterial infections. Utilizing CRISPR–Cas therapies, researchers have targeted viral infections like HSV-1, EBV, HIV-1, HBV, HPV, and HCV, with ongoing clinical trials for an HIV-clearing strategy named EBT-101. Additionally, CRISPR has demonstrated efficacy in preventing viral infections such as IAV and SARS-CoV-2 by targeting viral RNA genomes with Cas13d, and it has been used to sensitize antibiotic-resistant S. aureus to treatment through Cas9 delivered via bacteriophages.
Advancements in gene editing and gene therapy hold promise for disease prevention by addressing genetic factors associated with certain conditions. Techniques like CRISPR-Cas9 offer the potential to correct genetic mutations associated with hereditary diseases, thereby preventing their manifestation in future generations and reducing disease burden. In November 2018, the gene-edited babies Lulu and Nana were created: using clustered regularly interspaced short palindromic repeat (CRISPR)-Cas9, researchers disabled a gene called CCR5 in the embryos, aiming to close the protein doorway that allows HIV to enter a cell and thereby make the subjects immune to HIV.
Despite existing evidence of CRISPR technology, advancements in the field persist in reducing limitations. Researchers developed a new, gentle gene editing method for embryos using nanoparticles and peptide nucleic acids. Delivering editing tools without harsh injections, the method successfully corrected genes in mice without harming development. While ethical and technical questions remain, this study paves the way for potential future use in improving livestock and research animals, and maybe even in human embryos for disease prevention or therapy.
Informing prospective parents about their susceptibility to genetic diseases is crucial. Pre-implantation genetic diagnosis also holds significance for disease prevention by inheritance, as whole genome amplification and analysis help select a healthy embryo for implantation, preventing the transmission of a fatal metabolic disorder in the family.
Genetic human enhancement emerges as a potential frontier in disease prevention by precisely targeting genetic predispositions to various illnesses. Through techniques like CRISPR, specific genes associated with diseases can be edited or modified, offering the prospect of reducing the hereditary risk of conditions such as cancer, cardiovascular disorders, or neurodegenerative diseases. This approach not only holds the potential to break the cycle of certain genetic disorders but also to influence the health trajectories of future generations.
Furthermore, genetic enhancement can extend its impact by focusing on fortifying the immune system and optimizing overall health parameters. By enhancing immune responses and fine-tuning genetic factors related to general well-being, the susceptibility to infectious diseases can be minimized. This proactive approach to health may contribute to a population less prone to ailments and more resilient in the face of environmental challenges.
However, the ethical dimensions of genetic manipulation cannot be overstated. Striking a delicate balance between scientific progress and ethical considerations is imperative. Robust regulatory frameworks and transparent guidelines are crucial to ensuring that genetic human enhancement is utilized responsibly, avoiding unintended consequences or potential misuse. As the field advances, the integration of ethical, legal, and social perspectives becomes paramount to harness the full potential of genetic human enhancement for disease prevention while respecting individual rights and societal values.
Overall, the technology requires improvements in effectiveness, precision, and applications. Immunogenicity, off-target effects, mutations, delivery systems, and ethical issues are the main challenges that CRISPR technology faces. The safety concerns, ethical considerations, and potential for misuse underscore the need for careful and responsible exploration of these technologies. CRISPR-Cas9 technology offers much for disease prevention and treatment, yet its future effects, especially those on subsequent generations, should be investigated rigorously.
Disease treatment
Gene therapy
Modification of human genes in order to treat genetic diseases is referred to as gene therapy: a medical procedure that involves inserting genetic material into a patient's cells to repair or fix a malfunctioning gene in order to treat hereditary illnesses. Between 1989 and December 2018, over 2,900 clinical trials of gene therapies were conducted, more than half of them in phase I. Since that time, many gene-therapy-based drugs have become available, such as Zolgensma and Patisiran. Most of these approaches utilize viral vectors, such as adeno-associated viruses (AAVs), adenoviruses (AV) and lentiviruses (LV), for inserting or replacing transgenes in vivo or ex vivo.
In 2023, nanoparticles that act similarly to viral vectors were created. These nanoparticles, called bioorthogonal engineered virus-like recombinant biosomes, display strong and rapid binding to LDL receptors on cell surfaces, allowing them to enter cells efficiently and deliver genes to specific target areas, such as tumor and arthritic tissues.
RNA interference-based agents, such as zilebesiran, contain siRNA which binds with mRNA of the target cells, modifying gene expression.
CRISPR/Cas9
Many diseases are complex and cannot be effectively treated by simple coding-sequence-targeting strategies. CRISPR/Cas9 is a technology that introduces targeted double-strand breaks into the human genome, modifying genes and providing a quick way to treat genetic disorders. Gene treatment employing the CRISPR/Cas genome editing method is known as CRISPR/Cas-based gene therapy. Mammalian cells can be genetically modified using the straightforward, affordable, and highly specific CRISPR/Cas method. It can enable single-base exchanges, homology-directed repair, and non-homologous end joining. The primary application is targeted gene knockouts, involving the disruption of coding sequences to silence deleterious proteins. Since the development of the CRISPR-Cas9 gene editing method between 2010 and 2012, scientists have been able to alter genes by making specific breaks in their DNA. This technology has many uses, including genome editing and molecular diagnosis.
Genetic engineering has undergone a revolution thanks to CRISPR/Cas technology, which provides a flexible framework for building disease models in larger animals. This breakthrough has created new opportunities to evaluate possible therapeutic strategies and comprehend the genetic foundations of different diseases. But in order to fully realize the promise of CRISPR/Cas-based gene therapy, a number of obstacles must be overcome. Improving the editing precision and efficiency of CRISPR/Cas systems is one of the main issues: although the technology makes precise gene editing possible, reducing off-target consequences remains a major challenge, since unintentional genetic changes may have unanticipated effects or complications. Researchers are actively attempting to improve the specificity and reduce the off-target effects of CRISPR/Cas procedures using enhanced guide RNA designs, updated Cas proteins, and bioinformatics tools. Moreover, the efficient and specific delivery of CRISPR components to target tissues presents another obstacle. Delivery systems must be developed or optimized to ensure the CRISPR machinery reaches the intended cells or organs efficiently and safely. This includes exploring delivery methods such as viral vectors, nanoparticles, and lipid-based carriers to transport CRISPR components accurately to the target tissues while minimizing potential toxicity or immune responses.
Despite recent progress, further research is needed to develop safe and effective CRISPR therapies. CRISPR/Cas9 technology is not yet in routine clinical use; however, there are ongoing clinical trials of its use in treating various disorders, including sickle cell disease, human papillomavirus (HPV)-related cervical cancer, COVID-19 respiratory infection, renal cell carcinoma, and multiple myeloma.
Gene doping
Athletes might adopt gene therapy technologies to improve their performance. Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic or enhancement purposes compromises the ethical foundations of medicine and sports. This subfield of genetic engineering, commonly referred to in sports as gene doping, has therefore been prohibited due to its potential risks; the prohibition has been enforced since 2003, pursuant to a decision taken by the World Anti-Doping Agency (WADA). The primary objective of the underlying technology is to aid individuals with medical conditions, but athletes, even when cognizant of the associated health risks, may resort to this method in pursuit of enhanced athletic performance.
A study conducted in 2011 underscored the significance of addressing issues related to gene doping and of promptly understanding how gene doping in sports and exercise medicine could impact healthcare services, elucidating its potential to enhance athletic performance. According to WADA, gene doping poses a threat to the fairness of sports. The misuse of gene doping to enhance athletic performance constitutes an unethical practice and entails significant health risks, including but not limited to cancer, viral infections, myocardial infarction, skeletal damage, and autoimmune complications. In addition, gene doping may give rise to health issues such as excessive muscle development leading to conditions like hypertrophic cardiomyopathy, and may render bones and tendons more susceptible to injuries.
Several genes, such as EPO, IGF1, VEGFA, GH, HIFs, PPARD, PCK1, and myostatin, are prominent choices for gene doping. In particular, athletes employ substances such as antibodies against myostatin or myostatin blockers, which contribute to increased mass, facilitate muscle development, and enhance strength. However, the primary genes used for gene doping in humans may lead to complications such as excessive muscle growth, which can adversely impact the cardiovascular system and increase the likelihood of injuries. Due to insufficient awareness of these risks, numerous athletes resort to gene doping for purposes divergent from its genuine intent.
In the interest of athlete health, sports ethics, and the ethos of fair play, scientists have developed various technologies for the detection of gene doping. Although the technology used in its early years was not reliable, more extensive research has produced more successful techniques for uncovering instances of gene doping. Initially, scientists resorted to techniques such as PCR in its various forms. This was not successful because such technologies rely on detecting exon-exon junctions in the DNA; the resulting lack of precision meant that results could easily be tampered with using misleading primers, and gene doping would go undetected.
With the emergence of new technologies, more recent studies have utilized Next Generation Sequencing (NGS) as a method of detection. With the help of bioinformatics, this technology surpasses previous sequencing techniques in its in-depth analysis of DNA makeup. NGS analyzes the sample sequence and compares it to a pre-existing reference sequence from a gene database. This way, primer tampering is not possible, as the detection takes place at the genomic level. Using bioinformatic visualization tools, the data can be easily read, and sequences that do not align with the reference sequence can be highlighted. Most recently, a high-efficiency gene doping analysis method developed in 2023, leveraging cutting-edge technology, is HiGDA (High-efficiency Gene Doping Analysis), which employs CRISPR/deadCas9 technology.
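As an illustration of the junction-based detection idea described above, the toy Python sketch below flags a sequencing read that matches an intron-less (cDNA-like) copy of a gene but not the genomic reference. The gene structure and sequences are invented for the demo; real NGS pipelines align millions of reads against reference genomes with dedicated tools.

```python
# Toy sketch of junction-based transgene detection; sequences are invented.
GENOMIC = "ATGGCCAAAG" + "gtaagtttcag" + "GTGAACCTGA"  # exon 1 + intron + exon 2
CDNA = "ATGGCCAAAG" + "GTGAACCTGA"                     # spliced transgene (no intron)

def suggests_transgene(read: str) -> bool:
    """Flag reads that match the intron-less cDNA but not the genomic
    reference: such reads span an exon-exon junction that exists only
    in a transgenic (doping) construct."""
    return read in CDNA and read not in GENOMIC.upper()

read = "GCCAAAGGTGAAC"  # crosses the exon 1 / exon 2 boundary
print(suggests_transgene(read))  # True -> possible gene doping
```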
The ethical issues concerning gene doping have been present long before its discovery. Although gene doping is relatively new, the concept of genetic enhancement of any kind has always been subject to ethical concerns. Even when used in a therapeutic manner, gene therapy poses many risks due to its unpredictability among other reasons. Factors other than health issues have raised ethical questions as well. These are mostly concerned with the hereditary factor of these therapies, where gene editing in some cases can be transmitted to the next generation with higher rates of unpredictability and risks in outcomes. For this reason, non-therapeutic application of gene therapy can be seen as a riskier approach to a non-medical concern.
Throughout history, human beings have always been in competition. In the past warriors competed to be stronger in wars; today there is competition to be successful in every field, and this psychology is a phenomenon that has existed throughout human history. Although an athlete may have genetic potential, he or she cannot become a champion without the necessary training and lifestyle. As competition increases, however, both more physical training and more mental performance are needed. Just as warriors in history used herbal preparations to appear stronger and more aggressive, today some athletes resort to doping methods to increase their performance. This situation is against sports ethics because it does not comply with the morality and spirit of the game.
One of the negative effects is the risk of cancer; a claimed positive effect is protection against certain pathological conditions. Altering genes could lead to unintended and unpredictable changes in the body, potentially causing unforeseen health issues. Further effects of gene doping in sports include the constant fight against substances not approved by the World Anti-Doping Agency and unfairness between athletes who take such substances and those who do not. The long-term health consequences of gene doping may not be fully understood, and athletes may face health problems later in life.
Other uses
Other hypothetical gene therapies could include changes to physical appearance, metabolism, mental faculties such as memory and intelligence, and well-being (by increasing resistance to depression or relieving chronic pain, for example).
Physical appearance
The exploration of challenges in understanding the effects of gene alterations on phenotypes, particularly within natural genetic diversity, is highlighted. Emphasis is placed on the potential of systems biology and advancements in genotyping/phenotyping technologies for studying complex traits. Despite progress, persistent difficulties in predicting the influence of gene alterations on phenotypic changes are acknowledged, emphasizing the ongoing need for research in this area.
Some congenital disorders (such as those affecting the musculoskeletal system) may affect physical appearance and, in some cases, cause physical discomfort. Modifying the genes causing these congenital diseases (in those diagnosed with mutations known to cause them) may prevent this.
In a comprehensive CRISPR-Cas9 gene editing study targeting the Tyr gene in mice, the analysis found no off-target effects across 42 subjects, observing modifications exclusively at the intended Tyr locus. Though specifics were not explicitly discussed, these alterations may potentially influence non-defined aspects, such as coat color, emphasizing the broader potential of gene editing in inducing diverse phenotype changes.
Changes in the myostatin gene may also alter appearance.
Behavior
Significant quantitative genetic discoveries were made in the 1970s and 1980s, going beyond estimating heritability. However, controversies such as The Bell Curve resurfaced, and by the 1990s scientists recognized the importance of genetics for behavioral traits such as intelligence. The American Psychological Association's Centennial Conference in 1992 chose behavioral genetics as a theme for the past, present, and future of psychology. As quantitative genetic discoveries slowed, a synthesis with molecular genetics resulted in the DNA revolution and behavioral genomics. Thanks to the DNA revolution in the behavioral sciences, individual behavioral differences can now be predicted early. The first law of behavioral genetics was established in 1978, after a review of thirty twin studies revealed that the average heritability estimate for intelligence was 46%. Behavior may also be modified by genetic intervention: some people may be aggressive or selfish and may not be able to function well in society. Genetic research has linked mutations in GLI3 and other patterning genes to the etiology of hypothalamic hamartoma (HH). Approximately 50%-80% of children with HH have episodes of acute rage and violence, and the majority of patients have externalizing problems; epilepsy may be preceded by behavioral instability and intellectual disability. There is currently ongoing research on genes that are, or may be, (in part) responsible for selfishness (e.g. a "ruthlessness gene"), aggression (e.g. the "warrior gene"), and altruism (e.g. OXTR, CD38, COMT, DRD4, DRD5, IGF2, GABRB2).
There has been a great anticipation of gene editing technology to modify genes and regulate our biology since the invention of recombinant DNA technology. These expectations, however, have mostly gone unmet. Evaluation of the appropriate uses of germline interventions in reproductive medicine should not be based on concerns about enhancement or eugenics, despite the fact that gene editing research has advanced significantly toward clinical application.
Cystic fibrosis (CF) is a hereditary disease caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. While 90% of CF patients can be treated, current treatments are not curative and do not address the entire spectrum of CFTR mutations. Therefore, a comprehensive, long-term therapy is needed to treat all CF patients once and for all. CRISPR/Cas gene editing technologies are being developed as a viable platform for genetic treatment. However, the difficulty of delivering enough of the CFTR gene and sustaining its expression in the lungs has hampered gene therapy's efficacy. Recent technical breakthroughs, including viral and non-viral vector delivery, alternative nucleic acid technologies, and new approaches like mRNA and CRISPR gene editing, have taken advantage of our understanding of CF biology and the airway epithelium.
Human gene transfer has held the promise of a lasting remedy for hereditary illnesses such as cystic fibrosis (CF) since its conception. The emergence of sophisticated technologies that allow site-specific alteration with programmable nucleases has greatly revitalized the field of gene therapy. There is also ongoing research on the hypothetical treatment of psychiatric disorders by means of gene therapy. It is assumed that, with gene-transfer techniques, it is possible (in experimental settings using animal models) to alter CNS gene expression and thereby the intrinsic generation of molecules involved in neural plasticity and neural regeneration, ultimately modifying behaviour.
In recent years, it has been possible to modify ethanol intake in animal models. Specifically, targeting the expression of the aldehyde dehydrogenase gene (ALDH2) led to significantly altered alcohol-drinking behaviour. Reduction of p11, a serotonin receptor binding protein, in the nucleus accumbens led to depression-like behaviour in rodents, while restoration of p11 gene expression in this anatomical area reversed the behaviour.
Recently, it was also shown that gene transfer of CBP (CREB (c-AMP response element binding protein) binding protein) improves cognitive deficits in an animal model of Alzheimer's dementia via increased expression of BDNF (brain-derived neurotrophic factor). The same authors showed in this study that accumulation of amyloid-β (Aβ) interfered with CREB activity, which is physiologically involved in memory formation.
In another study, it was shown that Aβ deposition and plaque formation can be reduced by sustained expression of the neprilysin (an endopeptidase) gene, which also led to improvements at the behavioural (i.e. cognitive) level.
Similarly, intracerebral gene transfer of ECE (endothelin-converting enzyme), via a virus vector stereotactically injected into the right anterior cortex and hippocampus, has also been shown to reduce Aβ deposits in a transgenic mouse model of Alzheimer's dementia.
There is also ongoing research on genoeconomics, a protoscience based on the idea that a person's financial behavior can be traced to their DNA and that genes are related to economic behavior. So far, the results have been inconclusive, though some minor correlations have been identified.
Some studies show that our genes may affect some of our behaviors: some genes may be associated with our tendency toward stagnation, while others may be responsible for our bad habits. For example, the monoamine oxidase A (MAOA) gene affects the breakdown of neurotransmitters such as serotonin, epinephrine and dopamine, thereby regulating their levels. This can influence whether we hold back or make quick decisions in a given situation, which can lead to poor choices in potentially bad circumstances. Some research has observed mood states such as aggression, irritability, and altered feelings of compassion in carriers of certain MAOA variants. The gene can be passed on genetically from parents, and changes can also develop later for epigenetic reasons: children growing up in adverse environments tend to imitate what they see from their parents, and may go on to exhibit bad habits or behaviors such as irritability and aggression in the future.
Military
In 2022, the People's Liberation Army Academy of Military Sciences reported a notable experiment where military scientists inserted a gene from the tardigrade into human embryonic stem cells. This experiment aimed to explore the potential enhancement of soldiers' resistance to acute radiation syndrome, thereby increasing their ability to survive nuclear fallout. This development reflects the intersection of genetic engineering and military research, with a focus on bioenhancement for military personnel.
CRISPR/Cas9 technologies have garnered attention for their potential applications in military contexts. Various projects are underway, including those focused on protecting soldiers from specific challenges. For instance, researchers are exploring the use of CRISPR/Cas9 to provide protection from frostbite, reduce stress levels, alleviate sleep deprivation, and enhance strength and endurance. The Defense Advanced Research Projects Agency (DARPA) is actively involved in researching and developing these technologies. One of their projects aims to engineer human cells to function as nutrient factories, potentially optimizing soldiers' performance and resilience in challenging environments.
Additionally, military researchers are conducting animal trials to explore the prophylactic treatment for long-term protection against chemical weapons of mass destruction. This involves using non-pathogenic AAV8 vectors to deliver a candidate catalytic bioscavenger, PON1-IF11, into the bloodstream of mice. These initiatives underscore the broader exploration of genetic and molecular interventions to enhance military capabilities and protect personnel from various threats.
In the realm of bioenhancement, concerns have been raised about the use of dietary supplements and other biomedical enhancements by military personnel. A significant portion of American special operations forces reportedly use dietary supplements to enhance performance, but the extent of the use of other bioenhancement methods, such as steroids, human growth hormone, and erythropoietin, remains unclear. The lack of completed safety and efficacy testing for these bioenhancements raises ethical and regulatory questions. This concern is not new, as issues surrounding the off-label use of products like pyridostigmine bromide and botulinum toxoid vaccine during the Gulf War, as well as the DoD's Anthrax Vaccine Immunization Program in 1998, have prompted discussions about the need for thorough FDA approval for specific military applications.
The intersection of genetic engineering, CRISPR/Cas9 technologies, and military research introduces complex ethical considerations regarding the potential augmentation of human capabilities for military purposes. Striking a balance between scientific advancements, ethical standards, and regulatory oversight remains crucial as these technologies continue to evolve.
Databases about potential modifications
George Church has compiled a list of potential genetic modifications based on scientific studies for possibly advantageous traits such as less need for sleep, cognition-related changes that protect against Alzheimer's disease, disease resistances, higher lean muscle mass and enhanced learning abilities along with some of the associated studies and potential negative effects.
See also
Biohappiness
Crossbreeding
Directed evolution (transhumanism)
Designer baby
Epigenetics
Genetic screening: allows detecting personal genetic weaknesses to be addressed
Genetic factors of addiction
Procreative beneficence
New eugenics
Life extension
References
Biological engineering
Body modification
Biotechnology
Molecular biology
Engineering disciplines
Consumption (economics)
Consumption is the act of using resources to satisfy current needs and wants. It is seen in contrast to investing, which is spending for acquisition of future income. Consumption is a major concept in economics and is also studied in many other social sciences.
Different schools of economists define consumption differently. According to mainstream economists, only the final purchase of newly produced goods and services by individuals for immediate use constitutes consumption, while other types of expenditure — in particular, fixed investment, intermediate consumption, and government spending — are placed in separate categories (see consumer choice). Other economists define consumption much more broadly, as the aggregate of all economic activity that does not entail the design, production and marketing of goods and services (e.g., the selection, adoption, use, disposal and recycling of goods and services).
Economists are particularly interested in the relationship between consumption and income, as modelled with the consumption function. A realist structural view can be found in consumption theory, which views the Fisherian intertemporal choice framework as the real structure of the consumption function. Unlike the passive view of structure embodied in inductive structural realism, economists define structure in terms of its invariance under intervention.
Behavioural economics, Keynesian consumption function
The Keynesian consumption function is also known as the absolute income hypothesis, as it only bases consumption on current income and ignores potential future income (or lack of). Criticism of this assumption led to the development of Milton Friedman's permanent income hypothesis and Franco Modigliani's life cycle hypothesis.
More recent theoretical approaches are based on behavioural economics and suggest that a number of behavioural principles can be taken as microeconomic foundations for a behaviourally-based aggregate consumption function.
Behavioural economics also adopts and explains several human behavioural traits within the constraint of the standard economic model. These include bounded rationality, bounded willpower, and bounded selfishness.
Bounded rationality was first proposed by Herbert Simon: people sometimes respond rationally to their own cognitive limits, aiming to minimize the sum of the costs of decision making and the costs of error. Bounded willpower refers to the fact that people often take actions that they know conflict with their long-term interests; for example, most smokers would rather not smoke, and many are willing to pay for a drug or program to help them quit. Finally, bounded self-interest refers to an essential fact about the utility function of a large share of people: under certain circumstances, they care about others, or act as if they do, even strangers.
Consumption and household production
Aggregate consumption is a component of aggregate demand.
Consumption is defined in part by comparison to production.
In the tradition of the Columbia School of Household Economics, also known as the New Home Economics, commercial consumption has to be analyzed in the context of household production. The opportunity cost of time affects the cost of home-produced substitutes and therefore demand for commercial goods and services. The elasticity of demand for consumption goods is also a function of who performs chores in households and how their spouses compensate them for opportunity costs of home production.
Consumption can also be measured in a variety of different ways such as energy in energy economics metrics.
Consumption as part of GDP
GDP (gross domestic product) is defined via this formula:

GDP = C + G + I + NX

where C stands for consumption, G stands for total government spending (including salaries), I stands for investment, and NX stands for net exports (exports minus imports).
In most countries consumption is the most important part of GDP. It usually ranges from 45% to 85% of GDP.
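As a toy illustration of the expenditure formula above, the following Python sketch computes GDP from invented component values and reports the consumption share (all figures are made up for the example):

```python
# Expenditure approach to GDP; all figures are invented for illustration.
consumption = 14.0    # C: household consumption
government = 3.5      # G: total government spending (including salaries)
investment = 4.0      # I: investment
net_exports = -0.5    # NX: exports minus imports

gdp = consumption + government + investment + net_exports
print(f"GDP = {gdp:.1f}")                              # GDP = 21.0
print(f"Consumption share = {consumption / gdp:.0%}")  # Consumption share = 67%
```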
Consumption in microeconomics
In microeconomics, consumer choice is a theory that assumes that people are rational consumers deciding which combinations of goods to buy based on their utility function (which goods provide them with the most use or happiness) and their budget constraint (which combinations of goods they can afford). Consumers try to maximize utility while staying within their budget constraint, or to minimize cost while reaching a target level of utility. A special case of this is the consumption-leisure model, in which a consumer chooses between a combination of leisure and working time, which is represented by income.
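As a minimal sketch of this utility-maximization problem, the Python snippet below searches a grid of affordable bundles for the one with the highest utility; the Cobb-Douglas utility function, prices, and budget are all invented for the illustration, not part of the theory's canonical statement:

```python
# Hypothetical consumer with Cobb-Douglas utility u(x, y) = x**0.5 * y**0.5.
# Prices and budget are invented; a grid search stands in for calculus.
px, py, budget = 2.0, 1.0, 100.0

def utility(x: float, y: float) -> float:
    return (x ** 0.5) * (y ** 0.5)

# Enumerate bundles that spend the whole budget and pick the best one.
best = max(
    ((x, (budget - px * x) / py) for x in range(0, 51)),
    key=lambda bundle: utility(*bundle),
)
print(best)  # (25, 50.0): half the budget goes to each good
```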
However, behavioural economics shows that consumers do not behave rationally and they are influenced by factors other than their utility from the given good. Those factors can be the popularity of a given good or its position in a supermarket.
Consumption in macroeconomics
In macroeconomics, in the theory of national accounts, consumption is not only the amount of money spent by households on goods and services from companies, but also the expenditures of government meant to provide citizens with things they would otherwise have to buy themselves, such as healthcare. Consumption is equal to income minus savings, and can be calculated via this formula:

C = C_0 + c·Y_d

where C_0 stands for autonomous consumption, the minimal consumption of a household that is always achieved, by either reducing the household's savings or by borrowing money; c is the marginal propensity to consume, with 0 < c < 1, which reveals how much of household income is spent on consumption; and Y_d is the disposable income of the household.
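A minimal sketch of this consumption function, with invented parameter values:

```python
def consumption(disposable_income: float,
                autonomous: float = 500.0,   # C_0: invented value
                mpc: float = 0.8) -> float:  # c: marginal propensity to consume
    """Keynesian consumption function C = C_0 + c * Y_d."""
    return autonomous + mpc * disposable_income

print(consumption(2000.0))  # 2100.0 = 500 + 0.8 * 2000
```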
Consumption as a measurement of growth
Consumption of electric energy is positively correlated with economic growth, as electric energy is one of the most important inputs of the economy: it is needed to produce goods and to provide services to consumers. There is a statistically significant, positive relationship between electrical energy consumption and economic growth. Electricity consumption reflects economic growth: as people's material standards gradually rise, electric energy consumption gradually increases as well. In Iran, for example, electricity consumption has increased along with economic growth since 1970. But as countries continue to develop, this effect weakens as they optimize their production by acquiring more energy-efficient equipment, or by transferring parts of their production to foreign nations where the cost of electrical energy is lower.
Determinant factors of consumption
The main factors affecting consumption studied by economists include:
Income: Economists consider the income level to be the most crucial factor affecting consumption. Therefore, the offered consumption functions often emphasize this variable. Keynes considers absolute income, Duesenberry considers relative income, and Friedman considers permanent income as factors that determine one's consumption.
Consumer expectations: Changes in the prices would change the real income and purchasing power of the consumer. If the consumer's expectations about future prices change, it can change his consumption decisions in the present period.
Consumer assets and wealth: These refer to assets in the form of cash, bank deposits, securities, as well as physical assets such as stocks of durable goods or real estate such as houses, land, etc. These factors can affect consumption; if the mentioned assets are sufficiently liquid, they will remain in reserve and can be used in emergencies.
Consumer credits: The increase in the consumer's credit and his credit transactions can allow the consumer to use his future income at present. As a result, it can lead to more consumption expenditure compared to the case that the only purchasing power is current income.
Interest rate: Fluctuations in interest rates can affect household consumption decisions. An increase in interest rates increases people's savings and, as a result, reduces their consumption expenditures.
Household size: Households' absolute consumption costs increase as the number of family members increases, although for some goods consumption increases relatively less than the number of household members. This happens due to economies of scale.
Social groups: Household consumption varies across social groups. For example, the consumption pattern of employers differs from that of workers. The smaller the gap between groups in a society, the more homogeneous the consumption pattern within the society.
Consumer taste: One of the important factors in shaping the consumption pattern is consumer taste. This factor, to some extent, can affect other factors such as income and price levels. On the other hand, society's culture has a significant impact on shaping the tastes of consumers.
Area: Consumption patterns are different in different geographical regions. For example, this pattern differs from urban and rural areas, crowded and sparsely populated areas, economically active and inactive areas, etc.
Consumption theories
Consumption theories began with John Maynard Keynes in 1936 and were developed by economists such as Friedman, Duesenberry, and Modigliani. The relationship between consumption and income was long a crucial concept in macroeconomic analysis.
Absolute Income Hypothesis
In his 1936 General Theory, Keynes introduced the consumption function. He believed that various factors influence consumption decisions, but that in the short run the most important factor is real income. According to the absolute income hypothesis, consumer spending on consumption goods and services is a linear function of current disposable income.
Relative Income Hypothesis
James Duesenberry proposed this model in 1949. This theory is based on two assumptions:
People's consumption behavior is not independent of each other. In other words, two people with the same income who occupy different positions within the income distribution will consume differently. One compares oneself with other people, and what has a significant impact on one's consumption is one's position among individuals and groups in society; therefore, a person only feels an improvement in their situation in terms of consumption if their consumption increases relative to the average level of society. This phenomenon is called the demonstration effect.
Consumer behavior over time is irreversible. This means that when income declines, consumer spending is sticky to the former level. After getting used to a level of consumption, a person shows resistance to reducing it and is unwilling to reduce that level of consumption. This phenomenon is called the ratchet effect.
Intertemporal consumption
The model of intertemporal consumption was first thought of by John Rae in 1830s and it was later expanded by Irving Fisher in 1930s in the book Theory of interest. This model describes how consumption is distributed over periods of life. In the basic model with 2 periods for example young and old age.
$C_1 = Y_1 - S$
and then
$C_2 = Y_2 + (1 + r)S$,
which combine into the intertemporal budget constraint
$C_1 + C_2 / (1 + r) = Y_1 + Y_2 / (1 + r)$,
where $C$ is the consumption in a given period, $Y$ is the income received in that period, $S$ is the saving carried from period 1 to period 2, $r$ is the interest rate, and the indexes 1, 2 stand for period 1 and period 2.
This model can be expanded to represent each year of a lifetime.
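A short numerical sketch may help make the two-period model concrete. The code below goes beyond the text in assuming a consumer who smooths consumption perfectly, choosing equal consumption in both periods; the function name and the example figures are illustrative.

```python
# Minimal sketch of the two-period intertemporal model described above,
# under the added assumption of perfect consumption smoothing (C1 = C2).

def smoothed_consumption(y1: float, y2: float, r: float) -> tuple[float, float, float]:
    """Return (C1, C2, S) subject to C1 + C2/(1+r) = Y1 + Y2/(1+r)."""
    lifetime_wealth = y1 + y2 / (1 + r)       # present value of income
    c1 = lifetime_wealth / (1 + 1 / (1 + r))  # C1 * (1 + 1/(1+r)) = wealth
    c2 = c1                                   # perfect smoothing assumption
    saving = y1 - c1                          # S > 0: saving, S < 0: borrowing
    return c1, c2, saving

# High income when young, none when old: the consumer saves in period 1.
c1, c2, s = smoothed_consumption(y1=100.0, y2=0.0, r=0.05)
print(round(c1, 2), round(c2, 2), round(s, 2))  # 51.22 51.22 48.78
```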
Permanent income hypothesis
The permanent income hypothesis was developed by Milton Friedman in the 1950s in his book A Theory of the Consumption Function. This theory divides income $Y$ into two components: transitory income $Y^T$ and permanent income $Y^P$, such that $Y = Y^P + Y^T$.
Changes in the two components have different impacts on consumption. If permanent income $Y^P$ changes, then consumption changes accordingly by $\Delta C = c \, \Delta Y^P$, where $c$ is known as the marginal propensity to consume. If we expect part of income to be saved or invested, then $c < 1$; otherwise $c = 1$. On the other hand, if transitory income $Y^T$ changes (for example as a result of winning the lottery), then this increase in income is distributed over the remaining lifespan. For example, winning $1000 with the expectation of living for 10 more years will result in a yearly increase in consumption of $100.
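As a toy illustration of these different responses, consider the sketch below; the marginal propensity to consume of 0.9 and the 10-year remaining lifespan are assumed example values, not figures from Friedman.

```python
# Toy illustration of the permanent/transitory decomposition above.

def consumption_response(delta_permanent: float, delta_transitory: float,
                         mpc: float = 0.9, years_remaining: int = 10) -> float:
    """Yearly change in consumption given changes in the two income parts."""
    from_permanent = mpc * delta_permanent                 # scaled by the MPC
    from_transitory = delta_transitory / years_remaining   # spread over lifespan
    return from_permanent + from_transitory

# A $1000 lottery win with 10 years left to live raises yearly consumption
# by $100, as in the text; a $1000 rise in permanent income raises it by $900.
print(consumption_response(0, 1000))   # 100.0
print(consumption_response(1000, 0))   # 900.0
```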
Life-cycle hypothesis
The life-cycle hypothesis was published by Franco Modigliani in 1966. It describes how people make consumption decisions based on their past income, current income, and expected future income, as they tend to spread their consumption over their lifetime. In its basic form it is:
$C = (W + R \cdot Y) / T$,
where $C$ is the consumption in a given year, $T$ is the number of years the individual is going to live for, $R$ is the number of remaining years the individual will be working, $Y$ is the average wage the individual will be paid over his or her remaining working years, and $W$ is the wealth he or she has already accumulated.
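The basic formula translates directly into code; the sketch below simply transcribes $C = (W + R \cdot Y) / T$ with illustrative numbers.

```python
# Direct transcription of the life-cycle formula C = (W + R*Y) / T;
# variable names follow the definitions in the text.

def life_cycle_consumption(wealth: float, remaining_work_years: float,
                           average_wage: float, remaining_life_years: float) -> float:
    """Planned yearly consumption under the basic life-cycle hypothesis."""
    return (wealth + remaining_work_years * average_wage) / remaining_life_years

# Someone with 50,000 saved, 30 working years left at an average wage of
# 40,000, and 50 years left to live plans to consume 25,000 a year.
print(life_cycle_consumption(50_000, 30, 40_000, 50))  # 25000.0
```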
Access-based consumption
The term "access-based consumption" refers to the increasing extent to which people seek the experience of temporarily accessing goods rather than owning them, thus there are opportunities for a "sharing economy" to develop, although Bardhi and Eckhardt outline differences between "access" and "sharing". Social theorist Jeremy Rifkin put forward the idea in his 2000 publication The Age of Access.
Old-age spending
Spending the Kids' Inheritance (originally the title of a book on the subject by Annie Hulley) and the acronyms SKI and SKI'ing refer to the growing number of older people in Western society spending their money on travel, cars and property, in contrast to previous generations who tended to leave that money to their children. According to a 2017 study conducted in the USA, 20% of married people consider leaving an inheritance a priority, while 34% do not. About 14 percent of unmarried Americans plan to spend their retirement money to improve their lives rather than saving it to leave as an inheritance to their children. In addition, nearly three in ten married Americans (28 percent) have downsized or plan to downsize their home after retirement.
Die Broke (from the book Die Broke: A Radical Four-Part Financial Plan by Stephen Pollan and Mark Levine) is a similar idea.
See also
Aggregate demand
Consumer debt
Classification of Individual Consumption by Purpose (COICOP)
Consumer choice
Consumerism
Life cycle hypothesis
Measures of national income and output
Overconsumption
Permanent income hypothesis
List of largest consumer markets
References
Further reading
External links
An essay examining the strengths and weaknesses of Keynes's theory of consumption
Consumption
Macroeconomic aggregates | 0.786133 | 0.994389 | 0.781721 |
Well-being | Well-being, or wellbeing, also known as wellness, prudential value, prosperity or quality of life, is what is intrinsically valuable relative to someone. Thus the well-being of a person is what is ultimately good for this person, what is in this person's self-interest. Well-being can refer to both positive and negative well-being. In its positive sense, it is sometimes contrasted with ill-being as its opposite. The term "subjective well-being" denotes how people experience and evaluate their lives, usually measured in relation to self-reported well-being obtained through questionnaires.
Well-being has traditionally been treated as a variable ranging from none to a high degree of well-being. This usage has more recently been widened to include a negative aspect as well. To understand how environmental variables along a route affect well-being during walking or cycling, the term "environmental unwellbeing" has been coined.
Overview
Different forms of well-being, such as mental, physical, economic, or emotional, are often closely interlinked. For example, improved physical well-being (e.g., by reducing or ceasing an addiction) is associated with improved emotional well-being. Better economic well-being (e.g., possessing more wealth) likewise tends to be associated with better emotional well-being, even in adverse situations such as the COVID-19 pandemic. Well-being plays a central role in ethics, since what a person ought to do depends, at least to some degree, on what would make someone's life go better or worse. According to welfarism, there are no other values besides well-being.
The terms well-being, pleasure, and happiness are used in overlapping ways in everyday language, but their meanings tend to come apart in technical contexts like philosophy or psychology. Pleasure refers to experience that feels good and is usually seen as one constituent of well-being, but there may be other factors, such as health, virtue, knowledge, or the fulfillment of desires. Happiness, for example, often seen either as "the individual's balance of pleasant over unpleasant experience" or as the state of being satisfied with one's life as a whole, is also commonly taken to be a constituent of well-being.
Theories of well-being try to determine what is essential to all forms of well-being. Hedonistic theories equate well-being with the balance of pleasure over pain. Desire theories hold that well-being consists in desire-satisfaction: the higher the number of satisfied desires, the higher the well-being. Objective list theories state that a person's well-being depends on a list of factors that may include both subjective and objective elements.
Well-being also depends on endogenous molecules that affect feelings of happiness, such as dopamine, serotonin, endorphins, oxytocin, and cortisol. These "well-being-related markers" or "well-being biomarkers" play an important role in the regulation of an organism's metabolism, and when they are not working in proper order, malfunction can result.
Well-being is the central subject of positive psychology, which aims to discover the factors that contribute to human well-being. Martin Seligman, for example, suggests that these factors consist in having positive emotions, being engaged in an activity, having good relationships with other people, finding meaning in one's life and a sense of accomplishment in the pursuit of one's goals.
The Oxford English Dictionary traces the term well-being to a 16th-century calque of the Italian concept benessere.
Theories of well-being
The well-being of a person is what is good for the person. Theories of well-being try to determine which features of a state are responsible for this state contributing to the person's well-being. Theories of well-being are often classified into hedonistic theories, desire theories, and objective list theories. Hedonistic theories and desire theories are subjective theories. According to them, the degree of well-being of a person depends on the subjective mental states and attitudes of this person. Objective list theories, on the other hand, allow that things can benefit a person independent of that person's subjective attitudes towards these things.
For hedonistic theories, the relevant mental states are experiences of pleasure and pain. One example of such an account can be found in Jeremy Bentham's works, where it is suggested that the value of experiences depends only on their duration and the intensity of pleasure or pain present in them. Various counterexamples have been formulated against this view. They usually involve cases where a lower aggregate of pleasure is intuitively preferable, for example, the intuitions that intellectual or aesthetic pleasures are superior to sensory pleasures or that it would be unwise to enter Robert Nozick's experience machine. These counterexamples are not necessarily conclusive, yet the proponent of hedonistic theories faces the challenge of explaining why common sense misleads us in the problematic cases.
Desire theories can avoid some of the problems of hedonistic theories by holding that well-being consists in desire-satisfaction: the higher the number of satisfied desires, the higher the well-being. One problem for some versions of desire theory is that not all desires are good: some desires may even have terrible consequences for the agent. Desire theorists have tried to avoid this objection by holding that what matters are not actual desires but the desires the agent would have if she were fully informed. In this way, desire theories can combine what is plausible about subjective theories of well-being with the lack of personal bias found in objective list theories.
Objective list theories state that a person's well-being depends on many different basic objective goods. These goods often include subjective factors like a pleasure-pain balance or desire-satisfaction besides factors that are independent of the subject's attitudes, like friendship or having virtues. Objective list theories face the problem of explaining how subject-independent factors can determine a person's well-being even if this person does not care about these factors. Another objection concerns the selection of the specific factors included. Different theorists have provided very different combinations of basic objective goods. These groupings seem to constitute arbitrary selections unless a clear criterion can be provided for why all and only the items within them are relevant factors.
Scientific approaches
Three subdisciplines in psychology are critical for the study of psychological well-being:
Developmental psychology, in which psychological well-being may be analyzed in terms of a pattern of growth across the lifespan.
Personality psychology, in which it is possible to apply Maslow's concept of self-actualization, Rogers' concept of the fully functioning person, Jung's concept of individuation, and Allport's concept of maturity to account for psychological well-being.
Clinical psychology, in which well-being consists of biological, psychological and social needs being met.
According to Corey Keyes' five-component model, social well-being is constituted by the following factors:
social integration,
social contribution,
social coherence,
social actualization,
social acceptance.
There are two approaches typically taken to understand psychological well-being:
Distinguishing positive and negative affect, and defining optimal psychological well-being and happiness as a balance between the two.
Emphasizing life satisfaction as the key indicator of psychological well-being.
According to Guttman and Levy (1982) well-being is "...a special case of attitude". This approach serves two purposes in the study of well-being: "developing and testing a [systematic] theory for the structure of [interrelationships] among varieties of well-being, and integration of well-being theory with the ongoing cumulative theory development in the fields of attitude of related research".
Models and components of well-being
Many different models have been developed.
Causal network models (and ill-being)
Philosopher Michael Bishop developed a causal network account of well-being in The Good Life: Unifying the Philosophy and Psychology of Well-being. The causal network account holds that well-being is the product of many factors—feelings, beliefs, motivations, habits, resources, etc.—that are causally related in ways that explain increases in well-being or ill-being. More recently causal network theories of ill-being have been applied to depression and digital technology. Network approaches have also been applied to mental health more generally.
Diener: tripartite model of subjective well-being
Diener's tripartite model of subjective well-being is one of the most comprehensive models of well-being in psychology. It was synthesized by Diener in 1984, positing "three distinct but often related components of wellbeing: frequent positive affect, infrequent negative affect, and cognitive evaluations such as life satisfaction".
Cognitive, affective and contextual factors contribute to subjective well-being. According to Diener and Suh, subjective well-being is "...based on the idea that how each person thinks and feels about his or her life is important".
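As a toy illustration only: researchers often operationalize the three components by combining an affect-balance score (mean positive affect minus mean negative affect) with a life-satisfaction score. The scales, items, and simple additive rule in the sketch below are assumptions made for the example, not a published instrument.

```python
# Toy scoring sketch for the tripartite model; not a validated measure.
from statistics import mean

def subjective_well_being(positive_affect: list[float],
                          negative_affect: list[float],
                          life_satisfaction: list[float]) -> float:
    """Affect balance plus mean life satisfaction, items rated 1-7."""
    affect_balance = mean(positive_affect) - mean(negative_affect)
    return affect_balance + mean(life_satisfaction)

# Frequent positive affect, infrequent negative affect, high satisfaction:
print(subjective_well_being([6, 5, 6], [2, 3, 2], [5, 6, 5]))  # about 8.67
```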
Six-factor model of psychological well-being
Carol Ryff's multidimensional model of psychological well-being has a philosophical foundation based on Aristotle's eudaimonia. It postulates six factors which are key for well-being, each with smaller subdivisions:
Self-acceptance
Personal growth
Purpose in life
Environmental mastery
Autonomy
Positive relations with others
Corey Keyes: flourishing
According to Corey Keyes, who collaborated with Carol Ryff, mental well-being has three components, namely emotional or subjective well-being (also called hedonic well-being), psychological well-being, and social well-being (together also called eudaimonic well-being). Emotional well-being concerns the subjective aspects of well-being, in concreto feeling well, whereas psychological and social well-being concern skills, abilities, and psychological and social functioning.
Keyes' model of mental well-being has received extensive empirical support across cultures.
Seligman: positive psychology
Well-being is a central concept in positive psychology. Positive psychology is concerned with eudaimonia, "the good life", reflection about what holds the greatest value in life – the factors that contribute the most to a well-lived and fulfilling life. While not attempting a strict definition of the good life, positive psychologists agree that one must live a happy, engaged, and meaningful life in order to experience "the good life". Martin Seligman referred to "the good life" as "using your signature strengths every day to produce authentic happiness and abundant gratification".
PERMA-theory
In Flourish (2011), Seligman argued that the "meaningful life" can be analyzed into five different categories. The resulting acronym is PERMA: Positive emotions, Engagement, Relationships, Meaning and purpose, and Accomplishments. It is a mnemonic for the five elements of Martin Seligman's well-being theory:
Positive emotions include a wide range of feelings, not just happiness and joy. Included are emotions like excitement, satisfaction, pride and awe, amongst others. These emotions are frequently seen as connected to positive outcomes, such as longer life and healthier social relationships.
Engagement refers to involvement in activities that draw and build upon one's interests. Mihaly Csikszentmihalyi explains true engagement as flow, a feeling of intensity that leads to a sense of ecstasy and clarity. The task being done needs to call upon higher skill and be a bit difficult and challenging, yet still possible. Engagement involves passion for and concentration on the task at hand and is assessed subjectively as to whether the person engaged was completely absorbed, losing self-consciousness.
Relationships are all-important in fueling positive emotions, whether they are work-related, familial, romantic, or platonic. As Christopher Peterson puts it simply, "Other people matter." Humans receive, share, and spread positivity to others through relationships. They are important not only in bad times, but in good times as well. In fact, relationships can be strengthened by reacting to one another positively. It is typical that most positive things take place in the presence of other people.
Meaning is also known as purpose, and prompts the question of "why". Discovering and figuring out a clear "why" puts everything into context from work to relationships to other parts of life. Finding meaning is learning that there is something greater than one's self. Despite potential challenges, working with meaning drives people to continue striving for a desirable goal.
Accomplishments are the pursuit of success and mastery. Unlike the other parts of PERMA, they are sometimes pursued even when accomplishments do not result in positive emotions, meaning, or relationships. That being noted, accomplishments can activate the other elements of PERMA, such as pride, under positive emotion. Accomplishments can be individual or community-based, fun- or work-based.
Biopsychosocial model of well-being
The biomedical approach was challenged by George Engel in 1977, as it gave little importance to factors like beliefs, upbringing, and trauma, and put the main emphasis on biology.
The biopsychosocial model replaces the biomedical model of well-being. It emphasises the modifiable components needed for an individual to have a sense of wellbeing. These are:
healthy environments (physical, social, cultural, and economic)
developmental competencies (healthy identity, emotional and behavioural regulation, interpersonal skills, and problem-solving skills)
sense of belonging
healthy behaviours (sleep, nutrition, exercise, pleasurable and mastery activities)
healthy coping
resilience (recognition of one's innate resilience)
treatment of illness (early evidence-based treatments of physical and psychological illnesses)
UK Office for National Statistics (ONS) definition
The UK ONS defines wellbeing as having 10 broad dimensions which have been shown to matter most to people in the UK, as identified through a national debate. The dimensions are:
the natural environment,
personal well-being,
our relationships,
health,
what we do,
where we live,
personal finance,
the economy,
education and skills, and
governance.
Personal well-being is a particularly important dimension which we define as how satisfied we are with our lives, our sense that what we do in life is worthwhile, our day to day emotional experiences (happiness and anxiety) and our wider mental wellbeing. The ONS then introduced four questions pertaining to wellbeing in their 2011 national survey of the UK population, relating to evaluative well-being, eudemonic well-being, and positive and negative affect. They later switched to referring to the construct being measured as "personal well-being".
Welfarism
Welfarism is a theory of value based on well-being. It states that well-being is the only thing that has intrinsic value, i.e. that is good in itself and not just good as a means to something else. On this view, the value of a situation or whether one alternative is better than another only depends on the degrees of well-being of each entity affected. All other factors are relevant to value only to the extent that they have an impact on someone's well-being. The well-being in question is usually not restricted to human well-being but includes animal well-being as well.
Different versions of welfarism offer different interpretations of the exact relation between well-being and value. Pure welfarists offer the simplest approach by holding that only the overall well-being matters, for example, as the sum total of everyone's well-being. This position has been criticized in various ways. On the one hand, it has been argued that some forms of well-being, like sensory pleasures, are less valuable than other forms of well-being, like intellectual pleasures. On the other hand, certain intuitions indicate that what matters is not just the sum total but also how the individual degrees of well-being are distributed. There is a tendency to prefer equal distributions where everyone has roughly the same degree instead of unequal distributions where there is a great divide between happy and unhappy people, even if the overall well-being is the same. Another intuition concerning the distribution is that people who deserve well-being, like the morally upright, should enjoy higher degrees of well-being than the undeserving.
These criticisms are addressed by another version of welfarism: impure welfarism. Impure welfarists agree with pure welfarists that all that matters is well-being. But they allow aspects of well-being other than its overall degree to have an impact on value, e.g. how well-being is distributed. Pure welfarists sometimes argue against this approach since it seems to stray away from the core principle of welfarism: that only well-being is intrinsically valuable. But the distribution of well-being is a relation between entities and therefore not intrinsic to any of them.
Some objections based on counterexamples are directed against all forms of welfarism. They often focus on the idea that there are things other than well-being that have intrinsic value. Putative examples include the value of beauty, virtue, or justice. Such arguments are often rejected by welfarists holding that the cited things would not be valuable if they had no relation to well-being. This is often extended to a positive argument in favor of welfarism based on the claim that nothing would be good or bad in a world without sentient beings. In this sense, welfarists may agree that the cited examples are valuable in some form but disagree that they are intrinsically valuable.
Some authors see welfarism as including the ethical thesis that morality fundamentally depends on well-being. On this view, welfarism is also committed to the consequentialist claim that actions, policies, or rules should be evaluated based on how their consequences affect everyone's well-being.
Global studies
Research on positive psychology, well-being, eudaimonia and happiness, and the theories of Diener, Ryff, Keyes and Seligman covers a broad range of levels and topics, including "the biological, personal, relational, institutional, cultural, and global dimensions of life". The World Happiness Report series provides annual updates on the global status of subjective well-being. A global study using data from 166 nations provided a country ranking of psycho-social well-being. The latter study showed that subjective well-being and psycho-social well-being (i.e. eudaimonia) measures capture distinct constructs and are both needed for a comprehensive understanding of mental well-being.
Gallup's wellbeing research finds that 33% of workers globally are thriving, 55% struggling and 11% suffering.
Well-being as a political goal
Focusing on wellbeing as a political goal involves prioritizing citizens' overall quality of life, encompassing factors like health, education, and social harmony. It emphasizes policies that enhance happiness and fulfillment for a more holistic approach to governance. Both the UK and New Zealand have begun to focus on population well-being within their political aims. The United States has taken actions designed to improve the health of citizens regarding issues with the COVID-19 pandemic and racism.
See also
Notes
References
Sources
Further reading
Routledge Handbook of the Philosophy of Well-Being
External links
Theories of Well-Being, in William MacAskill & Richard Yetter-Chappell (2021), Introduction to Utilitarianism.
PhilPapers: 'Well-being', 'Desire-satisfaction accounts', 'Objective accounts', 'Hedonistic accounts', 'Perfectionist accounts'
EuroHealthNet Policy Paper: 'Achieving Wellbeing through EU decision-making processes'
16th-century neologisms
Welfare economics
Quality of life
Positive psychology | 0.784038 | 0.996925 | 0.781627 |
Pedagogy | Pedagogy, most commonly understood as the approach to teaching, is the theory and practice of learning, and how this process influences, and is influenced by, the social, political, and psychological development of learners. Pedagogy, taken as an academic discipline, is the study of how knowledge and skills are imparted in an educational context, and it considers the interactions that take place during learning. Both the theory and practice of pedagogy vary greatly as they reflect different social, political, and cultural contexts.
Pedagogy is often described as the act of teaching. The pedagogy adopted by teachers shapes their actions, judgments, and teaching strategies by taking into consideration theories of learning, understandings of students and their needs, and the backgrounds and interests of individual students. Its aims may range from furthering liberal education (the general development of human potential) to the narrower specifics of vocational education (the imparting and acquisition of specific skills).
Instructive strategies are governed by the pupil's background knowledge and experience, situation and environment, as well as learning goals set by the student and teacher. One example would be the Socratic method.
Definition
The meaning of the term "pedagogy" is often contested and a great variety of definitions has been suggested. The most common approach is to define it as the study or science of teaching methods. In this sense, it is the methodology of education. As a methodology, it investigates the ways and practices that can be used to realize the aims of education. The main aim is often identified with the transmission of knowledge. Other aims include fostering skills and character traits. They include helping the student develop their intellectual and social abilities as well as psychomotor and affective learning, which are about developing practical skills and adequate emotional dispositions, respectively.
However, not everyone agrees with this characterization of pedagogy and some see it less as a science and more as an art or a craft. This characterization puts more emphasis on the practical aspect of pedagogy, which may involve various forms of "tacit knowledge that is hard to put into words". This approach is often based on the idea that the most central aspects of teaching are only acquired by practice and cannot be easily codified through scientific inquiry. In this regard, pedagogy is concerned with "observing and refining one's skill as a teacher". A more inclusive definition combines these two characterizations and sees pedagogy both as the practice of teaching and the discourse and study of teaching methods. Some theorists give an even wider definition by including considerations such as "the development of health and bodily fitness, social and moral welfare, ethics and aesthetics". Due to this variety of meanings, it is sometimes suggested that pedagogy is a "catch-all term" associated with various issues of teaching and learning. In this sense, it lacks a precise definition.
According to Patricia Murphy, a detailed reflection on the meaning of the term "pedagogy" is important nonetheless since different theorists often use it in very different ways. In some cases, non-trivial assumptions about the nature of learning are even included in its definition. Pedagogy is often specifically understood in relation to school education. But in a wider sense, it includes all forms of education, both inside and outside schools. In this wide sense, it is concerned with the process of teaching taking place between two parties: teachers and learners. The teacher's goal is to bring about certain experiences in the learner to foster their understanding of the subject matter to be taught. Pedagogy is interested in the forms and methods used to convey this understanding.
Pedagogy is closely related to didactics but there are some differences. Usually, didactics is seen as the more limited term that refers mainly to the teacher's role and activities, i.e. how their behavior is most beneficial to the process of education. This is one central aspect of pedagogy besides other aspects that consider the learner's perspective as well. In this wider sense, pedagogy focuses on "any conscious activity by one person designed to enhance learning in another".
The word pedagogy is a derivative of the Greek παιδαγωγία (paidagōgia), from παιδαγωγός (paidagōgos), itself a synthesis of ἄγω (ágō), "I lead", and παῖς (país, genitive παιδός, paidós) "boy, child": hence, "attendance on boys, to lead a child". It is pronounced variously, as /ˈpɛdəɡɒdʒi/, /ˈpɛdəɡoʊdʒi/, or /ˈpɛdəɡɒɡi/. The related word pedagogue has had a negative connotation of pedantry, dating from at least the 1650s; a related expression is educational theorist. The term "pedagogy" is also found in the English discourse, but it is more broadly discussed in other European languages, such as French and German.
History
Western
In the Western world, pedagogy is associated with the Greek tradition of philosophical dialogue, particularly the Socratic method of inquiry. A more general account of its development holds that it emerged from the active concept of humanity as distinct from a fatalistic one and that history and human destiny are results of human actions. This idea germinated in ancient Greece and was further developed during the Renaissance, the Reformation, and the Age of Enlightenment.
Socrates
Socrates (470 – 399 BCE) employed the Socratic method while engaging with a student or peer. This style does not impart knowledge, but rather tries to strengthen the student's logic by revealing the conclusions of the student's statements as erroneous or supported. The instructor in this learning environment recognizes the learners' need to think for themselves to facilitate their ability to think about problems and issues. It was first described by Plato in the Socratic Dialogues.
Plato
Plato (428/427 or 424/423 – 348/347 BCE) describes a system of education in The Republic (375 BCE) in which individual and family rights are sacrificed to the State. He describes three castes: one to learn a trade; one to learn literary and aesthetic ideas; and one to be trained in literary, aesthetic, scientific, and philosophical ideas. Plato saw education as a fulfillment of the soul, and by fulfilling the soul the body subsequently benefited. Plato viewed physical education for all as a necessity to a stable society.
Aristotle
Aristotle (384–322 BCE) composed a treatise, On Education, which was subsequently lost. However, he renounced Plato's view in subsequent works, advocating for a common education mandated for all citizens by the State. Only a small minority of people residing within Greek city-states at this time were considered citizens, so Aristotle still limited education to a minority within Greece. Aristotle advocated that physical education should precede intellectual studies.
Quintilian
Marcus Fabius Quintilianus (35 – 100 CE) published his pedagogy in Institutio Oratoria (95 CE). He describes education as a gradual affair, and places certain responsibilities on the teacher. He advocates for rhetorical, grammatical, scientific, and philosophical education.
Tertullian
Quintus Septimius Florens Tertullianus (155 – 240 CE) was a Christian scholar who rejected all pagan education, insisting this was "a road to the false and arrogant wisdom of ancient philosophers".
Jerome
Saint Jerome (347 – 30 September 420 CE), or Saint Hieronymus, was a Christian scholar who detailed his pedagogy of girls in numerous letters throughout his life. He did not believe the body to be in need of training, and thus advocated fasting and mortification to subdue the body. He recommends only the Bible as reading material, with limited exposure, and cautions against musical instruments. He advocates against letting girls interact with society, and of having "affections for one of her companions than for others." He does recommend teaching the alphabet with ivory blocks instead of by memorization so that "She will thus learn by playing." He is an advocate of positive reinforcement, stating "Do not chide her for the difficulty she may have in learning. On the contrary, encourage her by commendation..."
Jean Gerson
Jean Charlier de Gerson (13 December 1363 – 12 July 1429), the Chancellor of the University of Paris, wrote in De parvulis ad Christum trahendis "Little children are more easily managed by caresses than fear," supporting a more gentle approach than his Christian predecessors. He also states "Above all else, let the teacher make an effort to be a father to his pupils." He is considered a precursor of Fenelon.
John Amos Comenius
John Amos Comenius (28 March 1592 – 15 November 1670) is considered the father of modern education.
Johann Pestalozzi
Johann Heinrich Pestalozzi (January 12, 1746 – February 17, 1827) founded several educational institutions in both German- and French-speaking regions of Switzerland and wrote many works explaining his revolutionary modern principles of education. His motto was "Learning by head, hand and heart".
Johann Herbart
The educational philosophy and pedagogy of Johann Friedrich Herbart (4 May 1776 – 14 August 1841) highlighted the correlation between personal development and the resulting benefits to society. In other words, Herbart proposed that humans become fulfilled once they establish themselves as productive citizens. Herbartianism refers to the movement underpinned by Herbart's theoretical perspectives. Referring to the teaching process, Herbart suggested five steps as crucial components. Specifically, these five steps include: preparation, presentation, association, generalization, and application. Herbart suggests that pedagogy relates to having assumptions as an educator and a specific set of abilities with a deliberate end goal in mind.
John Dewey
The pedagogy of John Dewey (20 October 1859 – 1 June 1952) is presented in several works, including My Pedagogic Creed (1897), The School and Society (1900), The Child and the Curriculum (1902), Democracy and Education (1916), Schools of To-morrow (1915) with Evelyn Dewey, and Experience and Education (1938). In his eyes, the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good (My Pedagogic Creed, Dewey, 1897). Dewey advocated for an educational structure that strikes a balance between delivering knowledge while also taking into account the interests and experiences of the student (The Child and the Curriculum, Dewey, 1902). Dewey not only re-imagined the way that the learning process should take place but also the role that the teacher should play within that process. He envisioned a divergence from the mastery of a pre-selected set of skills to the cultivation of autonomy and critical-thinking within the teacher and student alike.
Eastern
Confucius
Confucius (551–479 BCE) stated that authority has the responsibility to provide oral and written instruction to the people under its rule, and "should do them good in every possible way." One of the deepest teachings of Confucius may have been the superiority of personal exemplification over explicit rules of behavior. His moral teachings emphasized self-cultivation, emulation of moral exemplars, and the attainment of skilled judgement rather than knowledge of rules. Other relevant practices in the Confucian teaching tradition include the Rite and its notion of body-knowledge, as well as the Confucian understanding of the self, which has a broader conceptualization than the Western individual self.
Pedagogical considerations
Teaching method
Hidden curriculum
A hidden curriculum refers to extra educational activities or side effect of an education, "[lessons] which are learned but not openly intended" such as the transmission of norms, values, and beliefs conveyed in the classroom and the social environment.
Learning space
Learning space or learning setting refers to a physical setting for a learning environment, a place in which teaching and learning occur. The term is commonly used as a more definitive alternative to "classroom", but it may also refer to an indoor or outdoor location, either actual or virtual. Learning spaces are highly diverse in use, learning styles, configuration, location, and educational institution. They support a variety of pedagogies, including quiet study, passive or active learning, kinesthetic or physical learning, vocational learning, experiential learning, and others.
Learning theories
Learning theories are conceptual frameworks describing how knowledge is absorbed, processed, and retained during learning. Cognitive, emotional, and environmental influences, as well as prior experience, all play a part in how understanding, or a world view, is acquired or changed and knowledge and skills retained.
Distance learning
Distance education or long-distance learning is the education of students who may not always be physically present at a school. Traditionally, this usually involved correspondence courses wherein the student corresponded with the school via post. Today it involves online education. Courses in which a majority (51 percent or more) of instruction is delivered at a distance are classified as hybrid, blended, or 100% distance learning. Massive open online courses (MOOCs), offering large-scale interactive participation and open access through the World Wide Web or other network technologies, are recent developments in distance education. A number of other terms (distributed learning, e-learning, online learning, etc.) are used roughly synonymously with distance education.
Teaching resource adaptation
Teaching resources should be adapted to suit the teaching and learning environment, national and local cultural norms, and different types of learners, so that they are accessible to all. Key adaptations in teaching resources include:
Classroom constraints
Large class size – consider smaller groups or have discussions in pairs;
Time available – shorten or lengthen the duration of activities;
Modifying materials needed – find, make or substitute required materials;
Space requirements – reorganize classroom, use a larger space, move indoors or outdoors.
Cultural familiarity
Change references to names, food and items to make them more familiar;
Substitute local texts or art (folklore, stories, songs, games, artwork and proverbs).
Local relevance
Use the names and processes for local institutions such as courts;
Be sensitive of local behavior norms (e.g. for genders and ages);
Ensure content is sensitive to the degree of rule of law in society (trust in authorities and institutions).
Inclusivity for diverse students
Appropriate reading level(s) of texts for student use;
Activities for different learning styles;
Accommodation for students with special educational needs;
Sensitivity to cultural, ethnic and linguistic diversity;
Sensitivity to students' socioeconomic status.
Pedagogical approaches
Evidence-based
Dialogic learning
Dialogic learning is learning that takes place through dialogue. It is typically the result of egalitarian dialogue; in other words, the consequence of a dialogue in which different people provide arguments based on validity claims and not on power claims.
Student-centered learning
Student-centered learning, also known as learner-centered education, broadly encompasses methods of teaching that shift the focus of instruction from the teacher to the student. In original usage, student-centered learning aims to develop learner autonomy and independence by putting responsibility for the learning path in the hands of students. Student-centered instruction focuses on skills and practices that enable lifelong learning and independent problem-solving.
Critical pedagogy
Critical pedagogy applies critical theory to pedagogy and asserts that educational practices are contested and shaped by history, that schools are not politically neutral spaces, and that teaching is political. Decisions regarding the curriculum, disciplinary practices, student testing, textbook selection, the language used by the teacher, and more can empower or disempower students. It asserts that educational practices favor some students over others and some practices harm all students. It also asserts that educational practices often favor some voices and perspectives while marginalizing or ignoring others.
Academic degrees
The academic degree Ped. D., Doctor of Pedagogy, is awarded honorarily by some US universities to distinguished teachers (in the US and UK, earned degrees within the instructive field are classified as an Ed.D., Doctor of Education, or a Ph.D., Doctor of Philosophy). The term is also used to denote an emphasis in education as a specialty in a field (for instance, a Doctor of Music degree in piano pedagogy).
Pedagogues around the world
The education of pedagogues, and their role in society, varies greatly from culture to culture.
Belgium
Important pedagogues in Belgium are Jan Masschelein and Maarten Simons (Catholic University of Leuven). According to these scholars, schools nowadays are often dismissed as outdated or ineffective. Deschoolers even argue that schools rest on the false premise that schools are necessary for learning, holding that people learn faster or better outside the classroom. Others criticize the fact that some teachers stand before a classroom with only six weeks of teacher education. Against this background, Masschelein and Simons propose to look at the school from a different point of view. Their educational morphology approaches the school as a particular scholastic 'form of gathering'. What the authors mean by that is the following: school is a particular time-space-matter arrangement, including concrete architectures, technologies, practices and figures. This arrangement "deals in a specific way with the new generation, allows for a particular relation to the world, and for a particular experience of potentiality and of commonality (of making things public)".
Masschelein and Simons' most famous work is the book "Looking after school: a critical analysis of personalisation in Education". It takes a critical look at the dominant discourse of today's education, in which education is seen through a socio-economic lens: education is aimed at mobilising talents and competencies (p. 23). This view appears in multiple texts from governing bodies, in Belgium and Europe. One of the most significant examples is quoted on page 23: "Education and training can only contribute to growth and job-creation if learning is focused on the knowledge, skills and competences to be acquired by students (learning outcomes) through the learning process, rather than on completing a specific stage or on time spent in school." (European Commission, 2012, p. 7) According to Masschelein and Simons, this is a plea for learning outcomes and demonstrates a vision of education in which the institution is no longer the point of departure. The main ambition in this discourse of education is the efficient and effective realisation of learning outcomes for all. Things like the place and time of learning and didactic and pedagogic support are means to an end: the acquisition of preplanned learning outcomes, which are a direct input for the knowledge economy. Masschelein and Simons' main critique here is that the main concern is no longer the educational institution; rather, the focus lies on the learning processes and mainly on the learning outcomes of the individual learner.
Brazil
In Brazil, a pedagogue is a multidisciplinary educator. Undergraduate education in Pedagogy qualifies students to become school administrators or coordinators at all educational levels, and also to become multidisciplinary teachers, such as pre-school, elementary and special teachers.
Denmark
In Scandinavia, a pedagogue (pædagog) is broadly speaking a practitioner of pedagogy, but the term is primarily reserved for individuals who occupy jobs in pre-school education (such as kindergartens and nurseries). A pedagogue can occupy various kinds of jobs, within this restrictive definition, e.g. in retirement homes, prisons, orphanages, and human resource management. When working with at-risk families or youths they are referred to as social pedagogues (socialpædagog).
The pedagogue's job is usually distinguished from a teacher's by primarily focusing on teaching children life-preparing knowledge such as social or non-curriculum skills, and cultural norms. There is also a very big focus on the care and well-being of the child. Many pedagogical institutions also practice social inclusion. The pedagogue's work also consists of supporting the child in their mental and social development.
In Denmark all pedagogues are educated at a series of national institutes for social educators located in all major cities. The education is a 3.5-year academic course, giving the student the title of a Bachelor in Social Education (Danish: Professionsbachelor som pædagog).
It is also possible to earn a master's degree in pedagogy/educational science from the University of Copenhagen. This BA and MA program has a more theoretical focus compared to the more vocational Bachelor in Social Education.
Hungary
In Hungary, the word pedagogue (pedagógus) is synonymous with teacher (tanár); therefore, teachers of both primary and secondary schools may be referred to as pedagogues, a word that also appears in the names of their lobbyist organizations and labor unions (e.g. Labor Union of Pedagogues, Democratic Labor Union of Pedagogues). However, undergraduate education in Pedagogy does not qualify students to become teachers in primary or secondary schools but enables them to apply to be educational assistants. As of 2013, the six-year training period was reinstated in place of the undergraduate and postgraduate division which characterized the previous practice.
Modern pedagogy
An article in the Kathmandu Post published on 3 June 2018 described the usual first day of school in an academic calendar. Teachers meet their students, each with distinct traits; among children and teenagers, differences outweigh similarities. Educators have to teach students with different cultural, social, and religious backgrounds. This situation entails a differentiated strategy in pedagogy, rather than the traditional approach, if teachers are to accomplish their goals efficiently.
American author and educator Carol Ann Tomlinson defined Differentiated Instruction as "teachers' efforts in responding to inconsistencies among students in the classroom." Differentiation refers to methods of teaching. She explained that Differentiated Instruction gives learners a variety of alternatives for acquiring information. Primary principles comprising the structure of Differentiated Instruction include formative and ongoing assessment, group collaboration, recognition of students' diverse levels of knowledge, problem-solving, and choice in reading and writing experiences.
Howard Gardner gained prominence in the education sector for his Multiple Intelligences Theory. He named seven of these intelligences in 1983: Linguistic, Logical and Mathematical, Visual and Spatial, Body and Kinesthetic, Musical and Rhythmic, Intrapersonal, and Interpersonal. Critics say the theory is based only on Gardner's intuition instead of empirical data. Another criticism is that the intelligences are too similar to personality types. Gardner's theory came from cognitive research and states that these intelligences help people to "know the world, understand themselves, and other people." These differences dispute an educational system that presumes students can "understand the same materials in the same manner and that a standardized, collective measure is very much impartial towards linguistic approaches in instruction and assessment as well as to some extent logical and quantitative styles."
Educational research
See also
Outline of education
List of important publications in philosophy
List of important publications in anthropology
List of important publications in economics
References
Sources
Further reading
Bruner, J. S. (1960). The Process of Education, Cambridge, Massachusetts: Harvard University Press.
Bruner, J. S. (1971). The Relevance of Education. New York, NY: Norton
Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge, Massachusetts: Belknap Press.
John Dewey, Experience and Education, 1938
Paulo Freire, Pedagogy of the Oppressed, 1968 (English translation: 1970)
Ivan Illich, Deschooling Society, 1971
David L. Kirp, The Sandbox Investment, 2007
Montessori, M. (1910). Antropologia Pedagogica.
Montessori, M. (1921). Manuale di Pedagogia Scientifica.
Montessori, M. (1934). Psico Aritmética.
Montessori, M. (1934). Psico Geométria.
Piaget, J. (1926). The Language and Thought of the Child. London: Routledge & Kegan.
Karl Rosenkranz (1848). Pedagogics as a System. Translated 1872 by Anna C. Brackett, R.P. Studley Company
Karl Rosenkranz (1899). The philosophy of education. D. Appleton and Co.
Friedrich Schiller, On the Aesthetic Education of Man, 1794
Vygotsky, L. (1962). Thought and Language. Cambridge, Massachusetts: MIT Press.
Didactics
Educational psychology
Teaching | 0.782306 | 0.999131 | 0.781626 |
Work (human activity) | Work or labor (or labour in British English) is the intentional activity people perform to support the needs and wants of themselves, others, or a wider community. In the context of economics, work can be viewed as the human activity that contributes (along with other factors of production) towards the goods and services within an economy.
Work is fundamental to all societies but can vary widely within and between them, from gathering natural resources by hand to operating complex technologies that substitute for physical or even mental effort by many human beings. All but the simplest tasks also require specific skills, equipment or tools, and other resources, such as material for manufacturing goods. Cultures and individuals across history have expressed a wide range of attitudes towards work. Outside of any specific process or industry, humanity has developed a variety of institutions for situating work in society. As humans are diurnal, they work mainly during the day.
Besides objective differences, one culture may organize or attach social status to work roles differently from another. Throughout history, work has been intimately connected with other aspects of society and politics, such as power, class, tradition, rights, and privileges. Accordingly, the division of labour is a prominent topic across the social sciences as both an abstract concept and a characteristic of individual cultures.
Some people have also engaged in critique of work and expressed a wish to abolish it, e.g. Paul Lafargue in his book The Right to Be Lazy.
Related terms include occupation and job; related concepts are job title and profession.
Description
Work can take many different forms, as varied as the environments, tools, skills, goals, and institutions around a worker. The term refers to the general activity of performing tasks, whether they are paid or unpaid, formal or informal. Work encompasses all types of productive activities, including employment, household chores, volunteering, and creative pursuits; broadly, it covers any effort or activity directed towards achieving a particular goal.
Because sustained effort is a necessary part of many human activities, what qualifies as work is often a matter of context. Specialization is one common feature that distinguishes work from other activities. For example, a sport is a job for a professional athlete who earns their livelihood from it, but a hobby for someone playing for fun in their community. An element of advance planning or expectation is also common, such as when a paramedic provides medical care while on duty and fully equipped rather than performing first aid off-duty as a bystander in an emergency. Self-care and basic habits like personal grooming are also not typically considered work.
While a later gift, trade, or payment may retroactively affirm an activity as productive, this can exclude work like volunteering or activities within a family setting, like parenting or housekeeping. In some cases, the distinction between work and other activities is simply a matter of common sense within a community. However, an alternative view is that labeling any activity as work is somewhat subjective, as Mark Twain expressed in the "whitewashed fence" scene of The Adventures of Tom Sawyer.
History
Humans have varied their work habits and attitudes over time. Hunter-gatherer societies vary their "work" intensity according to the seasonal availability of plants and the periodic migration of prey animals. The development of agriculture led to more sustained work practices, but work still changed with the seasons, with intense sustained effort during harvests (for example) alternating with less focused periods such as winters. In the early modern era, Protestantism and proto-capitalism emphasized the moral and personal advantages of hard work.
The periodic re-invention of slavery encouraged more consistent work activity in the working class, and capitalist industrialization intensified demands on workers to keep up with the pace of machines. Restrictions on the hours of work and the ages of workers followed, with worker demands for time off increasing, but modern office work retains traces of expectations of sustained, concentrated work, even in affluent societies.
Kinds of work
There are several ways to categorize and compare different kinds of work. In economics, one popular approach is the three-sector model or variations of it. In this view, an economy can be separated into three broad categories (a schematic sketch in code follows below):
Primary sector, which extracts food, raw materials, and other resources from the environment
Secondary sector, which manufactures physical products, refines materials, and provides utilities
Tertiary sector, which provides services and helps administer the economy
In complex economies with high specialization, these categories are further subdivided into industries that produce a focused subset of products or services. Some economists also propose additional sectors such as a "knowledge-based" quaternary sector, but this division is neither standardized nor universally accepted.
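The sketch below schematically illustrates the three-sector classification; the example industries and their assignments are illustrative, not an official taxonomy.

```python
# Schematic illustration of the three-sector model with example industries.

SECTORS = {
    "primary": ["agriculture", "fishing", "mining", "forestry"],
    "secondary": ["car manufacturing", "construction", "oil refining", "utilities"],
    "tertiary": ["retail", "health care", "education", "software services"],
}

def sector_of(industry: str) -> str:
    """Return the broad sector an industry belongs to, if listed."""
    for sector, industries in SECTORS.items():
        if industry in industries:
            return sector
    return "unclassified"

print(sector_of("fishing"))       # primary
print(sector_of("construction"))  # secondary
print(sector_of("education"))     # tertiary
```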
Another common way of contrasting work roles is ranking them according to a criterion, such as the amount of skill, experience, or seniority associated with a role. The progression from apprentice through journeyman to master craftsman in the skilled trades is one example with a long history and analogs in many cultures.
Societies also commonly rank different work roles by perceived status, but this is more subjective and goes beyond clear progressions within a single industry. Some industries may be seen as more prestigious than others overall, even if they include roles with similar functions. At the same time, a wide swathe of roles across all industries may be afforded more status (e.g. managerial roles) or less (like manual labor) based on characteristics such as a job being low-paid or dirty, dangerous and demeaning.
Other social dynamics, like how labor is compensated, can even exclude meaningful tasks from a society's conception of work. For example, in modern market-economies where wage labor or piece work predominates, unpaid work may be omitted from economic analysis or even cultural ideas of what qualifies as work.
At a political level, different roles can fall under separate institutions where workers have qualitatively different power or rights. In the extreme, the least powerful members of society may be stigmatized (as in untouchability) or even violently forced (via slavery) into performing the least desirable work. Complementary to this, elites may have exclusive access to the most prestigious work, largely symbolic sinecures, or even a "life of leisure".
Unusual occupations
In the diverse world of work, there exist some bizarre and unusual occupations that defy conventional expectations. These unique jobs showcase the creativity and adaptability of humans in their pursuit of a livelihood.
Workers
Individual workers require sufficient health and resources to succeed in their tasks.
Physiology
As living beings, humans require a baseline of good health, nutrition, rest, and other physical needs in order to reliably exert themselves. This is particularly true of physical labor that places direct demands on the body, but even largely mental work can cause stress from problems like long hours, excessive demands, or a hostile workplace.
Particularly intense forms of manual labor often lead workers to develop physical strength necessary for their job. However, this activity does not necessarily improve a worker's overall physical fitness like exercise, due to problems like overwork or a small set of repetitive motions. In these physical jobs, maintaining good posture or movements with proper technique is also a crucial skill for avoiding injury. Ironically, white-collar workers who are sedentary throughout the workday may also suffer from long-term health problems due to a lack of physical activity.
Training
Learning the necessary skills for work is often a complex process in its own right, requiring intentional training. In traditional societies, know-how for different tasks can be passed to each new generation through oral tradition and working under adult guidance. For work that is more specialized and technically complex, however, a more formal system of education is usually necessary. A complete curriculum ensures that a worker in training has some exposure to all major aspects of their specialty, in both theory and practice.
Equipment and technology
Tool use has been a central aspect of human evolution and is also an essential feature of work. Even in technologically advanced societies, many workers' toolsets still include a number of smaller hand-tools, designed to be held and operated by a single person, often without supplementary power. This is especially true when tasks can be handled by one or a few workers, do not require significant physical power, and are somewhat self-paced, like in many services or handicraft manufacturing.
For other tasks needing large amounts of power, such as in the construction industry, or involving a highly-repetitive set of simple actions, like in mass manufacturing, complex machines can carry out much of the effort. The workers present will focus on more complex tasks, operating controls, or performing maintenance. Over several millennia, invention, scientific discovery, and engineering principles have allowed humans to proceed from creating simple machines that merely redirect or amplify force, through engines for harnessing supplementary power sources, to today's complex, regulated systems that automate many steps within a work process.
In the 20th century, the development of electronics and new mathematical insights led to the creation and widespread adoption of fast, general-purpose computers. Just as mechanization can substitute for the physical labor of many human beings, computers allow for the partial automation of mental work previously carried out by human workers, such as calculations, document transcription, and basic customer service requests. Research and development of related technologies like machine learning and robotics continues into the 21st century.
Beyond tools and machines used to actively perform tasks, workers benefit when other passive elements of their work and environment are designed properly. This includes everything from personal items like workwear and safety gear to features of the workspace itself like furniture, lighting, air quality, and even the underlying architecture.
In society
Organizations
Even if workers are personally ready to perform their jobs, coordination is required for any effort outside of individual subsistence to succeed. At the level of a small team working on a single task, cooperation and good communication may suffice. As the complexity of a work process increases, however, requiring more planning or more workers focused on specific tasks, a reliable organization becomes more critical.
Economic organizations often reflect social thought common to their time and place, such as ideas about human nature or hierarchy. These unique organizations can also be historically significant, even forming major pillars of an economic system. In European history, for instance, the decline of guilds and rise of joint-stock companies goes hand-in-hand with other changes, like the growth of centralized states and capitalism.
In industrialized economies, labor unions are another significant organization. In isolation, a worker who is easily replaceable in the labor market has little power to demand better wages or conditions. By banding together and interacting with business owners as a corporate entity, the same workers can claim a larger share of the value created by their labor. While a union does require workers to sacrifice some autonomy in relation to their coworkers, it can grant workers more control over the work process itself in addition to material benefits.
Institutions
The need for planning and coordination extends beyond individual organizations to society as a whole. Every successful work project requires effective resource allocation to provide necessities, materials, and investment (such as equipment and facilities). In smaller, traditional societies, these aspects can be mostly regulated through custom, though as societies grow, more extensive methods become necessary.
These complex institutions, however, still have roots in common human activities. Even the free markets of modern capitalist societies rely fundamentally on trade, while command economies, such as in many communist states during the 20th century, rely on a highly bureaucratic and hierarchical form of redistribution.
Other institutions can affect workers even more directly by delimiting practical day-to-day life or basic legal rights. For example, a caste system may restrict families to a narrow range of jobs, inherited from parent to child. In serfdom, a peasant has more rights than a slave but is attached to a specific piece of land and largely under the power of the landholder, even requiring permission to physically travel outside the land-holding. How institutions play out in individual workers' lives can be complex too; in most societies where wage-labor predominates, workers possess equal rights by law and mobility in theory. Without social support or other resources, however, the necessity of earning a livelihood may force a worker to cede some rights and freedoms in fact.
Values
Societies and subcultures may value work in general, or specific kinds of it, very differently. When social status or virtue is strongly associated with leisure and opposed to tedium, work itself can become indicative of low social rank and be devalued. In the opposite case, a society may hold strongly to a work ethic where work itself is seen as virtuous. For example, German sociologist Max Weber hypothesized that European capitalism originated in a Protestant work ethic, which emerged with the Reformation. Many Christian theologians appeal to the Old Testament's Book of Genesis in regard to work. According to Genesis 1, human beings were created in the image of God, and according to Genesis 2, Adam was placed in the Garden of Eden to "work it and keep it". Dorothy L. Sayers has argued that "work is the natural exercise and function of man – the creature who is made in the image of his Creator." Likewise, John Paul II said that by his work, man shares in the image of his creator.
Christian theologians see the fall of man as profoundly affecting human work. In Genesis 3:17, God said to Adam, "cursed is the ground because of you; in pain you shall eat of it all the days of your life". Leland Ryken pointed out that, because of the fall, "many of the tasks we perform in a fallen world are inherently distasteful and wearisome." Christian theologians interpret that through the fall, work has become toil, but John Paul II says that work is a good thing for man in spite of this toil, and "perhaps, in a sense, because of it", since work is something that corresponds to man's dignity and through it he achieves fulfilment as a human being. The fall also means that a work ethic is needed: as a result of the fall, work has become subject to the abuses of idleness on the one hand, and overwork on the other. Drawing on Aristotle, Ryken suggests that the moral ideal is the golden mean between the two extremes of being lazy and being a workaholic.
Some Christian theologians also draw on the doctrine of redemption to discuss the concept of work. Oliver O'Donovan said that although work is a gift of creation, it is "ennobled into mutual service in the fellowship of Christ."
Pope Francis is critical of the hope that technological progress might eliminate or diminish the need for work: "the goal should not be that technological progress increasingly replace human work, for this would be detrimental to humanity", and McKinsey consultants suggest that work will change, but not end, as a result of automation and the increasing adoption of artificial intelligence.
For some, work may hold a spiritual value in addition to any secular notions. Especially in some monastic or mystical strands of several religions, simple manual labor may be held in high regard as a way to maintain the body, cultivate self-discipline and humility, and focus the mind.
Current issues
The contemporary world economy has brought many changes, overturning some previously widespread labor issues. At the same time, some longstanding issues remain relevant, and other new ones have emerged. One issue that continues despite many improvements is slave labor and human trafficking. Though ideas about universal rights and the economic benefits of free labor have significantly diminished the prevalence of outright slavery, it continues in lawless areas, or in attenuated forms on the margins of many economies.
Another difficulty, which has emerged in most societies as a result of urbanization and industrialization, is unemployment. While the shift from a subsistence economy usually increases the overall productivity of society and lifts many out of poverty, it removes a baseline of material security from those who cannot find employment or other support. Governments have tried a range of strategies to mitigate the problem, such as improving the efficiency of job matching, conditionally providing welfare benefits or unemployment insurance, or even directly overriding the labor market through work-relief programs or a job guarantee. Since a job forms a major part of many workers' self-identity, unemployment can have severe psychological and social consequences beyond the financial insecurity it causes.
One more issue, which may not directly interfere with the functioning of an economy but can have significant indirect effects, is when governments fail to account for work occurring out of view of the public sphere. This may be important, uncompensated work occurring every day in private life, or it may be criminal activity involving clear but furtive economic exchanges. By ignoring or failing to understand these activities, economic policies can have counter-intuitive effects and cause strains on the community and society.
Child labour
For various reasons, such as the availability of cheap labour, the poor economic situation of deprived classes, weak laws and legal supervision, and migration, child labour remains widespread in many parts of the world.
According to the World Bank, the global rate of child labour decreased from 25% to 10% between the 1960s and the early years of the 21st century. Nevertheless, because the world's population also grew, the total number of child labourers remains high: UNICEF and the ILO estimated that 168 million children aged 5–17 worldwide were involved in some form of child labour in 2013.
Some scholars, such as Jean-Marie Baland and James A. Robinson, suggest that any labour by children aged 18 years or younger is wrong, since it encourages illiteracy, inhumane work and lower investment in human capital. In other words, there are moral and economic reasons that justify a blanket ban on labour by children aged 18 years or younger, everywhere in the world. On the other hand, scholars such as Christiaan Grootaert and Kameel Ahmady believe that child labour is a symptom of poverty: if laws ban most lawful work that enables the poor to survive, the informal economy, illicit operations and underground businesses will thrive.
See also
In modern market-economies:
Career
Employment
Job guarantee
Labour economics
Profession
Trade union
Volunteering
Wage slavery
Workaholic
Labor issues:
Annual leave
Informal economy
Job strain
Karoshi
Labor rights
Leave of absence
Minimum wage
Occupational safety and health
Paid time off
Sick leave
Unemployment
Unfree labor
Unpaid work
Working poor
Workplace safety standards
Related concepts:
Critique of work
Effects of overtime
Ergonomics
Flow (psychology)
Helping behavior
Occupational burnout
Occupational stress
Post-work society
Problem solving
Refusal of work
Green politics
Green politics, or ecopolitics, is a political ideology that aims to foster an ecologically sustainable society, often but not always rooted in environmentalism, nonviolence, social justice and grassroots democracy. It began taking shape in the western world in the 1970s; since then green parties have developed and established themselves in many countries around the globe and have achieved some electoral success.
The political term green was used initially in relation to die Grünen (German for "the Greens"), a green party formed in the late 1970s. The term political ecology is sometimes used in academic circles, but it has come to represent an interdisciplinary field of study as the academic discipline offers wide-ranging studies integrating ecological social sciences with political economy in topics such as degradation and marginalization, environmental conflict, conservation and control and environmental identities and social movements.
Supporters of green politics share many ideas with the conservation, environmental, feminist and peace movements. In addition to democracy and ecological issues, green politics is concerned with civil liberties, social justice, nonviolence, sometimes variants of localism and tends to support social progressivism. Green party platforms are largely considered left in the political spectrum. The green ideology has connections with various other ecocentric political ideologies, including ecofeminism, eco-socialism and green anarchism, but to what extent these can be seen as forms of green politics is a matter of debate. As the left-wing green political philosophy developed, there also came into separate existence opposite movements on the right-wing that include ecological components such as eco-capitalism and green conservatism.
History
Influences
Adherents to green politics tend to consider it to be part of a higher worldview and not simply a political ideology. Green politics draws its ethical stance from a variety of sources, from the values of indigenous peoples to the ethics of Mahatma Gandhi, Baruch Spinoza, and Jakob von Uexküll. These thinkers influenced green thought through their advocacy of long-term, seventh-generation foresight and of the personal responsibility of every individual to make moral choices.
Unease about adverse consequences of human actions on nature predates the modern concept of environmentalism. Social commentators as far apart as ancient Rome and China complained of air, water and noise pollution.
The philosophical roots of environmentalism can be traced back to enlightenment thinkers such as Rousseau in France, and later the author and naturalist Thoreau in America. Organised environmentalism began in late 19th-century Europe and the United States, as a reaction to the Industrial Revolution with its emphasis on unbridled economic expansion.
"Green politics" first began as conservation and preservation movements, such as the Sierra Club, founded in San Francisco in 1892.
Left-green platforms of the form that make up the green parties today draw terminology from the science of ecology, and policy from environmentalism, deep ecology, feminism, pacifism, anarchism, libertarian socialism, libertarian possibilism, social democracy, eco-socialism, and/or social ecology or green libertarianism. In the 1970s, as these movements grew in influence, green politics arose as a new philosophy which synthesized their goals. The Green Party political movement should not be confused with the observation, made by a minority of authors, that some far-right and fascist parties have on occasion tied nationalism to a sort of green politics promoting environmentalism as a form of pride in the "motherland".
Early development
In June 1970, a Dutch group called Kabouters won 5 of the 45 seats on the Amsterdam Gemeenteraad (City Council), as well as two seats each on councils in The Hague and Leeuwarden and one seat apiece in Arnhem, Alkmaar and Leiden. The Kabouters were an outgrowth of Provo's environmental White Plans and they proposed "Groene Plannen" ("Green Plans").
The first political party to be created with its basis in environmental issues was the United Tasmania Group, founded in Australia in March 1972 to fight against deforestation and the creation of a dam that would damage Lake Pedder; whilst it only gained three percent in state elections, it inspired the creation of Green parties all over the world. In May 1972, a meeting at Victoria University of Wellington launched the Values Party, the world's first countrywide green party to contest Parliamentary seats nationally. In November 1972, Europe's first green party, PEOPLE, came into existence in the UK.
The German Green Party was not the first Green Party in Europe to have members elected nationally, but the impression was created that they had been, because they attracted the most media attention. The German Greens contested their first national election in the 1980 federal election. They started as a provisional coalition of civic groups and political campaigns which, together, felt their interests were not expressed by the conventional parties. After contesting the 1979 European elections, they held a conference which identified the Four Pillars of the Green Party, which all groups in the original alliance could accept as the basis of a common party platform, welding these groups together as a single party. This statement of principles has since been utilised by many Green Parties around the world. It was this party that first coined the term "Green" ("Grün" in German) and adopted the sunflower symbol. The term "Green" was coined by one of the founders of the German Green Party, Petra Kelly, after she visited Australia and saw the actions of the Builders Labourers Federation and their green ban actions. In the 1983 federal election, the Greens won 27 seats in the Bundestag.
Further developments
The first Canadian foray into green politics took place in the Maritimes when 11 independent candidates (including one in Montreal and one in Toronto) ran in the 1980 federal election under the banner of the Small Party. Inspired by Schumacher's Small is Beautiful, the Small Party candidates ran for the express purpose of putting forward an anti-nuclear platform in that election. It was not registered as an official party, but some participants in that effort went on to form the Green Party of Canada in 1983 (the Ontario Greens and British Columbia Greens were also formed that year). Green Party of Canada leader Elizabeth May was the instigator and one of the candidates of the Small Party, and she was eventually elected as a member of the Green Party in the 2011 Canadian federal election.
In Finland, the Green League became the first European Green Party to form part of a state-level Cabinet in 1995. The German Greens followed, forming a government with the Social Democratic Party of Germany (the "Red-Green Alliance") from 1998 to 2005. In 2001, they reached an agreement to end reliance on nuclear power in Germany, and agreed to remain in coalition and support the German government of Chancellor Gerhard Schröder in the 2001 Afghan War. This put them at odds with many Greens worldwide, but demonstrated that they were capable of difficult political tradeoffs.
In Latvia, Indulis Emsis, leader of the Green Party and part of the Union of Greens and Farmers, an alliance of a Nordic agrarian party and the Green Party, was Prime Minister of Latvia for ten months in 2004, making him the first Green politician to lead a country. In 2015, Emsis' party colleague, Raimonds Vējonis, was elected President of Latvia by the Latvian parliament. Vējonis became the first green head of state worldwide.
In the German state of Baden-Württemberg, the Green Party became the leader of the coalition with the Social Democrats after finishing second in the 2011 Baden-Württemberg state election. In the following state election in 2016, the Green Party became the strongest party in a German Landtag for the first time.
In 2016, the former leader of the Austrian Greens (1997 to 2008), Alexander Van der Bellen, officially running as an independent, won the 2016 Austrian presidential election, making him the second green head of state worldwide and the first directly elected by popular vote. Van der Bellen placed second in the election's first round with 21.3% of the vote, the best result for the Austrian Greens in their history. He won the second-round run-off against the far-right Freedom Party's Norbert Hofer with 53.8% of the votes, making him the first president of Austria who was not backed by either the People's Party or the Social Democratic Party.
Core tenets
According to Derek Wall, a prominent British green proponent, there are four pillars that define green politics:
Ecological wisdom
Social justice
Grassroots democracy
Nonviolence
In 1984, the Green Committees of Correspondence in the United States expanded the Four Pillars into Ten Key Values, which further included:
Decentralization
Community-based economics
Post-patriarchal values (later translated to ecofeminism and Ethics of care)
Respect for diversity
Global responsibility
Future focus
In 2001, the Global Greens were organized as an international green movement. The Global Greens Charter identified six guiding principles:
Ecological wisdom
Social justice
Participatory democracy
Nonviolence
Sustainability
Respect for diversity
Ecology
Economics
Green economics focuses on the importance of the health of the biosphere to human well-being. Consequently, most Greens distrust conventional capitalism, as it tends to emphasize economic growth while ignoring ecological health; the "full cost" of economic growth often includes damage to the biosphere, which is unacceptable according to green politics. Green economics considers such growth to be "uneconomic growth"— material increase that nonetheless lowers the overall quality of life. Green economics inherently takes a longer-term perspective than conventional economics, because such a loss in quality of life is often delayed. According to green economics, the present generation should not borrow from future generations, but rather attempt to achieve what Tim Jackson calls "prosperity without growth".
Some Greens refer to productivism, consumerism and scientism as "grey" economic views, as contrasted with "green" ones. "Grey" approaches focus on behavioral changes. Therefore, adherents to green politics advocate economic policies designed to safeguard the environment. Greens want governments to stop subsidizing companies that waste resources or pollute the natural world, subsidies that Greens refer to as "dirty subsidies". Some currents of green politics place automobile and agribusiness subsidies in this category, as they may harm human health. Conversely, Greens look to a green tax shift that is seen to encourage both producers and consumers to make ecologically friendly choices.
Many aspects of green economics could be considered anti-globalist. Many left-wing greens consider economic globalization a threat to well-being that will replace natural environments and local cultures with a single trade economy, termed the global economic monoculture. This is not a universal policy of greens, however, as green liberals and green conservatives support a regulated free market economy with additional measures to advance sustainable development.
Since green economics emphasizes biospheric health and biodiversity, an issue outside the traditional left-right spectrum, different currents within green politics incorporate ideas from socialism and capitalism. Greens on the Left are often identified as eco-socialists, who merge ecology and environmentalism with socialism and Marxism and blame the capitalist system for environmental degradation, social injustice, inequality and conflict. Eco-capitalists, on the other hand, believe that the free market system, with some modification, is capable of addressing ecological problems. This belief is documented in the business experiences of eco-capitalists in the book The Gort Cloud, which describes the "gort cloud" as the green community that supports eco-friendly businesses.
Participatory democracy
Since the beginning, green politics has emphasized local, grassroots-level political activity and decision-making. According to its adherents, it is crucial that citizens play a direct role in the decisions that influence their lives and their environment. Therefore, green politics seeks to increase the role of deliberative democracy, based on direct citizen involvement and consensus decision making, wherever it is feasible.
Green politics also encourages political action on the individual level, such as ethical consumerism, or buying things that are made according to environmentally ethical standards. Indeed, many green parties emphasize individual and grassroots action at the local and regional levels over electoral politics. Historically, green parties have grown at the local level, gradually gaining influence and spreading to regional or provincial politics, only entering the national arena when there is a strong network of local support.
In addition, many greens believe that governments should not levy taxes against strictly local production and trade. Some Greens advocate new ways of organizing authority to increase local control, including urban secession, bioregional democracy, and co-operative/local stakeholder ownership.
Other issues
Although Greens in the United States "call for an end to the 'War on Drugs'" and "for the decriminalization of victimless crimes", they also call for developing "a firm approach to law enforcement that directly addresses violent crime, including trafficking in hard drugs".
In Europe, some green parties have tended to support the creation of a democratic federal Europe, while others have opposed European integration.
In the spirit of nonviolence, green politics opposes the war on terrorism and the curtailment of civil rights, focusing instead on nurturing deliberative democracy in war-torn regions and the construction of a civil society with an increased role for women.
In keeping with their commitment to the preservation of diversity, greens are often committed to the maintenance and protection of indigenous communities, languages, and traditions. An example of this is the Irish Green Party's commitment to the preservation of the Irish language. Parts of the green movement have focused on divestment from fossil fuels. Academics Stand Against Poverty states "it is paradoxical for universities to remain invested in fossil fuel companies". Thomas Pogge says that the fossil fuel divestment movement can increase political pressure at events like the international climate change conference (COP). Alex Epstein of Forbes notes that it is hypocritical to ask for divestment without a boycott and that a boycott would be more effective. Academic institutions that have led by example include Stanford University, Syracuse University, Sterling College and more than 20 others. A number of cities, counties and religious institutions have also joined the movement to divest.
Green politics mostly opposes nuclear fission power and the buildup of persistent organic pollutants, supporting adherence to the precautionary principle, by which technologies are rejected unless they can be proven not to cause significant harm to the health of living things or the biosphere.
Green platforms generally favor tariffs on fossil fuels, restricting genetically modified organisms, and protections for ecoregions or communities.
Green parties generally support the phasing out of nuclear power, coal, and incineration of waste. However, the Green League in Finland has come out against its previous anti-nuclear stance, stating that addressing global warming in the next 20 years is impossible without expanding nuclear power. Its officials have proposed using nuclear-generated heat to heat buildings, replacing the use of coal and biomass to reach zero-emission outputs by 2040.
Organization
Local movements
Green ideology emphasizes participatory democracy and the principle of "thinking globally, acting locally." As such, the ideal Green Party is thought to grow from the bottom up, from neighborhood to municipal to (eco-)regional to national levels. The goal is to rule by a consensus decision making process.
Strong local coalitions are considered a prerequisite to higher-level electoral breakthroughs. Historically, the growth of Green parties has been sparked by a single issue where Greens can appeal to ordinary citizens' concerns. In Germany, for example, the Greens' early opposition to nuclear power won them their first successes in the federal elections.
Global organization
There is a growing level of global cooperation between Green parties, and global gatherings of Green parties are now held. The first Planetary Meeting of Greens was held 30–31 May 1992, in Rio de Janeiro, immediately preceding the United Nations Conference on Environment and Development held there. More than 200 Greens from 28 nations attended. The first formal Global Greens Gathering took place in Canberra, in 2001, with more than 800 Greens from 72 countries in attendance. The second Global Green Congress was held in São Paulo, Brazil, in May 2008, when 75 parties were represented.
Global Green networking dates back to 1990. Following the Planetary Meeting of Greens in Rio de Janeiro, a Global Green Steering Committee was created, consisting of two seats for each continent. In 1993 this Global Steering Committee met in Mexico City and authorized the creation of a Global Green Network including a Global Green Calendar, Global Green Bulletin, and Global Green Directory. The Directory was issued in several editions in the next years. In 1996, 69 Green Parties from around the world signed a common declaration opposing French nuclear testing in the South Pacific, the first statement of global greens on a current issue. A second statement was issued in December 1997, concerning the Kyoto climate change treaty.
At the 2001 Canberra Global Gathering, delegates for Green Parties from 72 countries decided upon a Global Greens Charter which proposes six key principles. Over time, each Green Party can discuss this and organize itself to approve it, some by using it in the local press, some by translating it for their web site, some by incorporating it into their manifesto, and some by incorporating it into their constitution. This process is taking place gradually, with online dialogue enabling parties to report how far they have taken it.
The Gatherings also agree on organizational matters. The first Gathering voted unanimously to set up the Global Green Network (GGN). The GGN is composed of three representatives from each Green Party. A companion organization was set up by the same resolution: Global Green Coordination (GGC). This is composed of three representatives from each Federation (Africa, Europe, The Americas, Asia/Pacific, see below). Discussion of the planned organization took place in several Green Parties prior to the Canberra meeting. The GGC communicates chiefly by email. Any agreement by it has to be by unanimity of its members. It may identify possible global campaigns to propose to Green Parties worldwide. The GGC may endorse statements by individual Green Parties. For example, it endorsed a statement by the US Green Party on the Israel-Palestine conflict.
Thirdly, Global Green Gatherings are an opportunity for informal networking, from which joint campaigning may arise. One example is the campaign to protect the New Caledonian coral reef by getting it nominated for World Heritage status, a joint campaign by the New Caledonia Green Party, New Caledonian indigenous leaders, the French Green Party, and the Australian Greens. Another example concerns Ingrid Betancourt, the leader of the Green Party in Colombia, the Green Oxygen Party (Partido Verde Oxigeno). Ingrid Betancourt and the party's Campaign Manager, Claire Rojas, were kidnapped by a hard-line faction of FARC on 7 March 2002, while travelling in FARC-controlled territory. Betancourt had spoken at the Canberra Gathering, making many friends. As a result, Green Parties all over the world have organized, pressing their governments to bring pressure to bear. For example, Green Parties in African countries, Austria, Canada, Brazil, Peru, Mexico, France, Scotland, Sweden and other countries have launched campaigns calling for Betancourt's release. Bob Brown, the leader of the Australian Greens, went to Colombia, as did an envoy from the European Federation, Alain Lipietz, who issued a report. The four Federations of Green Parties issued a message to FARC. Ingrid Betancourt was rescued by the Colombian military in Operation Jaque in 2008.
Global Green meetings
Separately from the Global Green Gatherings, Global Green Meetings take place. For instance, one took place on the fringe of the World Summit on Sustainable Development in Johannesburg. Green Parties attended from Australia, Taiwan, Korea, South Africa, Mauritius, Uganda, Cameroon, Republic of Cyprus, Italy, France, Belgium, Germany, Finland, Sweden, Norway, the US, Mexico and Chile.
The Global Green Meeting discussed the situation of Green Parties on the African continent; heard a report from Mike Feinstein, former mayor of Santa Monica, about setting up a web site of the GGN; discussed procedures for the better working of the GGC; and decided two topics on which the Global Greens could issue statements in the near future: Iraq and the 2003 WTO meeting in Cancun.
Green federations
Affiliated members in Asia, Pacific and Oceania form the Asia-Pacific Green Network.
The member parties of the Global Greens are organised into four continental federations:
Federation of Green Parties of Africa
Federation of the Green Parties of the Americas / Federación de los Partidos Verdes de las Américas
Asia-Pacific Green Network
European Green Party
The European Federation of Green Parties formed itself as the European Green Party on 22 February 2004, in the run-up to European Parliament elections in June 2004, a further step in trans-national integration.
Green political parties
Green movements are calling for social change to reduce the misuse of natural resources. These include grassroots non-governmental organizations like Greenpeace and green parties:
Alliance 90/The Greens
Australian Greens
Austrian Green Party
Belarusian Green Party
Democratic Renewal of Macedonia
Dialogue for Hungary, LMP – Hungary's Green Party
Ecologist Greens (Greece)
The Ecologists (France)
Green Europe (Italy)
Green League (Finland)
Greens of Andorra
Green Party of Aotearoa New Zealand
Green Party of Armenia
Green Party (Brazil)
Green Party of Canada
Green Party (Czech Republic)
Green Party of England and Wales
Green Party (Ireland)
Green Party (Israel)
Green Party of Lebanon
Green Party (Norway)
Green Party (Romania)
Green Party (Sweden)
Green Party of Taiwan
Green Party (Turkey)
Green Party of the United States
Groen, Ecolo
GroenLinks
Hariyali Nepal Party
Latvian Green Party
Left-Green Movement
Red–Green Alliance (Denmark)
Scottish Green Party
Socialist People's Party (Denmark)
The Alternative (Denmark)
See also
Outline of green politics (list of related articles, organized for easy browsing)
Political colour for a list and summary of all political colours
Further reading
Dobson, Andrew (2007). Green Political Thought, 4th edition (1st edition 1980). London/New York: Routledge. (Hardcover)
Spretnak, Charlene (1986). The Spiritual Dimension of Green Politics. Santa Fe, N.M.: Bear & Co. 95 p.
External links
Global Greens Charter, Canberra 2001
Ecology and Society – book on politics and sociology of environmentalism
Agroecology
Agroecology is an academic discipline that studies ecological processes applied to agricultural production systems. Bringing ecological principles to bear can suggest new management approaches in agroecosystems. The term can refer to a science, a movement, or an agricultural practice. Agroecologists study a variety of agroecosystems. The field of agroecology is not associated with any one particular method of farming, whether it be organic, regenerative, integrated, or industrial, intensive or extensive, although some use the name specifically for alternative agriculture.
Definition
Agroecology is defined by the OECD as "the study of the relation of agricultural crops and environment." Dalgaard et al. refer to agroecology as the study of the interactions between plants, animals, humans and the environment within agricultural systems. Francis et al. use the definition in the same way, but think it should be restricted to growing food.
Agroecology is a holistic approach that seeks to reconcile agriculture and local communities with natural processes for the common benefit of nature and livelihoods.
Agroecology is inherently multidisciplinary, including sciences such as agronomy, ecology, environmental science, sociology, economics, history and others. Agroecology uses different sciences to understand elements of ecosystems such as soil properties and plant-insect interactions, as well as using social sciences to understand the effects of farming practices on rural communities, economic constraints to developing new production methods, or cultural factors determining farming practices. The system properties of agroecosystems studied may include: productivity, stability, sustainability and equitability. Agroecology is not limited to any one scale; it can range from an individual gene to an entire population, or from a single field in a given farm to global systems.
Wojtkowski differentiates the ecology of natural ecosystems from agroecology on the grounds that economics plays no role in natural ecosystems, whereas in agroecology, which focuses on organisms within planned and managed environments, human activities, and hence economics, are the primary governing forces. Wojtkowski discusses the application of agroecology in agriculture, forestry and agroforestry in his 2002 book.
Varieties
Buttel identifies four varieties of agroecology in a 2003 conference paper. The main varieties he calls ecosystem agroecology, which he claims derives from the ecosystem ecology of Howard T. Odum and focuses less on rural sociology, and agronomic agroecology, which he identifies as being oriented towards developing knowledge and practices to make agriculture more sustainable. The third, long-standing variety Buttel calls ecological political economy, which he defines as critiquing the politics and economy of agriculture and as weighted towards radical politics. The smallest and newest variety Buttel coins agro-population ecology, which he says is very similar to the first, but derives from the science of ecology primarily based on the more modern theories of population ecology, such as the population dynamics of constituent species, their relationships to climate and biogeochemistry, and the role of genetics.
Dalgaard et al. identify different points of view: what they call early "integrative" agroecology, such as the investigations of Henry Gleason or Frederic Clements; "hard" agroecology, a term they credit to Hecht (1995), which they identify as more reactive to environmental politics but rooted in measurable units and technology; and their own "soft" agroecology, which they define as trying to measure agroecology in terms of "soft capital" such as culture or experience.
The term agroecology may be used to refer to a science, a movement or a practice. Using the name for a movement became more common in the 1990s, especially in the Americas. Miguel Altieri, whom Buttel groups with the "political" agroecologists, has published prolifically in this sense. He has applied agroecology to sustainable agriculture, alternative agriculture and traditional knowledge.
History
Overview
The history of agroecology depends on whether it is taken as a body of thought or a method of practice, as many indigenous cultures around the world have historically used, and currently use, practices that would now be considered applications of agroecological knowledge. Examples include the Maori, the Nahua, and many other indigenous peoples.
The Mexica people who inhabited Tenochtitlan before the colonization of the Americas used a system called chinampas, which in many ways mirrors the use of composting in sustainable agriculture today. The use of agroecological practices such as nutrient cycling and intercropping spans hundreds of years and many different cultures. Indigenous peoples also currently make up a large proportion of the people using agroecological practices and of those involved in the movement to shift more farming into an agroecological paradigm.
Pre-WWII academic thought
According to Gliessman and Francis et al., agronomy and ecology were first linked with the study of crop ecology by Klages in 1928. This work is a study of where crops can best be grown.
Wezel et al. say the first mention of the term agroecology was in 1928, with the publication of the term by Basil Bensin. Dalgaard et al. claim the German zoologist Friederichs was the first to use the name in 1930 in his book on the zoology of agriculture and forestry, followed by American crop physiologist Hansen in 1939, both using the word for the application of ecology within agriculture.
Post-WWII academic thought
Tischler's 1965 book Agrarökologie may be the first to be titled 'agroecology'. He analyzed the different components (plants, animals, soils and climate) and their interactions within an agroecosystem as well as the impact of human agricultural management on these components.
Gliessman describes that post-WWII ecologists gave more focus to experiments in the natural environment, while agronomists dedicated their attention to the cultivated systems in agriculture, but in the 1970s, as agronomists saw the value of ecology and ecologists began to use agricultural systems as study plots, studies in agroecology grew more rapidly. More books and articles using the concept of agroecosystems and the word agroecology started to appear in the 1970s. According to Dalgaard et al., it probably was the concept of "process ecology", such as studied by Arthur Tansley in the 1930s, which inspired Harper's 1974 concept of agroecosystems, which they consider the foundation of modern agroecology. Dalgaard et al. claim Frederic Clements's investigations on ecology using social sciences, community ecology and a "landscape perspective" are agroecology, as is Henry Gleason's investigation of the population ecology of plants using different scientific disciplines. Ethnobotanist Efraim Hernandez X.'s work on traditional knowledge in Mexico in the 1970s led to new education programs in agroecology.
Works such as Silent Spring and The Limits to Growth made the public aware of the environmental costs of agricultural production, prompting more research into sustainability starting in the 1980s. The view that the socio-economic context is fundamental was used in the 1982 article Agroecologia del Tropico Americano by Montaldo, who argues that this context cannot be separated from agriculture when designing agricultural practices. In 1985, Miguel Altieri studied how the consolidation of farms and cropping systems affects pest populations, and Gliessman how socio-economic, technological, and ecological components gave rise to producer choices of food production systems.
In 1995, Edens et al. in Sustainable Agriculture and Integrated Farming Systems considered the economics of systems, ecological impacts, and ethics and values in agriculture.
Social movements
Several social movements have adopted agroecology as part of their larger organizing strategy. Groups like La Via Campesina have used agroecology as a method for achieving food sovereignty. Agroecology has also been utilized by farmers to resist global agricultural development patterns associated with the green revolution.
By region
Latin America
Africa
Garí wrote two papers for the FAO in the early 2000s about using an agroecological approach, which he called "agrobiodiversity", to empower farmers to cope with the impacts of AIDS on rural areas in Africa.
In 2011, the first encounter of agroecology trainers took place in Zimbabwe and issued the Shashe Declaration.
Europe
The European Commission supports the use of sustainable practices, such as precision agriculture, organic farming, agroecology, agroforestry and stricter animal welfare standards through the Green Deal and the Farm to Fork Strategy.
Debate
Within academic research areas that focus on topics related to agriculture or ecology, such as agronomy, veterinarian science, environmental science, and others, there is much debate regarding what model of agriculture or agroecology should be supported through policy. Agricultural departments of different countries support agroecology to varying degrees, with the UN perhaps its biggest proponent.
Further reading
Buttel, F.H. and M.E. Gertler 1982. Agricultural structure, agricultural policy and environmental quality. Agriculture and Environment 7: 101–119.
Carrol, C. R., J.H. Vandermeer and P.M. Rosset. 1990. Agroecology. McGraw Hill Publishing Company, New York.
Paoletti, M.G., B.R. Stinner, and G.G. Lorenzoni, ed. Agricultural Ecology and Environment. New York: Elsevier Science Publisher B.V., 1989.
Robertson, Philip, and Scott M. Swinton. "Reconciling agricultural productivity and environmental integrity: a grand challenge for agriculture." Frontiers in Ecology and the Environment 3.1 (2005): 38–46.
Monbiot, George. 2022. "Regenesis: Feeding the World without Devouring the Planet."
Advances in Agroecology Book Series
Soil Organic Matter in Sustainable Agriculture (Advances in Agroecology) by Fred Magdoff and Ray R. Weil (Hardcover - May 27, 2004)
Agroforestry in Sustainable Agricultural Systems (Advances in Agroecology) by Louise E. Buck, James P. Lassoie, and Erick C.M. Fernandes (Hardcover - Oct 1, 1998)
Agroecosystem Sustainability: Developing Practical Strategies (Advances in Agroecology) by Stephen R. Gliessman (Hardcover - Sep 25, 2000)
Interactions Between Agroecosystems and Rural Communities (Advances in Agroecology) by Cornelia Flora (Hardcover - Feb 5, 2001)
Landscape Ecology in Agroecosystems Management (Advances in Agroecology) by Lech Ryszkowski (Hardcover - Dec 27, 2001)
Integrated Assessment of Health and Sustainability of Agroecosystems (Advances in Agroecology) by Thomas Gitau, Margaret W. Gitau, David Waltner-Toews, and Clive A. Edwards. June 2008 | Hardback: 978-1-4200-7277-8 (CRC Press)
Multi-Scale Integrated Analysis of Agroecosystems (Advances in Agroecology) by Mario Giampietro 2003 | Hardback: 978-0-8493-1067-6 (CRC Press)
Soil Tillage in Agroecosystems (Advances in Agroecology) edited by Adel El Titi 2002 | Hardback: 978-0-8493-1228-1 (CRC Press)
Tropical Agroecosystems (Advances in Agroecology) edited by John H. Vandermeer 2002 | Hardback: 978-0-8493-1581-7 (CRC Press)
Structure and Function in Agroecosystem Design and Management (Advances in Agroecology) edited by Masae Shiyomi, Hiroshi Koizumi 2001 | Hardback: 978-0-8493-0904-5 (CRC Press)
Biodiversity in Agroecosystems (Advances in Agroecology) edited by Wanda W. Collins, Calvin O. Qualset 1998 | Hardback: 978-1-56670-290-4 (CRC Press)
Sustainable Agroecosystem Management: Integrating Ecology, Economics and Society. (Advances in Agroecology) edited by Patrick J. Bohlen and Gar House 2009 | Hardback: 978-1-4200-5214-5 (CRC Press)
External links
Topic
Agroecology
Agroecology by Project Regeneration
International Agroecology Action Network
Spain
The 10 elements of Agroecology
Organisations
Agroecology Europe - A European association for Agroecology
Agroecology Map
One Million Voices of Agroecology
Courses
University of Wisconsin–Madison
Montpellier, France
University of Illinois at Urbana-Champaign
European Master Agroecology
Norwegian University of Life Sciences
UC Santa Cruz Center for Agroecology & Sustainable Food Systems
Anthropic principle
The anthropic principle, also known as the observation selection effect, is the hypothesis that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question of why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.
The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. The anthropic principle is often criticized for lacking falsifiability and therefore its critics may point out that the anthropic principle is a non-scientific concept, even though the weak anthropic principle, "conditions that are observed in the universe must allow the observer to exist", is "easy" to support in mathematics and philosophy (i.e., it is a tautology or truism). However, building a substantive argument based on a tautological foundation is problematic. Stronger variants of the anthropic principle are not tautologies and thus make claims considered controversial by some and that are contingent upon empirical verification.
Anthropic observations
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory.
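As an order-of-magnitude sketch of the large-number coincidence at issue (the exact definitions vary by author, and the values below are rough estimates rather than figures from the text above):

```latex
% Electric-to-gravitational force ratio between a proton and an electron:
\[
  N_1 = \frac{e^2}{4\pi\varepsilon_0 \, G \, m_p m_e} \approx 2 \times 10^{39}
\]
% Age of the universe measured in atomic light-crossing times (r_e / c):
\[
  N_2 = \frac{t_0}{e^2 / (4\pi\varepsilon_0 \, m_e c^3)} \approx 10^{40}
\]
% Dirac read N_1 ~ N_2 as evidence that G decreases with time; Dicke's
% counter-argument is that observers exist only when t_0 is of the order of a
% main-sequence stellar lifetime, so the near-equality holds "now" without any
% varying constants.
```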
Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.
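The scale of the discrepancy behind the "worst prediction in physics" remark can be sketched as follows, in natural units with hbar = c = 1; the numbers are rough, and the Planck-scale cutoff is a conventional assumption rather than a firm prediction:

```latex
% Observed dark-energy density versus a naive quantum-field-theory estimate
% with the vacuum-energy cutoff placed at the Planck scale:
\[
  \rho_\Lambda^{\mathrm{obs}} \sim 10^{-47}\ \mathrm{GeV}^4,
  \qquad
  \rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim M_{\mathrm{Pl}}^4
    \approx \left(1.2 \times 10^{19}\ \mathrm{GeV}\right)^4
    \sim 10^{76}\ \mathrm{GeV}^4
\]
% The mismatch is the roughly 120 orders of magnitude mentioned above:
\[
  \frac{\rho_{\mathrm{vac}}^{\mathrm{QFT}}}{\rho_\Lambda^{\mathrm{obs}}}
    \sim 10^{123}
\]
```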
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics.
Roger Penrose explained the weak form as follows:
One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?"
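One common way to formalize this selection effect, stated here as a standard assumption of anthropic reasoning rather than anything from Carter's paper, is Bayesian: across an ensemble of universes with a varying parameter C, what typical observers should expect to measure is the prior over C weighted by how many observers each value of C supports.

```latex
% Observation-selection weighting over a multiverse ensemble:
\[
  P(C \mid \text{observed}) \;\propto\; P(C) \, N_{\mathrm{obs}}(C)
\]
% P(C): prior probability that a universe has parameter value C.
% N_obs(C): expected number of observers in such a universe.
% Values of C with N_obs(C) = 0 are never observed, however large P(C) is.
```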
Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter, they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP.
Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'."
This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves.
"Observers are necessary to bring the Universe into being."
Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
"An ensemble of other different universes is necessary for the existence of our Universe."
By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes:
Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice.
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary.
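Schmidhuber's point can be illustrated with a toy calculation. The following sketch is purely illustrative and not drawn from his papers: the parameter, the uniform prior, and the compatibility threshold are all invented for the example. It conditions an assumed ensemble of universes on the existence of observers and shows that the resulting "prediction" is trivially certain, while any further prediction depends entirely on the assumed prior.

```python
# Toy illustration (all numbers hypothetical): anthropic conditioning as
# ordinary conditional probability over an assumed ensemble of universes.
import random

random.seed(0)

# Assumed prior: a parameter x uniform on [0, 1]; suppose observers can
# only arise when x > 0.9 (an invented compatibility condition).
universes = [random.random() for _ in range(100_000)]
compatible = [x for x in universes if x > 0.9]  # condition on our existence

# P(universe permits observers | we exist) = 1, trivially:
print(all(x > 0.9 for x in compatible))  # True

# But any nontrivial prediction (e.g. the expected value of x) depends
# entirely on the assumed prior over the ensemble, as Schmidhuber notes:
print(sum(compatible) / len(compatible))  # ~0.95 under this uniform prior
```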
Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe":
Character of anthropic reasoning
Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder.
Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.
The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle."
The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design.
Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate:
The absurd universe: Our universe just happens to be the way it is.
The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded.
The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist.
Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence.
The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind.
The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP).
The fake universe: Humans live inside a virtual reality simulation.
Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).
Clearly each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994).
The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links.
Observational evidence
No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following:
Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants;
Various theories for generating multiple universes will prove robust;
Evidence that the universe is fine tuned will continue to accumulate;
No life with a non-carbon chemistry will be discovered;
Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe.
Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life.
Probabilistic predictions of parameter values can be made given:
a particular multiverse with a "measure", i.e. a well-defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X is found in the range X0 < X < X0 + dX), and
an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
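As a concrete illustration of this weighting scheme, the sketch below computes a density proportional to N(X) P(X) on a grid. The prior P(X) and observer count N(X) are invented for the example and not derived from any cosmological model; the point is only that the most probable observed value need not coincide with the "perfectly tuned" peak of the prior.

```python
# Minimal sketch (hypothetical prior and observer model): anthropic
# weighting, P(observe X) proportional to N(X) * P(X), on a grid.
import numpy as np

X = np.linspace(0.0, 10.0, 1001)      # candidate values of parameter X
P = np.exp(-X)                        # assumed multiverse measure P(X)
N = np.where(X > 1.0, (X - 1.0) * np.exp(-X / 2.0), 0.0)  # assumed N(X)

posterior = N * P
posterior /= posterior.sum() * (X[1] - X[0])   # normalise to a density

# The most probable observed X (about 1.67 here) sits away from the
# prior's peak at X = 0: only as "tuned" as the observer count forces.
print(X[np.argmax(posterior)])
```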
One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle, unless there was some reason to think that that position was a necessary condition for our existence as observers.
Applications of the principle
The nucleosynthesis of carbon-12
Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance.
Cosmic inflation
Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require.
String theory
String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.
Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present.
Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe.
Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that, without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.
Dimensions of spacetime
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).
In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if N > 3, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.
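Ehrenfest's orbital-stability result is easy to check numerically. The sketch below is my own illustration, not taken from the cited papers: units are chosen so the unperturbed circular orbit has radius and speed 1, and the central force falls off as 1/r^(N-1), the analogue of Newtonian gravity with N spatial dimensions. A small velocity perturbation stays bounded for N = 3 but unbinds the orbit for N = 4.

```python
# Illustration (not from the source): a slightly perturbed circular orbit
# under a central force of magnitude 1/r**(N-1), gravity's analogue in N
# spatial dimensions. Bounded for N = 3; unbound for N = 4.
import numpy as np

def max_radius(N, kick=1.01, steps=20_000, dt=1e-3):
    pos = np.array([1.0, 0.0])
    vel = np.array([0.0, 1.0]) * kick   # 1% faster than circular speed
    r_max = 0.0
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -pos / r**N               # |acc| = 1/r**(N-1), pointing inward
        vel += acc * dt                 # semi-implicit Euler step
        pos += vel * dt
        r_max = max(r_max, np.linalg.norm(pos))
    return r_max

print(max_radius(3))  # stays near 1: a slightly elliptical, bound orbit
print(max_radius(4))  # keeps growing: the perturbed orbit escapes
```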
On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3+1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10²¹ solar masses, due to the small positivity of the cosmological constant observed.
In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.
Metaphysical interpretations
Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the older notion of creatio continua. From a strictly secular, humanist perspective, the principle also allows human beings to be put back at the center, an anthropogenic shift in cosmology. Karl W. Giberson has laconically stated that
William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point.
The anthropic cosmological principle
A thorough extant study of the anthropic principle is the book The Anthropic Cosmological Principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way.
The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks.
Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.
Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas.
In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP):
Reception and controversies
Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects.
A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another.
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.
Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa.
Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc.
Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe.
The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours:
References
Stenger, Victor J. (1999), "Anthropic design", Skeptical Inquirer 23 (August 31, 1999): 40–43.
Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Hájek, L. Valdés and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 12th International Congress of the LMPS. London: King's College Publications, pp. 441–473.
External links
Nick Bostrom: web site devoted to the anthropic principle.
Friederich, Simon. Fine-tuning, review article of the discussion about fine-tuning, highlighting the role of the anthropic principles.
Gijsbers, Victor. (2000). Theistic anthropic principle refuted – Positive atheism magazine.
Chown, Marcus, Anything Goes, New scientist, 6 June 1998. On Max Tegmark's work.
Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning Kavli-CERCA conference video archive.
Sober, Elliott R. 2009, "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads." Philosophical Studies, 2009, 143: 63–90.
"Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection.
Leonard Susskind and Lee Smolin debate the anthropic principle.
Debate among scientists on arxiv.org.
Evolutionary probability and fine tuning
Benevolent design and the anthropic principle at MathPages
Critical review of "The privileged planet"
The anthropic principle – a review.
Berger, Daniel, 2002, "An impertinent résumé of The Anthropic Cosmological Principle." A critique of Barrow & Tipler.
Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power.
Paul Davies: Cosmic jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007.
Astronomical hypotheses
Concepts in epistemology
Physical cosmology
Principles
Religion and science
Ecological engineering

Ecological engineering uses ecology and engineering to predict, design, construct or restore, and manage ecosystems that integrate "human society with its natural environment for the benefit of both".
Origins, key concepts, definitions, and applications
Ecological engineering emerged as a new idea in the early 1960s, but its definition has taken several decades to refine. Its implementation is still undergoing adjustment, and its broader recognition as a new paradigm is relatively recent. Ecological engineering was introduced by Howard Odum and others as utilizing natural energy sources as the predominant input to manipulate and control environmental systems. The origins of ecological engineering are in Odum's work with ecological modeling and ecosystem simulation to capture holistic macro-patterns of energy and material flows affecting the efficient use of resources.
Mitsch and Jorgensen summarized five basic concepts that differentiate ecological engineering from other approaches to addressing problems to benefit society and nature: 1) it is based on the self-designing capacity of ecosystems; 2) it can be the field (or acid) test of ecological theories; 3) it relies on system approaches; 4) it conserves non-renewable energy sources; and 5) it supports ecosystem and biological conservation.
Mitsch and Jorgensen were the first to define ecological engineering as designing societal services such that they benefit society and nature, and later noted the design should be systems based, sustainable, and integrate society with its natural environment.
Bergen et al. defined ecological engineering as: 1) utilizing ecological science and theory; 2) applying to all types of ecosystems; 3) adapting engineering design methods; and 4) acknowledging a guiding value system.
Barrett (1999) offers a more literal definition of the term: "the design, construction, operation and management (that is, engineering) of landscape/aquatic structures and associated plant and animal communities (that is, ecosystems) to benefit humanity and, often, nature." Barrett continues: "other terms with equivalent or similar meanings include ecotechnology and two terms most often used in the erosion control field: soil bioengineering and biotechnical engineering. However, ecological engineering should not be confused with 'biotechnology' when describing genetic engineering at the cellular level, or 'bioengineering' meaning construction of artificial body parts."
The applications in ecological engineering can be classified into 3 spatial scales: 1) mesocosms (~0.1 to hundreds of meters); 2) ecosystems (~one to tens of km); and 3) regional systems (>tens of km). The complexity of the design likely increases with the spatial scale. Applications are increasing in breadth and depth, and likely impacting the field's definition, as more opportunities to design and use ecosystems as interfaces between society and nature are explored. Implementation of ecological engineering has focused on the creation or restoration of ecosystems, from degraded wetlands to multi-celled tubs and greenhouses that integrate microbial, fish, and plant services to process human wastewater into products such as fertilizers, flowers, and drinking water. Applications of ecological engineering in cities have emerged from collaboration with other fields such as landscape architecture, urban planning, and urban horticulture, to address human health and biodiversity, as targeted by the UN Sustainable Development Goals, with holistic projects such as stormwater management. Applications of ecological engineering in rural landscapes have included wetland treatment and community reforestation through traditional ecological knowledge. Permaculture is an example of broader applications that have emerged as distinct disciplines from ecological engineering, where David Holmgren cites the influence of Howard Odum in development of permaculture.
Design guidelines, functional classes, and design principles
Ecological engineering design will combine systems ecology with the process of engineering design. Engineering design typically involves problem formulation (goal), problem analysis (constraints), alternative solutions search, decision among alternatives, and specification of a complete solution. A temporal design framework is provided by Matlock et al., stating the design solutions are considered in ecological time. In selecting between alternatives, the design should incorporate ecological economics in design evaluation and acknowledge a guiding value system which promotes biological conservation, benefiting society and nature.
Ecological engineering utilizes systems ecology with engineering design to obtain a holistic view of the interactions within and between society and nature. Ecosystem simulation with Energy Systems Language (also known as energy circuit language or energese) by Howard Odum is one illustration of this systems ecology approach. This holistic model development and simulation defines the system of interest, identifies the system's boundary, and diagrams how energy and material moves into, within, and out of, a system in order to identify how to use renewable resources through ecosystem processes and increase sustainability. The system it describes is a collection (i.e., group) of components (i.e., parts), connected by some type of interaction or interrelationship, that collectively responds to some stimulus or demand and fulfills some specific purpose or function. By understanding systems ecology the ecological engineer can more efficiently design with ecosystem components and processes within the design, utilize renewable energy and resources, and increase sustainability.
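As a toy example of this stock-and-flow way of thinking, the following sketch simulates a single energy storage fed by a renewable source, the elementary unit that Energy Systems Language diagrams compose into whole-ecosystem models. It is a minimal invented model, not one of Odum's published simulations, and all rates are hypothetical.

```python
# Minimal sketch (invented rates): one energy storage driven by a renewable
# source with first-order dissipative loss -- the elementary stock-and-flow
# unit composed in energy-circuit-style ecosystem models.
def simulate(source=10.0, uptake=0.3, loss=0.1, steps=1000, dt=0.1):
    storage = 0.0
    for _ in range(steps):
        inflow = uptake * source    # energy captured from the source
        outflow = loss * storage    # dissipation from the storage
        storage += (inflow - outflow) * dt
    return storage

# The storage saturates near uptake * source / loss = 30 energy units:
# the renewable inflow bounds what the designed system can sustain.
print(simulate())
```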
Mitsch and Jorgensen identified five Functional Classes for ecological engineering designs:
Ecosystem utilized to reduce/solve pollution problem. Example: phytoremediation, wastewater wetland, and bioretention of stormwater to filter excess nutrients and metals pollution
Ecosystem imitated or copied to address resource problem. Example: forest restoration, replacement wetlands, and installing street side rain gardens to extend canopy cover to optimize residential and urban cooling
Ecosystem recovered after disturbance. Example: mine land restoration, lake restoration, and channel aquatic restoration with mature riparian corridors
Ecosystem modified in ecologically sound way. Example: selective timber harvest, biomanipulation, and introduction of predator fish to reduce planktivorous fish, increase zooplankton, consume algae or phytoplankton, and clarify the water.
Ecosystems used for benefit without destroying balance. Example: sustainable agro-ecosystems, multispecies aquaculture, and introducing agroforestry plots into residential property to generate primary production at multiple vertical levels.
Mitsch and Jorgensen identified 19 Design Principles for ecological engineering, yet not all are expected to contribute to any single design:
Ecosystem structure & function are determined by forcing functions of the system;
Energy inputs to the ecosystems and available storage of the ecosystem are limited;
Ecosystems are open and dissipative systems (not thermodynamic balance of energy, matter, entropy, but spontaneous appearance of complex, chaotic structure);
Attention to a limited number of governing/controlling factors is most strategic in preventing pollution or restoring ecosystems;
Ecosystems have some homeostatic capability that results in smoothing out and depressing the effects of strongly variable inputs;
Match recycling pathways to the rates of ecosystems and reduce pollution effects;
Design for pulsing systems wherever possible;
Ecosystems are self-designing systems;
Processes of ecosystems have characteristic time and space scales that should be accounted for in environmental management;
Biodiversity should be championed to maintain an ecosystem's self design capacity;
Ecotones, transition zones, are as important for ecosystems as membranes for cells;
Coupling between ecosystems should be utilized wherever possible;
The components of an ecosystem are interconnected, interrelated, and form a network; consider direct as well as indirect effects of ecosystem development;
An ecosystem has a history of development;
Ecosystems and species are most vulnerable at their geographical edges;
Ecosystems are hierarchical systems and are parts of a larger landscape;
Physical and biological processes are interactive; it is important to know both physical and biological interactions and to interpret them properly;
Eco-technology requires a holistic approach that integrates all interacting parts and processes as far as possible;
Information in ecosystems is stored in structures.
Mitsch and Jorgensen identified the following considerations prior to implementing an ecological engineering design:
Create a conceptual model to determine the parts of nature connected to the project;
Implement a computer model to simulate the impacts and uncertainty of the project (see the sketch after this list);
Optimize the project to reduce uncertainty and increase beneficial impacts.
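A minimal illustration of the second step (simulating the impacts and uncertainty of the project) is the Monte Carlo sketch below. All quantities are hypothetical: the wetland area, nutrient inflow, and removal-rate distribution are invented for the example.

```python
# Minimal sketch (hypothetical numbers): Monte Carlo propagation of
# parameter uncertainty for a constructed-wetland nutrient removal project.
import random

random.seed(1)

def removal_kg_per_year(area_ha, rate, inflow):
    # Removal scales with area but cannot exceed the nutrient inflow.
    return min(max(area_ha * rate, 0.0), inflow)

inflow = 500.0                           # kg N/yr entering the wetland
samples = sorted(
    removal_kg_per_year(3.0, random.gauss(120.0, 30.0), inflow)
    for _ in range(10_000)               # uncertain rate, kg N/ha/yr
)

mean = sum(samples) / len(samples)
print(mean)                              # expected annual removal
print(samples[500], samples[9500])       # roughly a 90% interval
```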
Academic curriculum (colleges)
An academic curriculum has been proposed for ecological engineering, and institutions around the world are starting programs. Key elements of this curriculum are: environmental engineering; systems ecology; restoration ecology; ecological modeling; quantitative ecology; economics of ecological engineering, and technical electives. The world's first B.S. Ecological Engineering program was formalized in 2009 at Oregon State University.
Complementing this set of courses are prerequisite courses in physical, biological, and chemical subject areas, and integrated design experiences. According to Matlock et al., the design should identify constraints, characterize solutions in ecological time, and incorporate ecological economics in design evaluation. Economics of ecological engineering has been demonstrated using energy principles for a wetland, and using nutrient valuation for a dairy farm (C. Pizarro et al., "An Economic Assessment of Algal Turf Scrubber Technology for Treatment of Dairy Manure Effluent", Ecological Engineering 26(12): 321–327).
See also
Afforestation
Agroecology
Agroforestry
Analog forestry
Biomass (ecology)
Buffer strip
Constructed wetland
Energy-efficient landscaping
Environmental engineering
Forest farming
Forest gardening
Great Green Wall
Great Plains Shelterbelt (1934- )
Great Plan for the Transformation of Nature - an example of applied ecological engineering in the 1940s and 1950s
Hedgerow
Home gardens
Human ecology
Macro-engineering
Sand fence
Seawater greenhouse
Sustainable agriculture
Terra preta
Three-North Shelter Forest Program
Wildcrafting
Windbreak
Literature
Howard T. Odum (1963), "Man and Ecosystem", Proceedings, Lockwood Conference on the Suburban Forest and Ecology, in: Bulletin of the Connecticut Agricultural Station.
W.J. Mitsch (1993), "Ecological engineering—a cooperative role with the planetary life-support systems", Environmental Science & Technology 27:438–445.
H.D. van Bohemen (2004), Ecological Engineering and Civil Engineering works, Doctoral thesis TU Delft, The Netherlands.
External links
What is "ecological engineering"? Webtext, Ecological Engineering Group, 2007.
Ecological Engineering webtext by Howard T. Odum Center for Wetlands at the University of Florida, 2007.
Organizations
American Ecological Engineering Society, homepage.
Ecological Engineering Student Society Website, EESS, Oregon State University, 2011.
American Society of Professional Wetland Engineers, homepage, wiki.
Ecological Engineering Group, homepage.
International Ecological Engineering Society homepage.
Scientific journals
Ecological Engineering since 1992, with a general description of the field.
Landscape and Ecological Engineering since 2005.
Journal of Ecological Engineering Design. Officially launched in 2021, this journal offers a diamond open access format (free to the reader, free to the authors). This is the official journal of the American Ecological Engineering Society, with production support from the University of Vermont Libraries.
Ecological restoration
Environmental terminology
Environmental engineering
Environmental social science
Engineering disciplines
Climate change policy
Gender and development

Gender and development is an interdisciplinary field of research and applied study that implements a feminist approach to understanding and addressing the disparate impact that economic development and globalization have on people based upon their location, gender, class background, and other socio-political identities. A strictly economic approach to development views a country's development in quantitative terms such as job creation, inflation control, and high employment – all of which aim to improve the ‘economic wellbeing’ of a country and the subsequent quality of life for its people. In terms of economic development, quality of life is defined as access to necessary rights and resources including but not limited to quality education, medical facilities, affordable housing, clean environments, and low crime rate. Gender and development considers many of these same factors; however, gender and development emphasizes efforts towards understanding how multifaceted these issues are in the entangled context of culture, government, and globalization. Accounting for this need, gender and development implements ethnographic research, research that studies a specific culture or group of people by physically immersing the researcher into the environment and daily routine of those being studied, in order to comprehensively understand how development policy and practices affect the everyday life of targeted groups or areas.
The history of this field dates back to the 1950s, when studies of economic development first brought women into its discourse, focusing on women only as subjects of welfare policies – notably those centered on food aid and family planning. The focus of women in development increased throughout the decade, and by 1962, the United Nations General Assembly called for the Commission on the Status of Women to collaborate with the Secretary General and a number of other UN sectors to develop a longstanding program dedicated to women's advancement in developing countries. A decade later, feminist economist Ester Boserup’s pioneering book Women’s Role in Economic Development (1970) was published, radically shifting perspectives of development and contributing to the birth of what eventually became the gender and development field.
Since Boserup first argued that development affects men and women differently, the study of gender's relation to development has gathered major interest amongst scholars and international policymakers. The field has undergone major theoretical shifts, beginning with Women in Development (WID), shifting to Women and Development (WAD), and finally becoming the contemporary Gender and Development (GAD). Each of these frameworks emerged as an evolution of its predecessor, aiming to encompass a broader range of topics and social science perspectives. In addition to these frameworks, international financial institutions such as the World Bank and the International Monetary Fund (IMF) have implemented policies, programs, and research regarding gender and development, contributing a neoliberal and smart economics approach to the study. Examples of these policies and programs include Structural Adjustment Programs (SAPs), microfinance, outsourcing, and privatizing public enterprises, all of which direct focus towards economic growth and suggest that advancement towards gender equality will follow. These approaches have been challenged by alternative perspectives such as Marxism and ecofeminism, which respectively reject international capitalism and the gendered exploitation of the environment via science, technology, and capitalist production. Marxist perspectives of development advocate for the redistribution of wealth and power in efforts to reduce global labor exploitation and class inequalities, while ecofeminist perspectives confront industrial practices that accompany development, including deforestation, pollution, environmental degradation, and ecosystem destruction.
Gender Roles in Childhood Development
Introduction
Gender identity formation in early childhood is an important aspect of child development, shaping how individuals see themselves and others in terms of gender (Martin & Ruble, 2010). It encompasses the understanding and internalization of societal norms, roles, and expectations associated with a specific gender. As time progresses, new media create more outlets through which these gender roles can be influenced. This developmental process begins early and is influenced by various factors, including socialization, cultural norms, and individual experiences. Understanding and addressing gender roles in childhood is essential for promoting healthy identity development and fostering gender equity (Martin & Ruble, 2010).
Observations of Gender Identity Formation
Educators have made abundant observations regarding children's expression of gender identity. From an early age, children absorb information about gender from various sources, including family, peers, media, and societal norms (Halim, Ruble, Tamis-LeMonda, & Shrout, 2010). These influences shape their perceptions and behaviors related to gender, leading them to either conform to or challenge gender stereotypes. For example, children may exhibit preferences for certain toys, activities, or clothing based on societal expectations associated with their perceived gender, often because those items were handed to them or endorsed by an authority figure, establishing a baseline.
Teacher Research
Teacher research plays a crucial role in understanding gender roles in childhood development. Educators are often able to observe patterns in children's behavior that reflect societal gender norms, such as boys gravitating towards rough play or girls engaging in nurturing activities (Solomon, 2016). These observations prompt further investigation into the factors contributing to these behaviors, including classroom materials, teacher expectations, and social interactions. By examining these factors, educators can gain insights into how gender stereotypes are perpetuated and explore strategies to promote gender equity in the classroom. Because teachers have the educational background to recognize these developmental patterns, they are well positioned to conduct research on this subject.
Influence of Materials and Teacher Expectations
The materials provided in the classroom and the expectations established by teachers can influence children's behavior and interactions (Solomon, 2016). For instance, offering a diverse range of toys, books, and activities can encourage children to explore interests beyond the traditional gender roles promoted by external sources (Martin & Ruble, 2013). Additionally, creating an environment where all children feel valued regardless of gender can help challenge stereotypes and promote healthy socialization experiences. By being aware of the materials and messages conveyed in the classroom, educators can create an environment that fosters gender diversity and empowers children to express themselves authentically (Solomon, 2016).
Children's Desire and Search for Power
Children actively seek and express power in interactions with others, often drawing upon their understanding of gender ideals. For example, they may use knowledge of gender norms to assert authority or control over others, such as excluding peers from a game on the basis of a gender stereotype, for instance the notion that girls cannot play sports or games that involve rough play. These behaviors reflect children's attempts to navigate social hierarchies and establish identities within the context of societal expectations. By recognizing and addressing these dynamics, educators can promote more inclusive and equitable interactions among children.
Early Acquisition of Gender Roles
Children begin to internalize gender roles from a young age, often as early as infancy. By preschool age, many children have developed some understanding of gender stereotypes and expectations (King, 2021). These stereotypes are established through various sources, including family, friends, media outlets, and cultural ideals, shaping children's understanding and behaviors related to gender. Education systems, parental influence, media, and retail marketing all contribute, as many of these influences associate particular colors, influential figures, and toys with a specific gender.
Expressions and Behavior Reflecting Gender Development
Children's expressions provide insights into their changing understanding of gender roles and relationships. However, children must also develop processes of emotional regulation for situations that require adjusting emotional responses of greater intensity (Sanchis et al., 2020). Some children can develop rigid understandings of gender stereotypes, showing bias or discrimination towards those who do not conform to these norms. Educators play a role in counteracting these beliefs by providing opportunities for reflection and promoting empathy and respect for diverse gender identities (Martin & Ruble, 2010).
Educational Strategies
In conclusion, promoting gender equity and challenging traditional gender roles in early childhood requires intentional educational strategies. These include implementing multi-gendered activities, presenting diverse role models, and offering open-ended materials that encourage creativity (Martin & Ruble, 2010). By creating inclusive learning environments that affirm and celebrate gender diversity, educators can support children in developing healthy and positive identities that transcend narrow stereotypes and promote social justice.
Early approaches
Women in development (WID)
Theoretical approach
The term “women in development” was originally coined by a Washington-based network of female development professionals in the early 1970s who sought to question existing trickle-down theories of development by contesting the assumption that economic development had identical impacts on men and women. The Women in Development movement (WID) gained momentum in the 1970s, driven by the resurgence of women's movements in developed countries, and particularly through liberal feminists striving for equal rights and labour opportunities in the United States. Liberal feminism, postulating that women's disadvantages in society may be eliminated by breaking down customary expectations of women by offering better education to women and introducing equal opportunity programmes, had a notable influence on the formulation of the WID approaches.
The focus of the 1970s feminist movements and their repeated calls for employment opportunities in the development agenda meant that particular attention was given to the productive labour of women, leaving aside reproductive concerns and social welfare. This approach was pushed forward by WID advocates, reacting to the general policy environment maintained by early colonial authorities and post-war development authorities, wherein inadequate reference was made to the work undertaken by women as producers, as they were almost solely identified by their roles as wives and mothers. The WID's opposition to this “welfare approach” was in part motivated by the work of Danish economist Ester Boserup in the early 1970s, who challenged the assumptions of the said approach and highlighted the role played by women in agricultural production and the economy.
Reeves and Baden (2000) point out that the WID approach stresses the need for women to play a greater role in the development process. According to this perspective, women's active involvement in policymaking will lead to more successful policies overall. Thus, a dominant strand of thinking within WID sought to link women's issues with development, highlighting how such issues acted as impediments to economic growth; this “relevance” approach stemmed from the experience of WID advocates which illustrated that it was more effective if demands of equity and social justice for women were strategically linked to mainstream development concerns, in an attempt to have WID policy goals taken up by development agencies. The Women in Development approach was the first contemporary movement to specifically integrate women in the broader development agenda and acted as the precursor to later movements such as the Women and Development (WAD), and ultimately, the Gender and Development approach, departing from some of the criticized aspects imputed to the WID.
Criticism
The WID movement faced a number of criticisms. The approach had in some cases the unwanted consequence of depicting women as a unit whose claims are conditional on their productive value, associating increased female status with the value of cash income in women's lives. The WID view and similar classifications based on Western feminism applied a general definition to the status, experiences, and contributions of women, and to the solutions for women, in Third World countries. Furthermore, although WID advocated for greater gender equality, it did not tackle the unequal gender relations and roles at the root of women's exclusion and gender subordination, nor did it address the stereotyped expectations held by men. Moreover, the underlying assumption behind the call for the integration of Third World women into their national economies was that women were not already participating in development, which downplayed women's roles in household production and informal economic and political activities. WID was also criticized for its view that women's status would improve by moving into “productive employment”, implying that the move to the “modern sector” needed to be made from the “traditional” sector to achieve self-advancement, and further implying that the “traditional” work roles often occupied by women in the developing world were inhibiting to self-development.
Women and development (WAD)
Women and development (WAD) is a theoretical and practical approach to development. It was introduced into gender studies scholarship in the second half of the 1970s; its origins can be traced to the First World Conference on Women in Mexico City in 1975, organized by the UN. It is a departure from the previously predominant theory, WID (Women in Development), with which it is often confused, but it has many distinct characteristics.
Theoretical approach
WAD arose out of a shift in thinking about women's role in development, and out of concerns about the explanatory limitations of modernization theory. While previous thinking held that development was a vehicle to advance women, new ideas suggested that development was only made possible by the involvement of women, and that rather than being passive recipients of development aid, they should be actively involved in development projects. WAD took this thinking a step further and suggested that women have always been an integral part of development, and did not suddenly appear in the 1970s as a result of exogenous development efforts. The WAD approach calls for women-only development projects, theorized to remove women from the patriarchal hegemony that would persist if women participated in development alongside men in a patriarchal culture, though this concept has been heavily debated by theorists in the field. In this sense, WAD is differentiated from WID by the theoretical framework upon which it was built. Rather than focusing specifically on women's relationship to development, WAD focuses on the relationship between patriarchy and capitalism. The theory seeks to understand women's issues from the perspectives of neo-Marxism and dependency theory, though much of the theorizing about WAD remains undocumented due to the persistent and pressing nature of the development work in which many WAD theorists engage.
Practical approach
The WAD paradigm stresses the relationship between women and the work they perform in their societies as economic agents in both the public and domestic spheres. It also emphasizes the distinctive nature of the roles women play in the maintenance and development of their societies, with the understanding that the mere integration of women into development efforts would serve to reinforce the existing structures of inequality present in societies dominated by patriarchal interests. In general, WAD is thought to offer a more critical conceptualization of women's position than WID.
The WAD approach emphasizes the distinctive nature of women's knowledge, work, goals, and responsibilities, as well as advocating for the recognition of their distinctiveness. This fact, combined with a recognized tendency for development agencies to be dominated by patriarchal interests, is at the root of the women-only initiatives introduced by WAD subscribers.
Criticism
Some of the common critiques of the WAD approach include concerns that women-only development projects would struggle, or ultimately fail, due to their scale and the marginalized status of these women. Furthermore, the WAD perspective tends to view women as a class, paying little attention to differences among women (such as those highlighted by the feminist concept of intersectionality), including race and ethnicity, and to prescribe development endeavors that may only address the needs of a particular group. While an improvement on WID, WAD fails to fully consider the relationships between patriarchy, modes of production, and the marginalization of women. It also presumes that the position of women around the world will improve when international conditions become more equitable. Additionally, WAD has been criticized for its singular preoccupation with the productive side of women's work, ignoring the reproductive aspect of women's work and lives. WID/WAD intervention strategies have therefore tended to concentrate on developing income-generating activities without taking into account the time burdens that such strategies place on women. Value is placed on income-generating activities, and none is ascribed to social and cultural reproduction.
Gender and development (GAD)
Theoretical approach
The Gender and Development (GAD) approach focuses on the socially constructed differences between men and women, the need to challenge existing gender roles and relations, and the creation and effects of class differences on development. The approach was majorly influenced by the writings of academic scholars such as Oakley (1972) and Rubin (1975), who argue that the social relations between men and women have systematically subordinated women, along with economists Lourdes Benería and Amartya Sen (1981), who assess the impact of colonialism on development and gender inequality. They state that colonialism imposed more than a 'value system' upon developing nations; it introduced a system of economics 'designed to promote capital accumulation which caused class differentiation'.
GAD departs from WID, which discussed women's subordination and lack of inclusion in discussions of international development without examining broader systems of gender relations. Influenced by this work, by the late 1970s, some practitioners working in the development field questioned focusing on women in isolation. GAD challenged the WID focus on women as an important ‘target group’ and ‘untapped resources’ for development. GAD marked a shift in thinking about the need to understand how women and men are socially constructed and how ‘those constructions are powerfully reinforced by the social activities that both define and are defined by them.’ GAD focuses primarily on the gendered division of labor and gender as a relation of power embedded in institutions. Consequently, two major frameworks, ‘Gender roles’ and ‘social relations analysis’, are used in this approach. 'Gender roles' focuses on the social construction of identities within the household; it also reveals the expectations from ‘maleness and femaleness’ in their relative access to resources. 'Social relations analysis' exposes the social dimensions of hierarchical power relations embedded in social institutions, as well as its determining influence on ‘the relative position of men and women in society.’ This relative positioning tends to discriminate against women.
Unlike WID, the GAD approach is not concerned specifically with women, but with the way in which a society assigns roles, responsibilities and expectations to both women and men. GAD applies gender analysis to uncover the ways in which men and women work together, presenting results in neutral terms of economics and efficiency. In an attempt to create gender equality (denoting women having the same opportunities as men, including the ability to participate in the public sphere), GAD policies aim to redefine traditional gender role expectations. Women are expected to fulfill household management tasks and home-based production, as well as to bear and raise children and care for family members. Children develop such social constructions through observation at a younger age than is commonly assumed: they tend to learn the differences between male and female actions and objects of use in the specific culture of their environment by observing those around them (Chung & Huang, 2021). Around three years old, children learn about the stability of gender and demonstrate stereotyping similar to adults regarding toys, clothes, activities, games, colors, and even specific personality descriptions (Chung & Huang, 2021). By five years of age, they begin to develop identity and to hold stereotypes of personal–social attributes; at that age, children think that they are more similar to their same-gender peers and are likely to compare themselves with characteristics that fit the gender stereotype. After entering primary school, children's gender stereotyping extends to more dimensions, such as career choices, sports, and motives for learning particular subjects, which affects individual cognition (Chung & Huang, 2021). The role of a wife is largely interpreted as 'the responsibilities of motherhood.' Men, however, are expected to be breadwinners, associated with paid work and market production. In the labor market, women tend to earn less than men: 'a study by the Equality and Human Rights Commission found massive pay inequities in some United Kingdom's top finance companies; women received around 80 percent less performance-related pay than their male colleagues.' In response to pervasive gender inequalities, the Beijing Platform for Action established gender mainstreaming in 1995 as a strategy across all policy areas at all levels of governance for achieving gender equality.
GAD has been widely used in debates on development, but this trend is not reflected in the actual practice of development agencies and plans for development. Caroline Moser claims WID persists due to the challenging nature of GAD, but Shirin M. Rai counters that the real issue lies in the tendency for policy to conflate WID and GAD; on this view, genuine change would only be possible if development agencies adopted GAD language exclusively. Caroline Moser developed the Moser Gender Planning Framework for GAD-oriented development planning in the 1980s while working at the Development Planning Unit of the University of London. Working with Caren Levy, she expanded it into a methodology for gender policy and planning.
The Moser framework follows the Gender and Development approach in emphasizing the importance of gender relations.
As with the WID-based Harvard Analytical Framework, it includes a collection of quantitative empirical facts. Going further, it investigates the reasons and processes that lead to conventions of access and control.
The Moser Framework includes identification of gender roles, assessment of gender needs, disaggregation of control over resources and decision-making within the household, planning for balancing work and household responsibilities, distinction between different aims in interventions, and the involvement of women and gender-aware organizations in planning.
Criticism
GAD has been criticized for emphasizing the social differences between men and women while neglecting the bonds between them and the potential for changes in roles. Another criticism is that GAD does not dig deeply enough into social relations and so may not explain how these relations can undermine programs directed at women. It also does not uncover the types of trade-offs that women are prepared to make for the sake of achieving their ideals of marriage or motherhood. Furthermore, while the GAD perspective is theoretically distinct from WID, in practice programs seem to contain elements of both. Although many development agencies are now committed to a gender approach, in practice the primary institutional perspective remains focused on a WID approach; specifically, the language of GAD has been incorporated into WID programs. In practice, gender mainstreaming often slips into a single normative perspective that treats gender as synonymous with women, and development agencies still advance gender transformation to mean economic betterment for women. A further criticism of GAD is its insufficient attention to culture, in response to which a new framework has been offered: Women, Culture and Development (WCD). This framework, unlike GAD, would not look at women as victims but would instead evaluate the lives of Third World women through the context of the language and practice of gender, the Global South, and culture.
Neoliberal approaches
Gender and neoliberal development institutions
Neoliberalism consists of policies that privatize public industry, deregulate markets by removing laws or policies that interfere with the free flow of the market, and cut back on social services. These policies were often introduced to many low-income countries through structural adjustment programs (SAPs) by the World Bank and the International Monetary Fund (IMF). Neoliberalism was cemented as the dominant global policy framework in the 1980s and 1990s. Among development institutions, gender issues have increasingly become part of economic development agendas, as the example of the World Bank shows. Awareness among international organizations of the need to address gender issues has evolved over the past decades. The World Bank, regional development banks, donor agencies, and government ministries have provided many examples of instrumental arguments for gender equality, for instance by emphasizing the importance of women's education as a way of increasing productivity in the household and the market. Their concerns have often focused on women's contributions to economic growth rather than on women's education as a means of empowering women and enhancing their capabilities. The World Bank, for example, started focusing on gender in 1977 with the appointment of its first Women in Development Adviser. In 1984 the Bank mandated that its programs consider women's issues. In 1994 the Bank issued a policy paper on Gender and Development, reflecting current thinking on the subject. This policy aims to address the policy and institutional constraints that maintain disparities between the genders and thus limit the effectiveness of development programs. Thirty years after the appointment of the first Women in Development Adviser, a so-called Gender Action Plan was launched to underline the importance of the topic within development strategies and to introduce the new Smart Economics strategy.
Gender mainstreaming, mandated by the 1995 Beijing Platform for Action, integrates gender into all aspects of policy development on gender equality. The World Bank's Gender Action Plan of 2007–10 built upon the Bank's gender mainstreaming strategy for gender equality; its objective was to advance women's economic empowerment through their participation in land, labor, financial, and product markets. In 2012, the World Development Report was for the first time in the series devoted to gender equality and development. Florika Fink-Hooijer, head of the European Commission's Directorate-General for European Civil Protection and Humanitarian Aid Operations, introduced cash-based aid as well as gender- and age-sensitive aid.
One argument about international financial institutions such as the International Monetary Fund (IMF) and the World Bank is that they support capitalist ideals through their promotion of economic growth in countries globally and of participation in the global economy and capitalist systems. Criticism of neoliberal development institutions also addresses the role of banks as institutions and their creation of a new workers' economy reflecting neoliberal development ideals. Another critique is that markets and institutions contribute to the creation of policies and aid with gendered outcomes. The European Bank for Reconstruction and Development, for instance, has been accused of creating a neoliberal dominance that continues the construction and reconstruction of gender norms by treating women as a homogeneous category rather than addressing the gender disparities within its policies.
Gender and outsourcing
One of the features of development encouraged in neoliberal approaches is outsourcing: companies from the Western world moving some of their business to another country, often to take advantage of cheap labor costs. Although outsourcing is a business decision, it is directly related to gender because women make up most of the workforce hired for these low-wage jobs, and gender shapes why they are hired.
One popular destination for relocated factories is China. In China the main people who work in these factories are women, who move from their home towns to distant cities for the factory jobs. These women move in order to earn a wage with which to support not only themselves but their families as well; oftentimes they are expected to take these jobs.
Another country to which the garment industry outsources work is Bangladesh, which according to ILO data has one of the lowest costs of labor among third world countries. With low labor costs comes poor compliance with labor standards in the factories. Factory workers in Bangladesh can experience several types of violations of their rights, including long working hours with no choice but to work overtime, deductions from wages, and dangerous and unsanitary working conditions.
Although discussions of outsourcing rarely address its effects on women, women experience its consequences daily. Women in countries and areas where they previously had little opportunity to work and earn their own income now have the chance to provide for themselves and their children. Gender comes to the fore because unemployment can be a particular threat to women: without jobs and their own income, women may fall victim to discrimination or abuse. The ability to earn their own income is very valuable to many women, and outsourcing gives women in regions where jobs are scarce the opportunity to obtain them. Factory owners often note how many women want the jobs they have to offer.
With the availability of jobs and their apparent benefits comes concern about the working conditions in these outsourced jobs. Even when women acquire a job, the working conditions may be neither safe nor ideal. As mentioned above, the jobs are in extreme demand because employment opportunities are so limited in certain regions, which fosters the idea of women being disposable in the workplace. As a result, the workers in these factories have little room to complain and cannot expect safe working conditions in their work environments. Women must move far from their hometowns and families to work at these factory jobs; the hours are long, and because they are away from home they typically move into dormitories and live at their jobs.
Gender and microfinance
Women have been identified by some development institutions as a key to successful development, for example through financial inclusion. Microcredit is the practice of giving small loans to people in poverty without requiring collateral; it was pioneered by Muhammad Yunus, who founded the Grameen Bank in Bangladesh. Studies have shown that women are more likely to repay their debt than men, and the Grameen Bank accordingly focuses on aiding women. This financial opportunity allows women to start their own businesses for a steady income. Microcredit has focused on women both for their subsequently increased status and because the overall well-being of the home improves more when loans are given to women rather than men.
Numerous case studies in Tanzania have examined the correlation between the role of SACCoS (savings and credit cooperative organizations) and the economic development of the country. The research showed that the microfinance policies were not being carried out in the most efficient ways, owing to exploitation. One case study went a step further, claiming that this financial service could provide a more equal society for women in Tanzania.
While there are cases in which women were able to lift themselves out of poverty, there are also cases in which women fell into a poverty trap because they were unable to repay their loans. Microcredit has even been called an "anti-developmental" approach. There is little evidence of significant development for these women in the roughly 30 years that microfinance has existed. In South Africa, unemployment is high following the introduction of microfinance, more so than it was under apartheid. Microcredit intensified poverty in Johannesburg, South Africa, as poor communities, mostly women, who needed to repay debt were forced to work in the informal sector.
Some arguments that microcredit is not effective insist that the structure of the economy, with large informal and agricultural sectors, does not provide a system in which borrowers can be successful. In Nigeria, where the informal economy makes up approximately 45–60% of the economy, women working within it could not gain access to microcredit because of the high demand for loans triggered by high unemployment in the formal sector. The study found that Nigerian women are forced into "the hustle" and the heightened risk of the informal economy, which is unpredictable and contributes to women's inability to repay the loans. Another study, conducted in Arampur, Bangladesh, found that microcredit programs within the agrarian community do not effectively help borrowers repay their loans because the terms of the loans are not compatible with farm work. It found that MFIs force borrowers to repay before the harvesting season starts and, in some cases, to endure the struggles of sharecropping work that is funded by the loan.
Although there is debate on how effective microcredit is in alleviating poverty in general, there is an argument that it enables women to participate in society and fulfill their capabilities. For example, a study conducted in Malaysia showed that its version of microcredit, AIM, had a positive effect on Muslim women's empowerment in terms of allowing them more control over family planning and over decisions made in the home.
In contrast, a study of 205 different MFIs concluded that gender discrimination persists within microfinance institutions themselves and within microcredit, which in turn reinforces existing discrimination within communities. In Bangladesh, another outcome observed for some Grameen recipients was domestic abuse, as husbands felt threatened by women bringing in more income. A study in Uganda likewise noted that men felt threatened by increased female financial dominance, increasing women's vulnerability at home.
Through the “constructivist feminist standpoint,” women can understand that the limitations they face are not inherent but are in fact “constructed” by traditional gender roles, which they can challenge by owning their own small businesses. Adopting this standpoint, one study examined the involvement and impact of the Foundation for International Community Assistance (FINCA) in Peru, where women become aware of the “machismo” patriarchal culture in which they live through their experiences of building small enterprises. In Rajasthan, India, another study found mixed results for women participating in a microlending program: though many women were not able to pay back their loans, many were still eager to take on debt because their microfinance participation created a platform for addressing other inequities within the community.
Another example is the Women's Development Business (WDB) in South Africa, a Grameen Bank microfinance replicator. According to WDB, the goal is to ensure “[…] that rural women are given the tools to free themselves from the chains of poverty […]” through allocation of financial resources directly to women including enterprise development programs. The idea is to use microfinance as a market-oriented tool to ensure access to financial services for disadvantaged and low-income people and therefore fostering economic development through financial inclusion.
In Women Entrepreneurship Promotion in Developing Countries: What explains the gender gap in entrepreneurship and how to close it?, Vossenberg (2013) describes how, although women's entrepreneurship has increased, the gender gap persists. The author states, “The gender gap is commonly defined as the difference between men and women in terms of numbers engaged in entrepreneurial activity, motives to start or run a business, industry choice and business performance and growth” (Vossenberg, 2). The article notes the low rate of women entrepreneurs in Eastern Europe, whereas in Africa women make up nearly fifty percent of entrepreneurs.
As a reaction, a current topic in the feminist literature on economic development is the ‘gendering’ of microfinance, as women have increasingly become the target borrowers for rural microcredit lending. This, in turn, creates the assumption of a “rational economic woman”, which can exacerbate existing social hierarchies.
The critique, therefore, is that the assumption of economic development through microfinance does not take into account all possible outcomes, especially those affecting women.
The impact of programs of the Bretton Woods Institutions and other similar organizations on gender are being monitored by Gender Action, a watchdog group founded in 2002 by Elaine Zuckerman who is a former World Bank economist.
Gender, financial crises, and neoliberal economic policy
The Great Recession and the politics of austerity that followed have opened up a wide range of gender and feminist debates on neoliberalism and the impact of the crisis on women. One view is that the crisis has affected women disproportionately and that there is a need for alternative economic structures in which investment in social reproduction is given more weight. The International Labour Organization (ILO) assessed the impact of the Great Recession on workers and concluded that while the crisis initially affected industries dominated by male workers (such as finance, construction, and manufacturing), it then spread to sectors in which female workers are predominantly active, such as the service sector and wholesale-retail trade.
There are different views among feminists on whether neoliberal economic policies have positive or negative impacts on women. In the post-war era, feminist scholars such as Elizabeth Wilson criticized state capitalism and the welfare state as tools to oppress women. On this view, neoliberal economic policies featuring privatization and deregulation, and hence a reduced role for the state and more individual freedom, were argued to improve conditions for women. This anti-welfare-state thinking arguably led to feminist support for neoliberal ideas of deregulation at the macroeconomic policy level and a reduced role for the state.
Some scholars in the field therefore argue that feminism, especially during its second wave, contributed key ideas to neoliberalism that, according to these authors, create new forms of inequality and exploitation.
As a reaction to the phenomenon that some forms of feminism are increasingly interwoven with capitalism, many suggestions on how to name these movements have emerged in the feminist literature. Examples are ‘free market feminism’ or even ‘faux-feminism’.
Smart economics
Theoretical approaches
Advocated chiefly by the World Bank, smart economics is an approach that defines gender equality as an integral part of economic development and aims to spur development by investing more efficiently in women and girls. It stresses that the gap between men and women in human capital, economic opportunities, and voice/agency is a chief obstacle to more efficient development. As an approach, it is a direct descendant of the efficiency approach taken by WID, which “rationalizes ‘investing’ in women and girls for more effective development outcomes.” As noted in the section on WID, the efficiency approach to women in development was chiefly articulated by Caroline Moser in the late 1980s. Continuing the WID stream, smart economics takes the individual woman as its key unit of analysis and focuses particularly on measures that help narrow the gender gap. It identifies women as a relatively underinvested source of development and defines gender equality as a higher-return investment opportunity: “Gender equality itself is here depicted as smart economics, in that it enables women to contribute their utmost skills and energies to the project of world economic development.” In these terms, smart economics champions a neoliberal perspective in seeing business as a vital vehicle for change, and it takes a stance of liberal feminism.
The thinking behind smart economics dates back at least to the lost decade of the Structural Adjustment Policies (SAPs) in the 1980s. In 1995, the World Bank issued its flagship publication on gender matters, Enhancing Women's Participation in Economic Development (World Bank 1995). This report marked a critical foundation for the birth of smart economics; in a chapter entitled ‘The Pay-offs to Investing in Women,’ the Bank proclaimed that investing in women “speeds economic development by raising productivity and promoting the more efficient use of resources; it produces significant social returns, improving child survival and reducing fertility, and it has considerable intergenerational pay-offs.” The Bank also emphasized the associated social benefits generated by investing in women. For example, it drew on research by Whitehead showing that greater female control of household income is associated with better outcomes for children's welfare, and by Jeffery and Jeffery, who analyzed the positive correlation between female education and lower fertility rates. In the 2000s, the approach of smart economics was further crystallized through various frameworks and initiatives. A first step was the World Bank's Gender Action Plan (GAP) 2007–2010, followed by the “Three Year Road Map for Gender Mainstreaming 2010–13.” The 2010–13 framework responded to criticisms of its precursor and incorporated some shifts in thematic priorities. The decisive turning point came in 2012 with the publication of the “World Development Report 2012: Gender Equality and Development.” The Bank's first comprehensive focus on gender issues was welcomed by various scholars and practitioners as an indicator of its seriousness. For example, Shahra Razavi appraised the report as ‘a welcome opportunity for widening the intellectual space’.
Other international organizations, particularly UN agencies, have endorsed the approach of smart economics. Examining the relationship between child well-being and gender equality, for example, UNICEF referred to the “Double Dividend of Gender Equality.” The approach's explicit link to the wider framework of the Millennium Development Goals (where Goal 3 is Promoting Gender Equality and Women's Empowerment) claimed a legitimacy beyond economic efficiency. In 2007, the Bank proclaimed that “The business case for investing in MDG 3 is strong; it is nothing more than smart economics.” In addition, “Development organisations and governments have been joined in this focus on the ‘business case’ for gender equality and the empowerment of women, by businesses and enterprises which are interested in contributing to social good.” A good example is the “Girl Effect” initiative of the Nike Foundation. The claim of an economic imperative with a broader socio-economic impact also met a strategic need of NGOs and community organizations that seek justification for their program funding; thus some NGOs, for example Plan International, seized on this trend to further their programs. The then-president of the World Bank, Robert B. Zoellick, was quoted by Plan International as stating, “Investing in adolescent girls is precisely the catalyst poor countries need to break intergenerational poverty and to create a better distribution of income. Investing in them is not only fair, it is a smart economic move.” The Great Recession and the austerity measures taken by major donor countries further supported this approach, since international financial institutions and international NGOs faced greater pressure from donors and the global public to design and implement maximally cost-effective programs.
Criticisms
From the mid-2000s, the approach of smart economics and its chief proponent, the World Bank, met a wide range of criticisms and denouncements. These discontents can be broadly categorized into five major claims: subordination of intrinsic value; ignorance of the need for systemic transformation; feminisation of responsibility; overemphasized efficiency; and opportunistic pragmatism. This is not an exhaustive list of criticisms, but it highlights the different emphases among existing criticisms.
The World Bank's gender policy aims to eliminate poverty and enhance economic growth by addressing the gender disparities and inequalities that hinder development. One critique is that this policy is ‘gender-blind’ and does not properly address gender inequity; rather, it uses gender equality as a means to an end without analyzing the root causes of economic disparities and gender inequity.
Smart economics’ subordination of women under the justification of development invited fierce criticism. Chant expresses her grave concern that “Smart economics is concerned with building women’s capacities in the interests of development rather than promoting women’s rights for their own sake.” She disagrees that investment in women should be promoted for its instrumental utility: “it is imperative to ask whether the goal of female investment is primarily to promote gender equality and women’s ‘empowerment’, or to facilitate development ‘on the cheap’, and/or to promote further economic liberalization.” Although smart economics holds that gender equality has intrinsic value (realizing gender equality is an end in itself) and instrumental value (realizing gender equality is a means to more efficient development), many point out that the Bank pays almost exclusive attention to the latter in defining its framework and strategy. Zuckerman echoed this point in criticizing a “business case [which] ignores the moral imperative of empowering women to achieve women’s human rights and full equal rights with men.” In short, Chant doubts whether it is “possible to promote rights through utilitarianism.”
A wide range of scholars and practitioners have criticized smart economics for endorsing the status quo of gender inequality and keeping silent about the demand for institutional reform. Its approach “[d]oes not involve public action to transform the laws, policies, and practices which constrain personal and group agency.” Naila Kabeer also posits that “attention to collective action to enable women to challenge structural discrimination has been downplayed.” Simply put, smart economics assumes that women are entirely capable of contributing ever more to economic growth despite the ongoing structural barriers to realizing their capabilities.
Sylvia Chant (2008) discredited the approach as a ‘feminisation of responsibility and/or obligation’, whereby smart economics intends to spur growth simply by demanding more from women in terms of time, labour, energy, and other resources. She argues that “Smart economics seeks to use women and girls to fix the world,” and clarifies that “It is less welcome to women who are already contributing vast amounts to both production and unpaid reproduction to be romanticised and depicted as the salvation of the world.”
Chant is also concerned that “An efficiency-driven focus on young women and girls as smart economics leaves this critical part of the global population out.” Smart economics assumes that all women are at their productive stage, neglecting the lives of elderly women and women with disabilities. She thus calls for recognition of the “equal rights of all women and girls, regardless of age, or the extent or nature of their economic contribution.” Moreover, the approach says nothing about cooperation and collaboration between males and females, leaving men and boys entirely out of the picture.
Chant emphasizes that “The smart economics approach represents, at best, pragmatism in a time of economic restructuring and austerity.” Smart economics can enjoy wider acceptance and legitimacy because it arrives at a time when efficiency is most in demand, not because its utilitarianism has universal appeal. She further warns that feminists should be very cautious about "supporting, and working in coalition with, individuals and institutions who approach gender equality through the lens of smart economics. This may have attractions in strategic terms, enabling us to access resources for work focusing on supporting the individual agency of women and girls, but risks aggravating many of the complex problems that gender and development seeks to transform."
Alternative approaches
Other approaches with different paradigms have also played a historically important role in advancing theories and practices in gender and development.
Marxism and Neo-Marxism
The structuralist debate was first triggered by Marxist and socialist feminists. Marxism, particularly through the alternative models of state socialist development practiced in China and Cuba, challenged the dominant liberal approach over time. Neo-Marxist proponents focused on the role of the post-colonial state in development in general and on localized class struggles. Marxist feminists advanced these criticisms of liberal approaches and made a significant contribution to the contemporary debate.
Dependency theory
Dependency theorists argued that liberal development models, including the attempt to incorporate women into existing global capitalism, were in fact nothing more than the "development of underdevelopment." This view led them to propose that delinking from the structural oppression of global capitalism is the only way to achieve balanced human development.
In the 1980s, there also emerged "a sustained questioning by post-structuralist critics of the development paradigm as a narrative of progress and as an achievable enterprise."
Basic needs approach, capability approach, and ecofeminism
Within the liberal paradigm of women and development, various criticisms have emerged. The Basic Needs (BN) approach began to question the focus on growth and income as indicators of development. It was heavily influenced by Sen and Nussbaum's capability approach, which was more gender-sensitive than BN and focused on expanding human freedom. The BN approach notably proposed a participatory approach to development and challenged the dominant discourse of trickle-down effects. These freedom-centered approaches led to the development of other important concepts such as human development and human security. From the perspective of sustainable development, ecofeminists articulated the direct link between colonialism and environmental degradation, which resulted in the degradation of women's lives themselves.
References
Sources
Tietcheu, Bertrand (2006). Being Women and Men in Africa Today: Approaching Gender Roles in Changing African Societies.
Bradshaw, Sarah (May 2013). "Women's role in economic development: Overcoming the constraints". UNSDSN. Retrieved 22 November 2013.
Development Assistance Committee (DAC), 1998, p. 7
Eisenstein, Hester (2009). Feminism Seduced: How Global Elites Use Women's Labor and Ideas to Exploit the World. Boulder: Paradigm Publishers. Retrieved 25 November 2013.
Wilson, Elizabeth. Women and the Welfare State. Routledge.
Elson, Diane; Pearson, Ruth (27 September 2013). "Keynote of Diane Elson and Ruth Pearson at the Gender, Neoliberalism and Financial Crisis Conference at the University of York". Soundcloud. Retrieved 27 November 2013.
Frank, Andre Gunder (1969). Capitalism and Underdevelopment in Latin America: Historical Studies of Chile and Brazil (Rev. and enl. ed.). New York: Monthly Review Press.
Fraser, Nancy (2012). "Feminism, Capitalism, and the Cunning of History". Working paper. Fondation Maison des sciences de l'homme. p. 14. Retrieved 2 November 2013.
Harcourt, W. (2016). The Palgrave Handbook of Gender and Development: Critical Engagements in Feminist Theory and Practice.
ILO (1976). Employment, Growth, and Basic Needs: A One-World Problem: Report of the Director-General of the International Labour Office. Geneva: International Labour Office.
Tinker, Irene (1990). Persistent Inequalities: Women and World Development. Oxford University Press. p. 30.
Jackson, Cecile; Pearson, Ruth, eds. (2002). Feminist Visions of Development: Gender Analysis and Policy (1st ed.). London: Routledge. Includes Jeffery, P., & Jeffery, R. (1998). "Silver Bullet or Passing Fancy? Girls' Schooling and Population Policy".
Kabeer, Naila (2003). Gender mainstreaming in poverty eradication and the Millennium development goals a handbook for policy-makers and other stakeholders. London: Commonwealth secretariat. .
McRobbie, Angela (2009). The Aftermath of Feminism: Gender, Culture and Social Change. London: Sage. . Retrieved 25 November 2013.
Merchant, Carolyn (1980). The Death of Nature: Women, Ecology, and the Scientific Revolution: A Feminist Reappraisal of the Scientific Revolution (1st ed.). San Francisco: Harper & Row.
Mies, Maria; Bennholdt-Thomsen, Veronika; Werlhof, Claudia von (1988). Women: The Last Colony (1st ed.). London: Zed Books.
Moser, Caroline (1993). Gender Planning and Development: Theory, Practice and Training. New York: Routledge. p. 3.
Moser, Caroline O.N. (1995). Gender Planning and Development: Theory, Practice and Training (Reprint ed.). London: Routledge.
Visvanathan, Nalini, et al., eds. The Women, Gender and Development Reader (2nd ed.). London: Zed Books. p. 29.
New York Times (10 November 2010). "Nike Harnesses 'Girl Effect' Again". Retrieved 1 December 2013.
Amin, Samir (1976). Unequal Development: An Essay on the Social Formations of Peripheral Capitalism. Translated by Brian Pearce. Hassocks: Harvester Press.
Plan International (2009). Because I Am a Girl: The State of the World's Girls 2009. Girls in the Global Economy: Adding It All Up. Plan International. pp. 11, 28.
Rankin, Katharine N. (2001). "Governing Development: Neoliberalism, Microcredit, and Rational Economic Woman". Economy and Society (Fondation Maison des sciences de l'homme) 30: 20. Retrieved 2 November 2013.
Rathgeber, Eva M. (1990). "WID, WAD, GAD: Trends in Research and Practice". The Journal of Developing Areas 24(4): 489–502.
Razavi, S. ‘World Development Report 2012: Gender Equality and Development: An Opportunity Both Welcome and Missed (An Extended Commentary)’. p. 2.
Razavi, Shahrashoub; Miller, Carol (1995). "From WID to GAD: Conceptual shifts in the Women and Development discourse". United Nations Research Institute Occasional Paper series (United Nations Research Institute for Social Development) 1: 2. Retrieved 22 November 2013.
Reeves, Hazel; Baden, Sally (2000). Gender and Development: Concepts and Definitions. Brighton: Institute of Development Studies. p. 8.
Connell, Robert (1987). Gender and Power: Society, the Person, and Sexual Politics. Stanford University Press.
Sen, Amartya (2001). Development as Freedom (1st Oxford University Press paperback ed.). Oxford: Oxford University Press.
Singh, Shweta (2007). "Deconstructing Gender and Development for Identities of Women". International Journal of Social Welfare 16: 100–109.
True, J (2012). Feminist Strategies in Global Governance: Gender Mainstreaming. New York: Routledge. p. 37.
UNICEF (2006). The state of the world's children 2007: women and children: the double dividend of gender equality. United Nations Children's Fund.
UNU/WIDER (1995). The Quality of Life: A Study Prepared for the World Institute for Development Economics Research (WIDER) of the United Nations University (Repr. ed.). Oxford: Clarendon Press.
"World Bank Gender Overview". World Bank. World Bank. 3 May 2013. Retrieved 5 November 2013.
"WDB about page". Women's Development Business. WDB. 2013. Retrieved 28 November 2013.
World Bank (1995). Enhancing Women's Participation in Economic Development. Washington, DC: World Bank. p. 22.
World Bank. "Applying Gender Action Plan Lessons: A Three-Year Road Map for Gender Mainstreaming (2011- 2013).". World Bank Report. World Bank. Retrieved 1 December 2013.
World Bank. "World Development Report 2012: Gender Equality and Development.".World Development Report. World Bank. Retrieved 1 December 2013.
World Bank. Global Monitoring Report 2007: Millennium Development Goals: Confronting the Challenges of Gender Equality and Fragile States (Vol. 4). World Bank-free PDF. p. 145.
Young, Kate; Wolkowitz, Carol; McCullagh, Roslyn, eds. (1984). Of Marriage and the Market: Women's Subordination Internationally and Its Lessons (2nd ed.). London: Routledge & Kegan Paul. Includes Whitehead, A. (1984). "'I'm hungry, mum': The Politics of Domestic Budgeting".
Further reading
Benería, L., Berik, G., & Floro, M. (2003). Gender, development, and globalization: Economics as if all people mattered. New York: Routledge.
Counts, Alex (2008). Small Loans, Big Dreams: How Nobel Prize Winner Muhammad Yunus and Microfinance Are Changing the World. John Wiley & Sons, Incorporated.
Visvanathan, N., Duggan, L., Nisonoff, L., & Wiegersma, N. (Eds.). (2011). The women, gender, and development reader. 2nd edition. New Africa Books.
Ruble, D. N., Martin, C. L., & Berenbaum, S. A. (1998). Gender development. Handbook of child psychology.
Golombok, S., & Fivush, R. (1994). Gender development. Cambridge University Press.
Usability

Usability can be described as the capacity of a system to provide a condition for its users to perform tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer.
Usability includes methods of measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design.
Introduction
The primary notion of usability is that an object designed with a generalized users' psychology and physiology in mind is, for example:
More efficient to use—takes less time to accomplish a particular task
Easier to learn—operation can be learned by observing the object
More satisfying to use
Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented rather than technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the user's own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team.
The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability). According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most." Otherwise, most casual users simply leave the site and browse or shop elsewhere.
Usability can also include the concept of prototypicality: how much a particular thing conforms to the expected shared norm. In website design, for instance, users prefer sites that conform to recognised design norms.
Definition
ISO defines usability as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of:
Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?
Usability is often associated with the functionalities of the product (cf. the ISO definition above), in addition to being solely a characteristic of the user interface (cf. the framework of system acceptability, also above, which separates usefulness into usability and utility). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the interface". Each component may be measured subjectively against criteria, e.g., Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of the ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with the ease of use of a system.
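As an illustration of how such percentage metrics might be derived in practice, the following Python sketch computes effectiveness, efficiency, and satisfaction scores from usability-test sessions. It is a minimal sketch rather than a standard formula: the session records, the 1-5 satisfaction scale, and the 40-second expert benchmark are all assumptions invented for the example.

```python
# A minimal sketch of turning test-session data into percentage metrics for
# the three components named in the ISO definition. The data and the exact
# scoring rules are hypothetical; real studies define their own criteria.

from dataclasses import dataclass

@dataclass
class Session:
    completed: bool      # did the user finish the task?
    seconds: float       # time on task
    satisfaction: int    # post-task rating on an assumed 1-5 scale

sessions = [
    Session(True, 42.0, 4),
    Session(True, 55.5, 5),
    Session(False, 90.0, 2),
    Session(True, 48.3, 4),
]

# Effectiveness: share of users who achieved the task goal.
effectiveness = 100.0 * sum(s.completed for s in sessions) / len(sessions)

# Efficiency: an assumed expert benchmark time divided by the mean
# observed time, capped at 100%.
BENCHMARK_SECONDS = 40.0
mean_time = sum(s.seconds for s in sessions) / len(sessions)
efficiency = min(100.0, 100.0 * BENCHMARK_SECONDS / mean_time)

# Satisfaction: mean rating rescaled from the 1-5 range to 0-100%.
mean_rating = sum(s.satisfaction for s in sessions) / len(sessions)
satisfaction = 100.0 * (mean_rating - 1) / 4

print(f"effectiveness {effectiveness:.0f}%  "
      f"efficiency {efficiency:.0f}%  satisfaction {satisfaction:.0f}%")
```

A real study would fix these criteria in a quantitative usability specification before testing begins, a point taken up again under "Empirical measurement" below.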
Intuitive interaction or intuitive use
The term intuitive is often listed as a desirable trait in usable interfaces, sometimes used as a synonym for learnable. In the past, Jef Raskin discouraged using this term in user interface design, claiming that easy-to-use interfaces are often easy because of the user's exposure to previous similar systems, and thus the term 'familiar' should be preferred. As an example: two vertical lines "||" on media player buttons do not intuitively mean "pause"; they do so by convention. This association between intuitive use and familiarity has since been empirically demonstrated in multiple studies by a range of researchers across the world, and intuitive interaction is accepted in the research community as being use of an interface based on past experience with similar interfaces or something else, often not fully consciously, and sometimes involving a feeling of "magic", since the source of the knowledge itself may not be consciously available to the user. Researchers have also investigated intuitive interaction for older people, people living with dementia, and children.
Some have argued that aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach, and to stick with boring designs. However, applying familiar features in a new interface has been shown not to result in boring design if designers use creative approaches rather than simple copying. The throwaway remark that "the only intuitive interface is the nipple; everything else is learned" is still occasionally repeated, though breastfeeding mothers and lactation consultants report that it is inaccurate: nursing in fact requires learning on both sides. In 1992, Bruce Tognazzini even denied the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences." Instead, he advocated the term "intuitable," i.e., "that users could intuit the workings of an application by seeing it and using it". However, the term intuitive interaction has become well accepted in the research community over the past 20 or so years and, although not perfect, it should probably be accepted and used.
ISO standards
ISO/TR 16982:2002 standard
ISO/TR 16982:2002 ("Ergonomics of human-system interaction—Usability methods supporting human-centered design") is an International Standards Organization (ISO) standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described.
ISO 9241 standard
ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers. Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human-System Interaction. As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g. tactile and haptic interaction. The first part to be renumbered was part 10, which became part 110 in 2006.
IEC 62366
IEC 62366-1:2015 + COR1:2016 & IEC/TR 62366-2 provide guidance on usability engineering specific to a medical device.
Designing for usability
Any system or device designed for use by people should be easy to use, easy to learn, easy to remember how to use, and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow these three design principles:
Early focus on end users and the tasks they need the system/device to do
Empirical measurement using quantitative or qualitative measures
Iterative design, in which the designers work in a series of stages, improving the design each time
Early focus on users and tasks
The design team should be user-driven and in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods, may contribute to understanding potential users and their perceptions of how well the product or process works. Usability considerations, such as who the users are and their experience with similar systems, must be examined. As part of understanding users, this knowledge must "...be played against the tasks that the users will be expected to perform." This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using the system. Designers must understand how cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in the designers' minds is to use personas, which are made-up representative users (see below for further discussion of personas). Another, more expensive but more insightful, method is to have a panel of potential users work closely with the design team from the early stages.
Empirical measurement
Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability. (See Evaluation Methods). It is important in this stage to use quantitative usability specifications such as time and errors to complete tasks and number of users to test, as well as examine performance and attitudes of the users testing the system. Finally, "reviewing or demonstrating" a system before the user tests it can result in misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods.
Iterative design
Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations, of a design are implemented. The key requirements for iterative design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution; rather, there are empirical methods that can be used during system development or after the system is delivered, usually a more inopportune time. Ultimately, iterative design works towards meeting goals such as making the system user-friendly, easy to use, easy to operate, and simple.
Evaluation methods
There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below.
Cognitive modeling methods
Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or predict problem errors and pitfalls during the design process. A few examples of cognitive models include:
Parallel design
With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas, and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept.
GOMS
GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user must accomplish. An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplish a goal. Selection rules specify which method satisfies a given goal, based on context.
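To make this vocabulary concrete, the sketch below represents a tiny GOMS-style analysis as plain data structures. The task, method names, and operator durations are illustrative assumptions, not figures from a published GOMS study.

```python
# A minimal sketch of a GOMS-style analysis as plain data structures.
# Operator durations are assumed averages (seconds), for illustration only.
OPERATORS = {
    "point": 1.1,      # move the cursor to a target
    "click": 0.2,      # press a mouse button
    "type_key": 0.28,  # press one keyboard key
    "think": 1.35,     # mentally prepare for the next step
}

# Methods: sequences of operators that accomplish a goal (here, deleting a file).
METHODS = {
    "delete-file-via-menu": ["think", "point", "click", "point", "click"],
    "delete-file-via-key": ["think", "point", "click", "type_key"],
}

def method_time(method_name: str) -> float:
    """Estimated execution time for a method: the sum of its operator times."""
    return sum(OPERATORS[op] for op in METHODS[method_name])

def select_method(keyboard_available: bool) -> str:
    """A selection rule: prefer the keyboard shortcut when it is available."""
    return "delete-file-via-key" if keyboard_available else "delete-file-via-menu"

if __name__ == "__main__":
    for name in METHODS:
        print(f"{name}: {method_time(name):.2f} s")
    print("selected:", select_method(keyboard_available=True))
```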
Human processor model
Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information; the model human processor describes this in terms of separate perceptual, cognitive, and motor processors.
Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors. Variables that affect these include subject age, aptitude, ability, and the surrounding environment. For a younger adult, commonly cited estimates are a cycle time of roughly 100 ms for the perceptual processor and about 70 ms each for the cognitive and motor processors.
Long-term memory is believed to have an infinite capacity and decay time.
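As a minimal illustration, the sketch below combines the commonly cited processor cycle times into a simple reaction-time estimate; the exact figures vary by individual and by source.

```python
# A small sketch using commonly cited Model Human Processor cycle times:
# roughly 100 ms for the perceptual processor and 70 ms each for the
# cognitive and motor processors. Individual values vary widely with
# age, aptitude, and environment.
PERCEPTUAL_MS = 100  # perceive a stimulus
COGNITIVE_MS = 70    # decide on a response
MOTOR_MS = 70        # execute the response

def simple_reaction_time_ms(cognitive_cycles: int = 1) -> int:
    """Estimate reaction time as one perceptual cycle, one or more
    cognitive cycles, and one motor cycle."""
    return PERCEPTUAL_MS + cognitive_cycles * COGNITIVE_MS + MOTOR_MS

print(simple_reaction_time_ms())   # 240 ms for a simple stimulus-response task
print(simple_reaction_time_ms(2))  # 310 ms when an extra decision step is needed
```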
Keystroke level modeling
Keystroke level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity.
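A keystroke-level estimate is essentially a sum over a string of operators. The sketch below uses the commonly published average operator durations; the example task is hypothetical, and real analyses tune the values to the user population and device.

```python
# A minimal keystroke-level-model sketch. Operator durations are the
# commonly published KLM averages; treat them as rough defaults.
KLM_SECONDS = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with a mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a task written as a string of operator codes,
    e.g. 'MPBB' = think, point at an icon, then double-click it."""
    return sum(KLM_SECONDS[op] for op in sequence)

# Example: rename a file by double-clicking its name, pausing to think,
# then typing 8 characters plus Enter (an illustrative task, not a benchmark).
print(round(klm_estimate("MPBB" + "M" + "K" * 9), 2))
```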
Inspection methods
These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data as tasks can be timed and recorded.
Card sorts
Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a website in a way that makes sense to them: they review items from the site and then group these items into categories. Card sorting reveals how users think about the content and how they would organize the information on the website. It helps to build the site's structure, decide what to put on the home page, and label the home page categories, and it helps to ensure that information is organized on the site in a way that is logical to users.
Tree tests
Tree testing is a way to evaluate the effectiveness of a website's top-down organization. Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design.
Ethnography
Ethnographic analysis is derived from anthropology. Field observations are taken at the site of a possible user, tracking the artifacts of work such as Post-It notes, items on the desktop, shortcuts, and items in trash bins. These observations also capture the sequence of work and interruptions that make up the user's typical day.
Heuristic evaluation
Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface against recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods because it is quick, cheap, and easy: it relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design; they are called "heuristics" because they are rules of thumb rather than specific usability guidelines.
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Thus, by determining which guidelines are violated, the usability of a device can be determined.
Usability inspection
Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users.
Pluralistic inspection
Pluralistic inspections are meetings where users, developers, and human factors specialists meet to discuss and evaluate a task scenario step by step. The more people who inspect the scenario for problems, the higher the probability of finding them. In addition, the more interaction in the team, the faster the usability issues are resolved.
Consistency inspection
In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether a design does things in the same way as their own designs.
Activity Analysis
Activity analysis is a usability method used in the preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected are qualitative and useful for defining the problem. It should be used when the aim is to frame what is needed, or "What do we want to know?"
Inquiry methods
The following usability evaluation methods involve collecting qualitative data from users. Although the data collected is subjective, it provides valuable information on what the user wants.
Task analysis
Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, a third analysis is often used: understanding users' environments (physical, social, cultural, and technological environments).
Focus groups
A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. In the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered are not usually quantitative, but can help get an idea of a target group's opinion.
Questionnaires/surveys
Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method, and often does not appear to be a survey at all, but simply a warranty card.
Prototyping methods
It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, making several small models of each component. Prototyping is both a process and an output: it is a way of generating and reflecting on tangible ideas by allowing failure to occur early, and it helps people to see what could be, communicate a shared vision, and give shape to the future. Usability prototypes may vary from paper models, index cards, and hand-drawn models to storyboards. Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design change; however, they are sometimes not an adequate representation of the whole system, are often not durable, and their testing results may not parallel those of the actual system.
The Tool Kit Approach
The tool kit approach provides a wide library of methods built on traditional programming languages and is primarily aimed at computer programmers. The code created for testing in the tool kit approach can be used in the final product. However, to get the highest benefit from the tool, the user must be an expert programmer.
The Parts Kit Approach
The two elements of this approach include a parts library and a method used for identifying the connection between the parts. This approach can be used by almost anyone and it is a great asset for designers with repetitive tasks.
Animation Language Metaphor
This approach is a combination of the tool kit approach and the parts kit approach. Both the dialogue designers and the programmers are able to interact with this prototyping tool.
Rapid prototyping
Rapid prototyping is a method used in early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping.
Testing methods
These usability evaluation methods involve testing of subjects for the most quantitative data. Usually recorded on video, they provide task completion times and allow for observation of attitude. Regardless of how carefully a system is designed, all theories must be tested using usability tests. Usability tests involve typical users using the system (or product) in a realistic environment (see simulation). Observation of the user's behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system.
Metrics
While conducting usability tests, designers must decide what they are going to measure: the usability metrics. These metrics are often variable and change with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are also typically assessed with smaller groups of subjects. Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus more on the individual user.
As designs become more complex, the testing must become more formalized. Testing equipment becomes more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage of users who complete the task, how long it takes to complete the tasks, ratios of success to failure, time spent on errors, the number of errors, satisfaction rating scales, the number of times the user seems frustrated, and so on. Additional observations of the users give designers insight into navigation difficulties, controls, conceptual models, and the like. The ultimate goal of analyzing these metrics is to find or create a prototype design that users like and can use to successfully perform given tasks. After conducting usability tests, it is important for the designer to record what was observed, along with why such behavior occurred, and to modify the model according to the results. It is often difficult to distinguish the source of design errors from what the user did wrong; however, effective usability tests will not generate a solution to the problems, but rather provide modified design guidelines for continued testing.
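As an illustration of how such metrics might be computed from raw observations, the following sketch assumes a made-up record format of (task completed, time in seconds, error count) per participant; the sample data are invented.

```python
# A sketch of computing common quantitative usability metrics from
# usability-test observations. Record format and data are illustrative.
from statistics import mean

# One record per participant for a single task: (completed?, seconds, errors)
observations = [
    (True, 42.0, 1),
    (True, 55.5, 0),
    (False, 90.0, 4),
    (True, 38.2, 2),
]

completion_rate = sum(1 for done, _, _ in observations if done) / len(observations)
mean_time_success = mean(t for done, t, _ in observations if done)
total_errors = sum(e for _, _, e in observations)

print(f"task completion rate: {completion_rate:.0%}")      # 75%
print(f"mean time on success: {mean_time_success:.1f} s")  # 45.2 s
print(f"errors across participants: {total_errors}")       # 7
```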
Remote usability testing
Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis without the need for dedicated facilities. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioral type. The tests are carried out in the user's own environment (rather than in labs), helping to further simulate real-life testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types, quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys; these are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations. Qualitative studies usually allow for observing the respondent's screen and verbal think-aloud commentary (Screen Recording Video, SRV), and, for a richer level of insight, also include the webcam view of the respondent (Video-in-Video, ViV, sometimes referred to as Picture-in-Picture, PiP).
Remote usability testing for mobile devices
The growth in mobile platforms and associated services (e.g., mobile gaming experienced 20x growth in 2010–2012) has generated a need for unmoderated remote usability testing on mobile devices, both for websites and especially for app interactions. One methodology consists of shipping cameras and special camera-holding fixtures to dedicated testers, and having them record the screens of the mobile smartphone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the respondent's computer desktop, where the respondent can then be recorded through their webcam, producing a combined video-in-video view of the participant and the screen interactions while incorporating the verbal think-aloud commentary.
Thinking aloud
The think aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes (i.e., expressing their opinions, thoughts, anticipations, and actions) as they perform a task or set of tasks. As a widespread method of usability testing, think aloud gives researchers the ability to discover what users really think during task performance and completion.
Often an instructor is present to prompt the user into being more vocal as they work. Similar to the subjects-in-tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire.
RITE method
Rapid Iterative Testing and Evaluation (RITE) is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide how the users' behaviors will be measured, construct a test script, and have participants engage in a verbal protocol (e.g., think aloud). It differs from these methods in that it advocates making changes to the user interface as soon as a problem is identified and a solution is clear, sometimes after observing as few as one participant. Once the data for a participant have been collected, the usability engineer and team decide whether to make any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users.
Subjects-in-tandem or co-discovery
Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud, and through these discussions observers learn where the problem areas of a design are. To encourage co-operative problem-solving between the two subjects, and the attendant discussions, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for testing of software, one subject may be put in charge of the mouse and the other of the keyboard).
Component-based usability testing
Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components. The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires.
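The following sketch illustrates one way a component-specific measure could be derived from an interaction log. The log format and the "revisit" measure are assumptions for illustration, not a standard defined by the approach.

```python
# A sketch of a component-specific measure derived from interaction logs,
# in the spirit of component-based usability testing. The log format
# (timestamp, component, event) is an illustrative assumption.
from collections import defaultdict

log = [
    (0.0, "search_box", "focus"),
    (4.2, "search_box", "submit"),
    (4.5, "results_list", "scroll"),
    (9.1, "results_list", "select"),
    (9.3, "search_box", "focus"),  # returning to search suggests a failed attempt
]

events_per_component = defaultdict(int)
for _, component, _ in log:
    events_per_component[component] += 1

# A crude per-component signal: repeated visits to the same component
# within one task can flag interaction problems worth a closer look.
revisits = sum(1 for _, c, e in log if c == "search_box" and e == "focus") - 1
print(dict(events_per_component), "search_box revisits:", revisits)
```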
Other methods
Cognitive walkthrough
Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users.
Benchmarking
Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of labs studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol and data analysis.
Meta-analysis
Meta-analysis is a statistical procedure for combining results across studies to integrate the findings. The term was coined in 1976 to describe quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide strong quantitative support.
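As a minimal sketch of the statistics involved, the following combines made-up effect sizes using fixed-effect inverse-variance weighting, one standard pooling method; a real meta-analysis would also assess heterogeneity and study quality.

```python
# A minimal fixed-effect meta-analysis sketch using inverse-variance
# weighting. The effect sizes and standard errors are invented examples.
studies = [
    # (effect size, standard error) reported by each usability study
    (0.40, 0.15),
    (0.25, 0.10),
    (0.55, 0.20),
]

weights = [1 / se**2 for _, se in studies]                      # precision of each study
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                           # SE of the pooled estimate

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```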
Persona
Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interaction design in 1998 in his book The Inmates Are Running the Asylum, but had used the concept since as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of design, so that designers have a tangible idea of who the users of their product will be. Personas are archetypes that represent actual groups of users and their needs, and can be a general description of a person, context, or usage scenario. This technique turns marketing data on the target user population into a few physical concepts of users to create empathy among the design team, with the final aim of tailoring a product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.
Benefits
The key benefits of usability are:
Higher revenues through increased sales
Increased user efficiency and user satisfaction
Reduced development costs
Reduced support costs
Corporate integration
An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas:
Increased productivity
Decreased training and support costs
Increased sales and revenues
Reduced development time and costs
Reduced maintenance costs
Increased customer satisfaction
Increased usability in the workplace fosters several responses from employees: "Workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity." To create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to):
Working posture
Design of workstation furniture
Screen displays
Input devices
Organization issues
Office environment
Software interface
By working to improve these factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates with overall improvement. For example, making software user interfaces easier to understand reduces the need for extensive training. An improved interface tends to lower the time needed to perform tasks, and so raises productivity levels for employees while reducing development time (and thus costs). The aforementioned factors are not mutually exclusive; rather, they work in conjunction to form the overall workplace environment. Since the 2010s, usability has been recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness, and aesthetic appearance. Various academic programs focus on usability. Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services.
There is some resistance to integrating usability work in organizations. Usability is seen as a vague concept; it is difficult to measure, and other areas are prioritized when IT projects run out of time or money.
Professional development
Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, systems design engineers, or with a degree in information architecture, information or library science, or human-computer interaction (HCI). More often, though, they are people trained in specific applied fields who have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines. For those seeking to extend their training, the User Experience Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UXPA also sponsors World Usability Day each November. Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer Human Interaction (SIGCHI), Design of Communication (SIGDOC), and Computer Graphics and Interactive Techniques (SIGGRAPH). The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX), which publishes a quarterly newsletter called Usability Interface.
See also
Accessibility
Chief experience officer (CXO)
Design for All (inclusion)
Experience design
Fitts's law
Form follows function
Gemba or customer visit
GOMS
Gotcha (programming)
GUI
Human factors
Information architecture
Interaction design
Interactive systems engineering
Internationalization
Learnability
List of human-computer interaction topics
List of system quality attributes
Machine-Readable Documents
Natural mapping (interface design)
Non-functional requirement
RITE method
System Usability Scale
Universal usability
Usability goals
Usability testing
Usability engineering
User experience
User experience design
Web usability
World Usability Day
References
Further reading
R. G. Bias and D. J. Mayhew (eds.) (2005), Cost-Justifying Usability: An Update for the Internet Age, Morgan Kaufmann
Donald A. Norman (2013), The Design of Everyday Things, Basic Books
Donald A. Norman (2004), Emotional Design: Why We Love (or Hate) Everyday Things, Basic Books
Jakob Nielsen (1994), Usability Engineering, Morgan Kaufmann
Jakob Nielsen (1994), Usability Inspection Methods, John Wiley & Sons
Ben Shneiderman (1980), Software Psychology
External links
Usability.gov
Environmental degradation
Environmental degradation is the deterioration of the environment through depletion of resources such as the quality of air, water, and soil; the destruction of ecosystems; habitat destruction; the extinction of wildlife; and pollution. It is defined as any change or disturbance to the environment perceived to be deleterious or undesirable. Environmental degradation amplifies the impact of environmental issues, leaving lasting effects on the environment.
Environmental degradation is one of the ten threats officially cautioned by the High-level Panel on Threats, Challenges and Change of the United Nations. The United Nations International Strategy for Disaster Reduction defines environmental degradation as "the reduction of the capacity of the environment to meet social and ecological objectives, and needs".
Environmental degradation takes many forms. When natural habitats are destroyed or natural resources are depleted, the environment is degraded. Direct environmental degradation, such as deforestation, is readily visible; degradation can also be caused by more indirect processes, such as the build-up of plastic pollution over time or of greenhouse gases that push the climate system past tipping points. Efforts to counteract this problem include environmental protection and environmental resources management. Mismanagement that leads to degradation can also lead to environmental conflict, where communities organize in opposition to the forces that mismanaged the environment.
Biodiversity loss
Scientists assert that human activity has pushed the Earth into a sixth mass extinction event. The loss of biodiversity has been attributed in particular to human overpopulation, continued human population growth, and overconsumption of natural resources by the world's wealthy. A 2020 report by the World Wildlife Fund found that human activity – specifically overconsumption, population growth, and intensive farming – has destroyed 68% of vertebrate wildlife since 1970. The Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' IPBES in 2019, posits that roughly one million species of plants and animals face extinction from anthropogenic causes, such as expanding human land use for industrial agriculture and livestock rearing, along with overfishing.
Since the establishment of agriculture over 11,000 years ago, humans have altered roughly 70% of the Earth's land surface, with the global biomass of vegetation reduced by half and terrestrial animal communities seeing a decline in biodiversity greater than 20% on average. A 2021 study found that just 3% of the planet's terrestrial surface is ecologically and faunally intact, meaning areas with healthy populations of native animal species and little to no human footprint; many of these intact ecosystems are in areas inhabited by indigenous peoples. Land degradation affects over 30% of the world's land area and 40% of land in developing countries, impacting 3.2 billion people globally.
The implications of these losses for human livelihoods and wellbeing have raised serious concerns. With regard to the agriculture sector for example, The State of the World's Biodiversity for Food and Agriculture, published by the Food and Agriculture Organization of the United Nations in 2019, states that "countries report that many species that contribute to vital ecosystem services, including pollinators, the natural enemies of pests, soil organisms and wild food species, are in decline as a consequence of the destruction and degradation of habitats, overexploitation, pollution and other threats" and that "key ecosystems that deliver numerous services essential to food and agriculture, including supply of freshwater, protection against hazards and provision of habitat for species such as fish and pollinators, are declining."
Impacts of environmental degradation on women's livelihoods
Regarding the way biodiversity loss and ecosystem degradation affect livelihoods, the Food and Agriculture Organization of the United Nations also finds that, in rural areas with degraded lands and ecosystems, girls and women bear heavier workloads.
Women's livelihoods, health, food and nutrition security, access to water and energy, and coping abilities are all disproportionately affected by environmental degradation. Environmental pressures and shocks, particularly in rural areas, force women to deal with the aftermath, greatly increasing their load of unpaid care work. Also, as limited natural resources grow even scarcer due to climate change, women and girls must also walk further to collect food, water or firewood, which heightens their risk of being subjected to gender-based violence.
This implies, for example, longer journeys to get primary necessities and greater exposure to the risks of human trafficking, rape, and sexual violence.
Water degradation
One major component of environmental degradation is the depletion of the resource of fresh water on Earth. Only about 2.5% of all water on Earth is fresh water, the rest being salt water. 69% of fresh water is frozen in the ice caps of Antarctica and Greenland, so only about 30% of that 2.5% is available for consumption. Fresh water is an exceptionally important resource, since life on Earth is ultimately dependent on it. Water transports nutrients, minerals, and chemicals within the biosphere to all forms of life, sustains both plants and animals, and moulds the surface of the Earth through the transportation and deposition of materials.
The current top three uses of fresh water account for 95% of its consumption: approximately 85% is used for irrigation of farmland, golf courses, and parks; 6% for domestic purposes such as indoor bathing and outdoor garden and lawn use; and 4% for industrial purposes such as processing, washing, and cooling in manufacturing centres. It is estimated that one in three people worldwide already face water shortages, almost one-fifth of the world's population live in areas of physical water scarcity, and almost one quarter live in a developing country that lacks the necessary infrastructure to use water from available rivers and aquifers. Water scarcity is a growing problem due to many foreseen issues, including population growth, increased urbanization, higher standards of living, and climate change.
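As a quick arithmetic check on the figures above, the following minimal sketch multiplies out the quoted percentages; the numbers are the rounded ones from the text.

```python
# Back-of-the-envelope check of the freshwater figures quoted above.
fresh_fraction = 0.025      # share of all water that is fresh
accessible_fraction = 0.30  # share of fresh water not locked in ice
print(f"accessible fresh water: {fresh_fraction * accessible_fraction:.2%} of all water")  # 0.75%

uses = {"irrigation": 0.85, "domestic": 0.06, "industrial": 0.04}
print(f"top three uses: {sum(uses.values()):.0%} of freshwater consumption")  # 95%
```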
Industrial and domestic sewage, pesticides, fertilizers, plankton blooms, silt, oils, chemical residues, radioactive material, and other pollutants are some of the most frequent water pollutants. These have a huge negative impact on water and can cause degradation at various levels.
Climate change and temperature
Climate change affects the Earth's water supply in a large number of ways. It is predicted that mean global temperature will rise in the coming years due to a number of forces affecting the climate, and that the amount of atmospheric carbon dioxide (CO2) will rise as well; both of these will influence water resources. Evaporation depends strongly on temperature and moisture availability, which can ultimately affect the amount of water available to replenish groundwater supplies.
Transpiration from plants can be affected by a rise in atmospheric CO2, which can decrease their use of water but can also raise it through possible increases in leaf area. A rise in temperature can shorten the winter snow season and increase the intensity of snowmelt, leading to earlier peak runoff and affecting soil moisture, flood and drought risks, and storage capacities depending on the area.
Warmer winter temperatures cause a decrease in snowpack, which can result in diminished water resources during summer. This is especially important at mid-latitudes and in mountain regions that depend on glacial runoff to replenish their river systems and groundwater supplies, making these areas increasingly vulnerable to water shortages over time. An increase in temperature will initially produce a rapid rise in meltwater from glaciers in the summer, followed by a retreat of the glaciers and a decrease in the melt, and consequently in the water supply, each year as the glaciers shrink.
Thermal expansion of water and increased melting of oceanic glaciers from rising temperatures give way to a rise in sea level, which can affect the freshwater supply of coastal areas as well. As river mouths and deltas with higher salinity get pushed further inland, the intrusion of saltwater increases salinity in reservoirs and aquifers. Sea-level rise may also be driven by depletion of groundwater, as climate change can affect the hydrologic cycle in a number of ways. Uneven distributions of increased temperature and precipitation around the globe result in water surpluses and deficits, but a global decrease in groundwater suggests a rise in sea level even after meltwater and thermal expansion are accounted for, which can provide a positive feedback to the problems sea-level rise causes for the freshwater supply.
A rise in air temperature results in a rise in water temperature, which is also very significant in water degradation, as water becomes more susceptible to bacterial growth. An increase in water temperature can also greatly affect ecosystems because of species' sensitivity to temperature, and by inducing changes in a body of water's self-purification system through decreased amounts of dissolved oxygen.
Climate change and precipitation
A rise in global temperatures is also predicted to correlate with an increase in global precipitation, but a decline in water quality is probable because of increased runoff, floods, soil erosion, and mass movement of land: while water will carry more nutrients, it will also carry more contaminants. While most attention about climate change is directed towards global warming and the greenhouse effect, some of the most severe effects of climate change are likely to come from changes in precipitation, evapotranspiration, runoff, and soil moisture. It is generally expected that, on average, global precipitation will increase, with some areas receiving increases and some decreases.
Climate models show that while some regions should expect an increase in precipitation, such as in the tropics and higher latitudes, other areas are expected to see a decrease, such as in the subtropics. This will ultimately cause a latitudinal variation in water distribution. The areas receiving more precipitation are also expected to receive this increase during their winter and actually become drier during their summer, creating even more of a variation of precipitation distribution. Naturally, the distribution of precipitation across the planet is very uneven, causing constant variations in water availability in respective locations.
Changes in precipitation affect the timing and magnitude of floods and droughts, shift runoff processes, and alter groundwater recharge rates. Vegetation patterns and growth rates will be directly affected by shifts in precipitation amount and distribution, which will in turn affect agriculture as well as natural ecosystems. Decreased precipitation will deprive areas of water, causing water tables to fall and wetlands, rivers, lakes, and reservoirs to empty. In addition, a possible increase in evaporation and evapotranspiration will result, depending on the accompanying rise in temperature. Groundwater reserves will be depleted, and the remaining water has a greater chance of being of poor quality from saline or contaminants on the land surface.
Climate change is resulting in a very high rate of land degradation, causing enhanced desertification and nutrient-deficient soils, and land degradation has been characterized as a major global threat. According to the Global Assessment of Land Degradation and Improvement (GLADA), a quarter of the land area around the globe can now be classed as degraded. Land degradation is estimated to affect the lives of 1.5 billion people, and 15 billion tons of fertile soil are lost every year due to anthropogenic activities and climate change.
Population growth
The human population on Earth is expanding rapidly, which, together with even more rapid economic growth, is the main cause of the degradation of the environment. Humanity's appetite for resources is disrupting the environment's natural equilibrium. Production industries vent smoke into the atmosphere and discharge chemicals that pollute water resources. The smoke includes detrimental gases such as carbon monoxide and sulphur dioxide, and pollutants accumulate in the atmosphere at high levels. Organic compounds such as chlorofluorocarbons (CFCs) have generated an opening in the ozone layer, which admits higher levels of ultraviolet radiation, putting the globe at risk.
The available fresh water being affected by climate must also be stretched across an ever-increasing global population. It is estimated that almost a quarter of the global population lives in an area that is using more than 20% of its renewable water supply; water use will rise with population, while the supply is further reduced by climate-driven decreases in streamflow and groundwater. Even though some areas may see an increase in freshwater supply from an uneven distribution of precipitation increase, an increased use of the water supply is expected.
An increased population means increased withdrawals from the water supply for domestic, agricultural, and industrial uses, the largest of these being agriculture, believed to be the major non-climate driver of environmental change and water deterioration. The next 50 years will likely be the last period of rapid agricultural expansion, but the larger and wealthier population over this time will demand more agriculture.
Population increase over the last two decades, at least in the United States, has also been accompanied by a shift from rural to urban areas, which concentrates the demand for water in certain areas and puts stress on the fresh water supply from industrial and human contaminants. Urbanization causes overcrowding and increasingly unsanitary living conditions, especially in developing countries, which in turn exposes an increasing number of people to disease. About 79% of the world's population live in developing countries, which lack access to sanitary water and sewer systems, giving rise to disease and deaths from contaminated water and to increased numbers of disease-carrying insects.
Agriculture
Agriculture is dependent on available soil moisture, which is directly affected by climate dynamics, with precipitation being the input in this system and various processes being the output, such as evapotranspiration, surface runoff, drainage, and percolation into groundwater. Changes in climate, especially the changes in precipitation and evapotranspiration predicted by climate models, will directly affect soil moisture, surface runoff, and groundwater recharge.
In areas with decreasing precipitation as predicted by the climate models, soil moisture may be substantially reduced. With this in mind, agriculture in most areas already needs irrigation, which depletes fresh water supplies both by the physical use of the water and the degradation agriculture causes to the water. Irrigation increases salt and nutrient content in areas that would not normally be affected, and damages streams and rivers from damming and removal of water. Fertilizer enters both human and livestock waste streams that eventually enter groundwater, while nitrogen, phosphorus, and other chemicals from fertilizer can acidify both soils and water.
Certain agricultural demands may increase more than others with an increasingly wealthier global population, and meat is one commodity expected to double global food demand by 2050, which directly affects the global supply of fresh water. Cows need water to drink, more if the temperature is high and humidity is low, and more if the production system is extensive, since finding food then takes more effort. Water is needed in the processing of the meat, and also in the production of feed for the livestock. Manure can contaminate bodies of freshwater, and slaughterhouses, depending on how well they are managed, contribute waste such as blood, fat, hair, and other bodily contents to supplies of fresh water.
The transfer of water from agricultural to urban and suburban use raises concerns about agricultural sustainability, rural socioeconomic decline, food security, an increased carbon footprint from imported food, and decreased foreign trade balance. The depletion of fresh water, as applied to more specific and populated areas, increases fresh water scarcity among the population and also makes populations susceptible to economic, social, and political conflict in a number of ways: rising sea levels force migration from coastal areas to areas farther inland, pushing populations closer together and breaching borders and other geographical patterns, while agricultural surpluses and deficits arising from the availability of water induce trade problems for the economies of certain areas. Climate change is an important cause of involuntary migration and forced displacement. According to the Food and Agriculture Organization of the United Nations, global greenhouse gas emissions from animal agriculture exceed those from transportation.
Water management
Water management is the process of planning, developing, and managing water resources, across all water applications, in terms of both quantity and quality. Water management is supported and guided by institutions, infrastructure, incentives, and information systems.
The issue of the depletion of fresh water has stimulated increased efforts in water management. While water management systems are often flexible, adaptation to new hydrologic conditions may be very costly. Preventative approaches are necessary to avoid high costs of inefficiency and the need for rehabilitation of water supplies, and innovations to decrease overall demand may be important in planning water sustainability.
Water supply systems, as they exist now, were based on the assumptions of the current climate, and built to accommodate existing river flows and flood frequencies. Reservoirs are operated based on past hydrologic records, and irrigation systems on historical temperature, water availability, and crop water requirements; these may not be a reliable guide to the future. Re-examining engineering designs, operations, optimizations, and planning, as well as re-evaluating legal, technical, and economic approaches to managing water resources, is very important for the future of water management in response to water degradation. Another approach is water privatization; despite its economic and cultural effects, service quality and the overall quality of the water can be more easily controlled and distributed. A rational and sustainable approach is needed, requiring limits to overexploitation and pollution and efforts in conservation.
Consumption increases
As the world's population increases, so does its demand for natural resources. As the need for production increases, so does the damage to the environments and ecosystems in which those resources are found. According to United Nations population growth predictions, there could be up to 170 million more births by 2070. The need for fuel, energy, food, buildings, and water grows with the number of people on the planet.
Deforestation
As the need for new agricultural areas and road construction increases, deforestation continues. Deforestation is the removal of a forest or stand of trees from land that is converted to non-forest use. Since the 1960s, nearly 50% of tropical forests have been destroyed, and the process is not limited to tropical areas: Europe's forests are also being degraded by livestock, insects, diseases, invasive species, and other human activities. Much of the world's terrestrial biodiversity lives in the different types of forests, and clearing these areas for increased consumption directly decreases the biodiversity of plant and animal species native to them.
Along with destroying habitats and ecosystems, decreasing the world's forests adds carbon dioxide (CO2) to the atmosphere: removing forested areas reduces the number of carbon reservoirs, leaving only the largest ones, the atmosphere and the oceans. While one of the biggest drivers of deforestation is agricultural use for the world's food supply, removing trees from landscapes also increases erosion rates, making it harder to produce crops in those soils.
See also
Anthropocene
Environmental change
Environmental issues
Ecological collapse
Ecological crisis
Ecologically sustainable development
Eco-socialism
Exploitation of natural resources
Human impact on the environment
I=PAT
Restoration ecology
United Nations Decade on Biodiversity
United Nations Development Programme (UNDP)
United Nations Environment Programme (UNEP)
World Resources Institute (WRI)
References
External links
Ecology of Increasing Disease Population growth and environmental degradation
"Reintegrating Land and Livestock." Union of Concerned Scientists
"Deforestation and Forest Degradation." IUCN, 7 July 2022.
Environmental Change in the Kalahari: Integrated Land Degradation Studies for Nonequilibrium Dryland Environments in the Annals of the Association of American Geographers
Public Daily Brief Threat: Environmental Degradation
Focus: Environmental degradation is contributing to health threats worldwide
Environmental Degradation of Materials in Nuclear Systems-Water Reactors
Herndon and Gibbon, Lieutenants, United States Navy: The First North American Explorers of the Amazon Valley, by historian Normand E. Klare. Actual reports from the explorers are compared with present Amazon Basin conditions.
World Population Prospects - Population Division - United Nations.
Environmental Degradation Index by Jha & Murthy (for 174 countries)
Regulation
Regulation is the management of complex systems according to a set of rules and trends. In systems theory, these types of rules exist in various fields of biology and society, but the term has slightly different meanings according to context. For example:
in government, regulation (or its plural) typically refers to the delegated legislation adopted to enforce primary legislation, including land-use regulation;
in economics: regulatory economics;
in finance: financial regulation;
in business, industry self-regulation occurs through self-regulatory organizations and trade associations which allow industries to set and enforce rules with less government involvement; and
in biology, gene regulation and metabolic regulation allow living organisms to adapt to their environment and maintain homeostasis;
in psychology, self-regulation theory is the study of how individuals regulate their thoughts and behaviors to reach goals.
Forms
Regulation in the social, political, psychological, and economic domains can take many forms: legal restrictions promulgated by a government authority, contractual obligations (for example, contracts between insurers and their insureds), self-regulation in psychology, social regulation (e.g. norms), co-regulation, third-party regulation, certification, accreditation or market regulation.
State-mandated regulation is government intervention in the private market in an attempt to implement policy and produce outcomes which might not otherwise occur, ranging from consumer protection to faster growth or technological advancement.
The regulations may prescribe or proscribe conduct ("command-and-control" regulation), calibrate incentives ("incentive" regulation), or change preferences ("preferences shaping" regulation). Common examples of regulation include limits on environmental pollution, laws against child labor and other employment regulations, minimum wage laws, regulations requiring truthful labelling of the ingredients in food and drugs, food and drug safety regulations establishing minimum standards of testing and quality for what can be sold, and zoning and development approvals regulation. Much less common are controls on market entry, or price regulation.
One critical question in regulation is whether the regulator or government has sufficient information to make ex-ante regulation more efficient than ex-post liability for harm and whether industry self-regulation might be preferable. The economics of imposing or removing regulations relating to markets is analysed in empirical legal studies, law and economics, political science, environmental science, health economics, and regulatory economics.
Power to regulate should include the power to enforce regulatory decisions. Monitoring is an important tool used by national regulatory authorities in carrying out the regulated activities.
In some countries (in particular the Scandinavian countries) industrial relations are to a very high degree regulated by the labour market parties themselves (self-regulation) in contrast to state regulation of minimum wages etc.
Measurement
Regulation can be assessed for different countries through various quantitative measures. The Global Indicators of Regulatory Governance by the World Bank's Global Indicators Group scores 186 countries on transparency around proposed regulations, consultation on their content, the use of regulatory impact assessments, and access to enacted laws, on a scale from 0 to 5. The V-Dem Democracy indices include a regulatory quality indicator. The QuantGov project at the Mercatus Center tracks the count of regulations by topic for the United States, Canada, and Australia.
History
Regulation of businesses existed in the ancient early Egyptian, Indian, Greek, and Roman civilizations. Standardized weights and measures existed to an extent in the ancient world, and gold may have operated to some degree as an international currency. In China, a national currency system existed and paper currency was invented. Sophisticated law existed in Ancient Rome. In the European Early Middle Ages, law and standardization declined with the Roman Empire, but regulation existed in the form of norms, customs, and privileges; this regulation was aided by the unified Christian identity and a sense of honor regarding contracts.
Modern industrial regulation can be traced to the Railway Regulation Act 1844 in the United Kingdom, and succeeding Acts. Beginning in the late 19th and 20th centuries, much of regulation in the United States was administered and enforced by regulatory agencies which produced their own administrative law and procedures under the authority of statutes. Legislators created these agencies to require experts in the industry to focus their attention on the issue. At the federal level, one of the earliest institutions was the Interstate Commerce Commission which had its roots in earlier state-based regulatory commissions and agencies. Later agencies include the Federal Trade Commission, Securities and Exchange Commission, Civil Aeronautics Board, and various other institutions. These institutions vary from industry to industry and at the federal and state level. Individual agencies do not necessarily have clear life-cycles or patterns of behavior, and they are influenced heavily by their leadership and staff as well as the organic law creating the agency. In the 1930s, lawmakers believed that unregulated business often led to injustice and inefficiency; in the 1960s and 1970s, concern shifted to regulatory capture, which led to extremely detailed laws creating the United States Environmental Protection Agency and Occupational Safety and Health Administration.
See also
Regulatory economics
Regulatory state
Regulatory capture
Deregulation
References
External links
Centre on Regulation in Europe (CERRE)
New Perspectives on Regulation (2009) and Government and Markets: Toward a New Theory of Regulation (2009)
US/Canadian Regulatory Cooperation: Schmitz on Lessons from the European Union, Canadian Privy Council Office Commissioned Study
A Comparative Bibliography: Regulatory Competition on Corporate Law
Wikibooks
Legal and Regulatory Issues in the Information Economy
Lawrence A. Cunningham, A Prescription to Retire the Rhetoric of 'Principles-Based Systems' in Corporate Law, Securities Regulation and Accounting (2007)
Economics of regulation
Public policy
Questionnaire
A questionnaire is a research instrument that consists of a set of questions (or other types of prompts) for the purpose of gathering information from respondents through a survey or statistical study. A research questionnaire is typically a mix of close-ended questions and open-ended questions. Open-ended, long-form questions offer the respondent the ability to elaborate on their thoughts. The research questionnaire was developed by the Statistical Society of London in 1838.
Although questionnaires are often designed for statistical analysis of the responses, this is not always the case.
Questionnaires have advantages over some other types of survey tools in that they are cheap, do not require as much effort from the questioner as verbal or telephone surveys, and often have standardized answers that make it simple to compile data. However, such standardized answers may frustrate users, as the possible answers may not accurately represent their desired responses. Questionnaires are also sharply limited by the fact that respondents must be able to read the questions and respond to them. Thus, for some demographic groups, conducting a survey by questionnaire may not be practical.
History
One of the earliest questionnaires was Dean Milles' Questionnaire of 1753.
Types
A distinction can be made between questionnaires with questions that measure separate variables, and questionnaires with questions that are aggregated into either a scale or index.
Questionnaires with questions that measure separate variables, could, for instance, include questions on:
preferences (e.g. political party)
behaviors (e.g. food consumption)
facts (e.g. gender)
Questionnaires with questions that are aggregated into either a scale or index include for instance questions that measure:
latent traits
attitudes (e.g. towards immigration)
an index (e.g. Social Economic Status)
Examples
A food frequency questionnaire (FFQ) is a questionnaire designed to assess the type of diet a person consumes, and may be used as a research instrument. Examples of usage include assessment of intake of vitamins or of toxins such as acrylamide.
Questionnaire construction
Question type
Usually, a questionnaire consists of a number of questions (test items) that the respondent has to answer in a set format. A distinction is made between open-ended and closed-ended questions. An open-ended question asks the respondent to formulate his own answer, whereas a closed-ended question asks the respondent to pick an answer from a given number of options. The response options for a closed-ended question should be exhaustive and mutually exclusive. Four types of response scales for closed-ended questions are distinguished:
Dichotomous, where the respondent has two options. The dichotomous question is generally a "yes/no" closed-ended question, usually used where simple validation is needed. It is the most natural form of a questionnaire.
Nominal-polytomous, where the respondent has more than two unordered options. The nominal scale, also called the categorical variable scale, is defined as a scale used for labeling variables into distinct classifications and does not involve a quantitative value or order.
Ordinal-polytomous, where the respondent has more than two ordered options.
(Bounded) continuous, where the respondent is presented with a continuous scale.
A respondent's answer to an open-ended question is coded into a response scale afterward. An example of an open-ended question is a question where the testee has to complete a sentence (sentence completion item).
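To make the distinction concrete, the following is a minimal, hypothetical Python sketch of how the four closed-ended response-scale types above might be represented as data; the question wordings and field names are invented for illustration rather than taken from any survey package.

```python
# Hypothetical data structures for the four closed-ended response scales.
from dataclasses import dataclass, field

@dataclass
class ClosedQuestion:
    text: str
    scale: str                                    # "dichotomous", "nominal", "ordinal", "continuous"
    options: list = field(default_factory=list)   # empty for continuous scales
    bounds: tuple | None = None                   # (low, high) for bounded continuous scales

questions = [
    ClosedQuestion("Do you own a car?", "dichotomous", ["yes", "no"]),
    ClosedQuestion("Which fruit do you prefer?", "nominal", ["apple", "pear", "plum"]),
    ClosedQuestion("How satisfied are you?", "ordinal",
                   ["very dissatisfied", "neutral", "very satisfied"]),
    ClosedQuestion("Rate your mood from 0 to 100.", "continuous", bounds=(0, 100)),
]

for q in questions:
    print(q.scale, "-", q.text)
```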
Question sequence
In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific.
There typically is a flow that should be followed when constructing a questionnaire in regards to the order that the questions are asked. The order is as follows:
Screens
Warm-ups
Transitions
Skips
Difficult
Classification
Screens are used as a screening method to find out early whether or not someone should complete the questionnaire.
Warm-ups are simple to answer, help capture interest in the survey, and may not even pertain to research objectives.
Transition questions are used to make different areas flow well together.
Skips include questions similar to "If yes, then answer question 3. If no, then continue to question 5."
Difficult questions are towards the end because the respondent is in "response mode." Also, when completing an online questionnaire, a progress bar lets the respondent know that they are almost done, so they are more willing to answer the more difficult questions.
Classification, or demographic, questions should be at the end because they typically feel like personal questions, which can make respondents uncomfortable and unwilling to finish the survey.
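As an illustration of how skips work in practice, here is a minimal, hypothetical Python sketch of branching of the form "If yes, then answer question 3. If no, then continue to question 5." The questions and numbering are invented for the example.

```python
# Hypothetical skip logic: each entry maps a question number to its text
# and to the question that each answer skips to.
flow = {
    1: ("Do you drive to work? (yes/no)", {"yes": 3, "no": 5}),
    3: ("How long is your commute, in minutes?", {}),  # reached only on "yes"
    5: ("Do you work from home? (yes/no)", {}),        # reached only on "no"
}

current = 1
while current in flow:
    text, skips = flow[current]
    answer = input(text + " ").strip().lower()
    current = skips.get(answer, current + 1)  # follow a skip, or fall through
```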
Basic rules for questionnaire item construction
Use statements that are interpreted in the same way by members of different subpopulations of the population of interest.
Use statements where persons that have different opinions or traits will give different answers.
Think of having an "open" answer category after a list of possible answers.
Use only one aspect of the construct you are interested in per item.
Use positive statements and avoid negatives or double negatives.
Do not make assumptions about the respondent.
Use clear and comprehensible wording, easily understandable for all educational levels.
Use correct spelling, grammar and punctuation.
Avoid items that contain more than one question per item (e.g. Do you like strawberries and potatoes?).
Question should not be biased or even leading the participant towards an answer.
Incorporate question formats such as MaxDiff and conjoint to help collect actionable data.
Multi-item scales
Within social science research and practice, questionnaires are most frequently used to collect quantitative data using multi-item scales with the following characteristics:
Multiple statements or questions (minimum ≥3; usually ≥5) are presented for each variable being examined.
Each statement or question has an accompanying set of equidistant response-points (usually 5–7).
Each response point has an accompanying verbal anchor (e.g., "strongly agree") ascending from left to right.
Verbal anchors should be balanced to reflect equal intervals between response-points.
Collectively, a set of response-points and accompanying verbal anchors are referred to as a rating scale. One very frequently-used rating scale is a Likert scale.
Usually, for clarity and efficiency, a single set of anchors is presented for multiple rating scales in a questionnaire.
Collectively, a statement or question with an accompanying rating scale is referred to as an item.
When multiple items measure the same variable in a reliable and valid way, they are collectively referred to as a multi-item scale, or a psychometric scale.
The following types of reliability and validity should be established for a multi-item scale: internal reliability, test-retest reliability (if the variable is expected to be stable over time), content validity, construct validity, and criterion validity.
Factor analysis is used in the scale development process.
Questionnaires used to collect quantitative data usually comprise several multi-item scales, together with an introductory and concluding section.
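Internal reliability, mentioned above, is commonly estimated with Cronbach's alpha, defined for a k-item scale as alpha = k/(k-1) * (1 - (sum of item variances) / (variance of the scale total)). The following is a minimal Python sketch of that formula; the ratings are made up purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five respondents answering a three-item scale on 1-5 rating scales.
ratings = np.array([[4, 5, 4],
                    [2, 2, 3],
                    [5, 4, 5],
                    [3, 3, 3],
                    [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))
```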
Questionnaire administration modes
Main modes of questionnaire administration include:
Face-to-face questionnaire administration, where an interviewer presents the items orally.
Paper-and-pencil questionnaire administration, where the items are presented on paper.
Computerized questionnaire administration, where the items are presented on the computer.
Adaptive computerized questionnaire administration, where a selection of items is presented on the computer, and based on the answers on those items, the computer selects the following items optimized for the testee's estimated ability or trait.
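The following toy Python sketch conveys only the adaptive idea: present the unused item whose difficulty is closest to the current ability estimate, then nudge the estimate after each answer. Real adaptive questionnaires use item response theory rather than this crude fixed-step update, and the item names and difficulty values here are invented.

```python
# Hypothetical item pool: each item has a difficulty on an arbitrary scale.
items = {"q1": -1.0, "q2": 0.0, "q3": 1.0, "q4": 2.0}

def next_item(ability: float, pool: dict) -> str:
    # Choose the item whose difficulty is closest to the current estimate.
    return min(pool, key=lambda q: abs(pool[q] - ability))

ability, step = 0.0, 0.5
while items:
    q = next_item(ability, items)
    correct = input(q + " answered correctly? (y/n) ").strip() == "y"
    ability += step if correct else -step  # nudge the estimate up or down
    del items[q]
print("estimated ability:", ability)
```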
Questionnaire translation
Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. The process is not a mechanical word placement process. Best practice includes parallel translation, team discussions, and pretesting with real-life people, and is integrated in the model TRAPD (Translation, Review, Adjudication, Pretest, and Documentation). A theoretical framework is also provided by sociolinguistics, which states that to achieve the equivalent communicative effect as the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language.
Besides translators, a team approach is recommended in the questionnaire translation process to include subject-matter experts and persons helpful to the process. For example, even when project managers and researchers do not speak the language of the translation, they know the study objectives well and the intent behind the questions, and therefore have a key role in improving questionnaire translation.
Concerns with questionnaires
While questionnaires are inexpensive, quick, and easy to analyze, often the questionnaire can have more problems than benefits. For example, unlike interviews, the people conducting the research may never know whether the respondent understood the question being asked. Also, because the questions are so specific to what the researchers are asking, the information gained can be minimal. Often, questionnaires such as the Myers-Briggs Type Indicator give too few options to answer: respondents could endorse either option but must choose only one response. Questionnaires also produce very low return rates, whether they are mail or online questionnaires. The other problem associated with return rates is that the people who do return the questionnaire are often those with a very positive or a very negative viewpoint who want their opinion heard, while the people who are most likely unbiased either way typically do not respond because it is not worth their time.
One key concern with questionnaires is that they may contain quite large measurement errors. These errors can be random or systematic. Random errors are caused by unintended mistakes by respondents, interviewers, and/or coders. Systematic error can occur if there is a systematic reaction of the respondents to the scale used to formulate the survey question. Thus, the exact formulation of a survey question and its scale is crucial, since they affect the level of measurement error.
Further, if the questionnaires are not collected using sound sampling techniques, often the results can be non-representative of the population—as such a good sample is critical to getting representative results based on questionnaires.
See also
Survey methodology
Behavioral Risk Factor Surveillance System
Computer-assisted personal interviewing
Enterprise Feedback Management
Quantitative marketing research
Questionnaire construction
Structured interviewing
Web-based experiments
Position analysis questionnaire
Further reading
Foddy, W. H. (1994). Constructing questions for interviews and questionnaires: Theory and practice in social research (New ed.). Cambridge, UK: Cambridge University Press.
Gillham, B. (2008). Developing a questionnaire (2nd ed.). London, UK: Continuum International Publishing Group Ltd.
Mellenbergh, G. J. (2008). Chapter 10: Tests and questionnaires: Construction and administration. In H. J. Adèr & G. J. Mellenbergh (Eds.) (with contributions by D. J. Hand), Advising on research methods: A consultant's companion (pp. 211–234). Huizen, The Netherlands: Johannes van Kessel Publishing.
Mellenbergh, G. J. (2008). Chapter 11: Tests and questionnaires: Analysis. In H. J. Adèr & G. J. Mellenbergh (Eds.) (with contributions by D. J. Hand), Advising on research methods: A consultant's companion (pp. 235–268). Huizen, The Netherlands: Johannes van Kessel Publishing.
Munn, P., & Drever, E. (2004). Using questionnaires in small-scale research: A beginner's guide. Glasgow, Scotland: Scottish Council for Research in Education.
Oppenheim, A. N. (2000). Questionnaire design, interviewing and attitude measurement (New ed.). London, UK: Continuum International Publishing Group Ltd.
Robinson, M. A. (2018). Using multi-item psychometric scales for research and practice in human resource management. Human Resource Management, 57(3), 739–750. https://dx.doi.org/10.1002/hrm.21852 (open-access)
Questionnaires have also been classified into structured, unstructured, open-ended, closed-ended, mixed, and pictorial types.
References
External links
Harmonised questions from the UK Office for National Statistics
Hints for designing effective questionnaires - from the ERIC Clearinghouse on Assessment and Evaluation
Questionnaire construction
Types of polling
Microeconomics
Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the economy as a whole, which is studied in macroeconomics.
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results.
While microeconomics focuses on firms and individuals, macroeconomics focuses on the total of economic activity, dealing with the issues of growth, inflation, and unemployment, and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, i.e., based upon basic assumptions about micro-level behavior.
Assumptions and definitions
Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).
Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive.
The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible since there is no guarantee that the resulting utility function would be differentiable.
Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of LNS (local non-satiation), there is no guarantee that spending more of the budget raises utility, so the budget constraint need not bind at the consumer's optimum. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed.
The utility maximization problem is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists to not only explain what or how individuals make choices but why individuals make choices as well.
The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists: since the budget set is both bounded and closed (that is, compact) and utility is continuous, a solution exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.
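As a worked illustration, the following Python sketch solves the utility maximization problem numerically for a Cobb-Douglas utility function and checks the result against the closed-form Walrasian demand. The parameter values are illustrative, and the use of SciPy is an implementation choice, not part of the theory.

```python
# Maximize u(x1, x2) = x1**a * x2**(1-a) subject to p1*x1 + p2*x2 <= w.
from scipy.optimize import minimize

a, p1, p2, w = 0.3, 2.0, 5.0, 100.0

res = minimize(
    lambda x: -(x[0] ** a * x[1] ** (1 - a)),  # minimize negative utility
    x0=[1.0, 1.0],
    constraints={"type": "ineq", "fun": lambda x: w - p1 * x[0] - p2 * x[1]},
    bounds=[(1e-9, None), (1e-9, None)],
)
print(res.x)                          # numerical Walrasian demand
print(a * w / p1, (1 - a) * w / p2)   # closed form: x1* = a*w/p1, x2* = (1-a)*w/p2
```

For Cobb-Douglas preferences the numerical solution should approach the textbook demands x1* = 15 and x2* = 14 at these prices and wealth.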
The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as primitive. This model of microeconomic theory is referred to as revealed preference theory.
The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them has the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers can influence prices, and a more sophisticated analysis is then required to model the demand-supply relationship for a good. However, the theory works well in situations meeting these assumptions.
Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed.
This is studied in the field of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the Utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics (microeconomics) is limited in implications without mixing the belief of the economist and their theory.
The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.
Allocation of scarce resources
Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce, weighing the costs of labor, materials, and capital against potential profit margins. Consumers choose the goods and services they want that will maximize their happiness, taking into account their limited wealth.
The government can make these allocation decisions, or they can be made independently by consumers and firms. For example, in the former Soviet Union, the government informed car manufacturers which cars to produce and determined which consumers would gain access to a car.
History
Economists commonly consider themselves microeconomists or macroeconomists. The difference between microeconomics and macroeconomics likely was introduced in 1933 by the Norwegian economist Ragnar Frisch, the co-recipient of the first Nobel Memorial Prize in Economic Sciences in 1969. However, Frisch did not actually use the word "microeconomics", instead drawing distinctions between "micro-dynamic" and "macro-dynamic" analysis in a way similar to how the words "microeconomics" and "macroeconomics" are used today. The first known use of the term "microeconomics" in a published article was from Pieter de Wolff in 1941, who broadened the term "micro-dynamics" into "microeconomics".
Microeconomic theory
Consumer demand theory
Consumer demand theory relates preferences for the consumption of both goods and services to the consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption and the demand curve is one of the most closely studied relations in economics. It is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints.
Production theory
Production theory is the study of production, or the economic process of converting inputs into outputs. Production uses resources to create a good or service that is suitable for use, gift-giving in a gift economy, or exchange in a market economy. This can include manufacturing, storing, shipping, and packaging. Some economists define production broadly as all economic activity other than consumption. They see every commercial activity other than the final purchase as some form of production.
Cost-of-production theory of value
The cost-of-production theory of value states that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production (including labor, capital, or land) and taxation. Technology can be viewed either as a form of fixed capital (e.g. an industrial plant) or circulating capital (e.g. intermediate goods).
In the mathematical model for the cost of production, the short-run total cost is equal to fixed cost plus total variable cost. The fixed cost refers to the cost that is incurred regardless of how much the firm produces. The variable cost is a function of the quantity of an object being produced. The cost function can be used to characterize production through the duality theory in economics, developed mainly by Ronald Shephard (1953, 1970) and other scholars (Sickles & Zelenyuk, 2019, ch. 2).
Fixed and variable costs
Fixed cost (FC) – This cost does not change with output. It includes business expenses such as rent, salaries and utility bills.
Variable cost (VC) – This cost changes as output changes. This includes raw materials, delivery costs and production supplies.
Over a short time period (a few months), most costs are fixed, as the firm must pay for salaries, contracted shipments, and the materials used to produce various goods. Over a longer time period (2–3 years), costs can become variable: firms can decide to reduce output, purchase fewer materials, and even sell some machinery. Over 10 years, most costs become variable, as workers can be laid off or new machinery can be bought to replace the old machinery.
Sunk costs – This is a fixed cost that has already been incurred and cannot be recovered. An example is R&D in the pharmaceutical industry: hundreds of millions of dollars are spent in pursuit of new drug breakthroughs, but this is challenging because it is increasingly harder to find new breakthroughs and to meet tighter regulatory standards, so many projects are written off, leading to losses of millions of dollars.
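The short-run identities above, total cost = fixed cost + variable cost, with average and marginal cost derived from them, can be made concrete with a small sketch. The quadratic variable-cost function and the numbers below are purely illustrative.

```python
FC = 100.0                            # fixed cost: paid regardless of output

def variable_cost(q):                 # variable cost: rises with output
    return 2 * q + 0.5 * q ** 2

def total_cost(q):
    return FC + variable_cost(q)

def average_cost(q):
    return total_cost(q) / q

def marginal_cost(q, dq=1e-6):        # numerical derivative of total cost
    return (total_cost(q + dq) - total_cost(q)) / dq

for q in (1, 5, 10):
    print(q, total_cost(q), round(average_cost(q), 2), round(marginal_cost(q), 2))
```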
Opportunity cost
Opportunity cost is closely related to the idea of time constraints. One can do only one thing at a time, which means that, inevitably, one is always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing one may have done instead. Opportunity cost depends only on the value of the next-best alternative. It does not matter whether one has five alternatives or 5,000.
Opportunity costs can tell when not to do something as well as when to do something. For example, one may like waffles, but like chocolate even more. If someone offers only waffles, one would take it. But if offered waffles or chocolate, one would take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefits of eating the waffles, it makes no sense to choose waffles. Of course, if one chooses chocolate, they are still faced with the opportunity cost of giving up having waffles. But one is willing to do that because the waffle's opportunity cost is lower than the benefits of the chocolate. Opportunity costs are unavoidable constraints on behavior because one has to decide what's best and give up the next-best alternative.
Price theory
Microeconomics is also known as price theory to highlight the significance of prices in relation to buyers and sellers, as these agents determine prices through their individual actions. Price theory is a field of economics that uses the supply and demand framework to explain and predict human behavior. It is associated with the Chicago School of Economics. Price theory studies competitive equilibrium in markets to yield testable hypotheses that can be rejected.
Price theory is not the same as microeconomics. Strategic behavior, such as the interactions among sellers in a market where they are few, is a significant part of microeconomics but is not emphasized in price theory. Price theorists focus on competition believing it to be a reasonable description of most markets that leaves room to study additional aspects of tastes and technology. As a result, price theory tends to use less game theory than microeconomics does.
Price theory focuses on how agents respond to prices, but its framework can be applied to a wide variety of socioeconomic issues that might not seem to involve prices at first glance. Price theorists have influenced several other fields including developing public choice theory and law and economics. Price theory has been applied to issues previously thought of as outside the purview of economics such as criminal justice, marriage, and addiction.
Microeconomic models
Supply and demand
Supply and demand is an economic model of price determination in a perfectly competitive market. It concludes that in a perfectly competitive market with no externalities, per unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium.
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors of inputs of production are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
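In the simplest linear case, the equilibrium can be computed directly: with demand Qd = a - b*P and supply Qs = c + d*P, setting Qd = Qs gives P* = (a - c) / (b + d). A minimal Python sketch with illustrative coefficients:

```python
a, b = 120.0, 2.0   # demand intercept and slope: Qd = a - b*P
c, d = 20.0, 3.0    # supply intercept and slope: Qs = c + d*P

p_star = (a - c) / (b + d)   # price where quantity demanded equals quantity supplied
q_star = a - b * p_star
print(p_star, q_star)        # 20.0 80.0

# Below p* demand exceeds supply (a shortage bids the price up); above p*
# supply exceeds demand (a surplus pushes the price down).
for p in (15.0, p_star, 25.0):
    print(p, "demand:", a - b * p, "supply:", c + d * p)
```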
For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium.
On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and over-time and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market.
Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing.
Other applications of demand and supply include the distribution of income among the factors of production, including labor and capital, through factor markets. In a competitive labor market for example the quantity of labor employed and the price of labor (the wage rate) depends on the demand for labor (from employers for production) and supply of labor (from potential workers). Labor economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labor income, labor mobility, and (un)employment, productivity through human capital, and related public-policy issues.
Demand-and-supply analysis is used to explain the behavior of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources.
Market structure
Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems.
Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning.
Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. One example is building codes: in a market regulated purely by competition, several serious injuries or deaths might be required before companies began improving structural safety, since consumers may at first be unaware of safety issues and companies would be reluctant to cut into profits by providing safety features that customers do not yet demand.
The concept of "market type" is different from the concept of "market structure". Nevertheless, there are a variety of types of markets.
The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production, specifically the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm.
Perfect competition
Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are "price takers" (they do not have enough market power to profitably increase the price of their goods or services). A good example would be that of digital marketplaces, such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products that are being sold in this market.
Imperfect competition
Imperfect competition is a type of market structure showing some but not all features of competitive markets. In perfect competition, market power is not achievable because the large number of producers creates a high level of competition; prices are therefore driven down to the level of marginal cost. In a monopoly, market power is achieved by one firm, leading to prices higher than the marginal cost level.
Between these two types of markets are firms that are neither perfectly competitive nor monopolistic. Firms such as Pepsi and Coke in the cola industry, and Sony, Nintendo, and Microsoft in the video game industry, dominate their respective markets. These firms are in imperfect competition.
Monopolistic competition
Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities.
Monopoly
A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service. Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are a bad thing, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies).
Natural monopoly: A monopoly in an industry where one producer can produce output at a lower cost than many small producers.
Oligopoly
An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition leading to higher prices for consumers and less overall market output. Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns.
Duopoly: A special case of an oligopoly, with only two firms. Game theory can elucidate behavior in duopolies and oligopolies.
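As a worked illustration of game theory in a duopoly, the following Python sketch computes the Cournot-Nash equilibrium for two identical firms facing linear inverse demand P = A - (q1 + q2) with constant marginal cost c. Each firm's best response is q_i = (A - c - q_j) / 2, and iterating best responses converges to q* = (A - c) / 3 for each firm. The demand and cost parameters are invented for the example.

```python
A, c = 100.0, 10.0   # inverse demand intercept and marginal cost

def best_response(q_other):
    # Firm i maximizes (A - q_i - q_other - c) * q_i, giving this response.
    return max(0.0, (A - c - q_other) / 2)

q1 = q2 = 0.0
for _ in range(50):                # iterate best responses to a fixed point
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2, (A - c) / 3)         # both quantities approach 30.0
```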
Monopsony
A monopsony is a market where there is only one buyer and many sellers.
Bilateral monopoly
A bilateral monopoly is a market consisting of both a monopoly (a single seller) and a monopsony (a single buyer).
Oligopsony
An oligopsony is a market where there are a few buyers and many sellers.
Game theory
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. The term "game" here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
Information economics
Information economics is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics: it is easy to create but hard to trust, easy to spread but hard to control, and it influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories. The economics of information has recently become of great interest to many, possibly due to the rise of information-based companies in the technology industry. From a game theory approach, the usual assumption that agents have complete information can be loosened to examine the consequences of having incomplete information. This gives rise to many results that are applicable to real-life situations. For example, loosening this assumption makes it possible to scrutinize the actions of agents in situations of uncertainty, and to more fully understand the impacts, both positive and negative, of agents seeking out or acquiring information.
Applied
Applied microeconomics includes a range of specialized areas of study, many of which draw on methods from other fields.
Economic history examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science.
Education economics examines the organization of education provision and its implication for efficiency and equity, including the effects of education on productivity.
Financial economics examines topics such as the structure of optimal portfolios, the rate of return to capital, econometric analysis of security returns, and corporate financial behavior.
Health economics examines the organization of health care systems, including the role of the health care workforce and health insurance programs.
Industrial organization examines topics such as the entry and exit of firms, innovation, and the role of trademarks.
Law and economics applies microeconomic principles to the selection and enforcement of competing legal regimes and their relative efficiencies.
Political economy examines the role of political institutions in determining policy outcomes.
Public economics examines the design of government tax and expenditure policies and economic effects of these policies (e.g., social insurance programs).
Urban economics, which examines the challenges faced by cities, such as sprawl, air and water pollution, traffic congestion, and poverty, draws on the fields of urban geography and sociology.
Labor economics examines primarily labor markets, but comprises a large range of public policy issues such as immigration, minimum wages, or inequality.
See also
Macroeconomics
First-order approach
Critique of political economy
References
Further reading
Bouman, John: Principles of Microeconomics – free fully comprehensive Principles of Microeconomics and Macroeconomics texts. Columbia, Maryland, 2011
Colander, David. Microeconomics. McGraw-Hill Paperback, 7th ed.: 2008.
Eaton, B. Curtis; Eaton, Diane F.; and Douglas W. Allen. Microeconomics. Prentice Hall, 5th ed.: 2002.
Frank, Robert H.; Microeconomics and Behavior. McGraw-Hill/Irwin, 6th ed.: 2006.
Friedman, Milton. Price Theory. Aldine Transaction: 1976
Hagendorf, Klaus: Labour Values and the Theory of the Firm. Part I: The Competitive Firm. Paris: EURODOS; 2009.
Hicks, John R. Value and Capital. Clarendon Press. [1939] 1946, 2nd ed.
Hirshleifer, Jack., Glazer, Amihai, and Hirshleifer, David, Price theory and applications: Decisions, markets, and information. Cambridge University Press, 7th ed.: 2005.
Jaffe, Sonia; Minton, Robert; Mulligan, Casey B.; and Murphy, Kevin M.: Chicago Price Theory. Princeton University Press, 2019
Jehle, Geoffrey A.; and Philip J. Reny. Advanced Microeconomic Theory. Addison Wesley Paperback, 2nd ed.: 2000.
Katz, Michael L.; and Harvey S. Rosen. Microeconomics. McGraw-Hill/Irwin, 3rd ed.: 1997.
Kreps, David M. A Course in Microeconomic Theory. Princeton University Press: 1990
Landsburg, Steven. Price Theory and Applications. South-Western College Pub, 5th ed.: 2001.
Mankiw, N. Gregory. Principles of Microeconomics. South-Western Pub, 2nd ed.: 2000.
Mas-Colell, Andreu; Whinston, Michael D.; and Jerry R. Green. Microeconomic Theory. Oxford University Press, US: 1995.
McGuigan, James R.; Moyer, R. Charles; and Frederick H. Harris. Managerial Economics: Applications, Strategy and Tactics. South-Western Educational Publishing, 9th ed.: 2001.
Nicholson, Walter. Microeconomic Theory: Basic Principles and Extensions. South-Western College Pub, 8th ed.: 2001.
Perloff, Jeffrey M. Microeconomics. Pearson – Addison Wesley, 4th ed.: 2007.
Perloff, Jeffrey M. Microeconomics: Theory and Applications with Calculus. Pearson – Addison Wesley, 1st ed.: 2007
Pindyck, Robert S.; and Daniel L. Rubinfeld. Microeconomics. Prentice Hall, 7th ed.: 2008.
Ruffin, Roy J.; and Paul R. Gregory. Principles of Microeconomics. Addison Wesley, 7th ed.: 2000.
Varian, Hal R. (1987). "microeconomics," The New Palgrave: A Dictionary of Economics, v. 3, pp. 461–463.
Varian, Hal R. Intermediate Microeconomics: A Modern Approach. W. W. Norton & Company, 8th ed.: 2009.
Varian, Hal R. Microeconomic Analysis. W.W. Norton & Company, 3rd ed.: 1992.
The Economic Times (2023). "What is Microeconomics". https://economictimes.indiatimes.com/definition/microeconomics.
External links
X-Lab: A Collaborative Micro-Economics and Social Sciences Research Laboratory
Simulations in Microeconomics
A brief history of microeconomics
Money
Environment
Environment most often refers to:
Natural environment/Biophysical environment, referring respectively to all living and non-living things occurring naturally and the physical and biological factors along with their chemical interactions that affect an organism or a group of organisms
Other physical and cultural environments
Ecology, the branch of biology that deals with the relations of organisms to one another and to their physical surroundings
Environment (systems), the surroundings of a physical system that may interact with the system by exchanging mass, energy, or other properties.
Built environment, constructed surroundings that provide the settings for human activity, ranging from the large-scale civic surroundings to the personal places
Social environment, the culture that an individual lives in, and the people and institutions with whom they interact
Market environment, business term
Arts, entertainment and publishing
Environment (magazine), a peer-reviewed, popular environmental science publication founded in 1958
Environment (1917 film), 1917 American silent film
Environment (1922 film), 1922 American silent film
Environment (1927 film), 1927 Australian silent film
Environments (album series), a series of LPs, cassettes and CDs depicting natural sounds
Environments (album), a 2007 album by The Future Sound of London
"Environment", a song by Dave from Psychodrama
Environments (journal), a scientific journal
In computing
Environment (type theory), the association between variable names and data types in type theory
Deployment environment, in software deployment, a computer system in which a computer program or software component is deployed and executed
Runtime environment, a virtual machine state which provides software services for processes or programs while a computer is running
Environment variable, a variable capable of affecting the way processes behave on a computer
See also
Environmentalism, a broad philosophy, ideology, and social movement regarding concerns for environmental protection
Environmental disease
Environmental health
Environmental science
Environmental history of the United States
Environmental issues, disruptions in the usual function of ecosystems
Systems ecology
Systems ecology is an interdisciplinary field of ecology, a subset of Earth system science, that takes a holistic approach to the study of ecological systems, especially ecosystems. Systems ecology can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
Overview
Systems ecology seeks a holistic view of the interactions and transactions within and between biological and ecological systems. Systems ecologists realise that the function of any ecosystem can be influenced by human economics in fundamental ways. They have therefore taken an additional transdisciplinary step by including economics in the consideration of ecological-economic systems. In the words of R.L. Kitching:
Systems ecology can be defined as the approach to the study of ecology of organisms using the techniques and philosophy of systems analysis: that is, the methods and tools developed, largely in engineering, for studying, characterizing and making predictions about complex entities, that is, systems.
In any study of an ecological system, an essential early procedure is to draw a diagram of the system of interest ... diagrams indicate the system's boundaries by a solid line. Within these boundaries, series of components are isolated which have been chosen to represent that portion of the world in which the systems analyst is interested ... If there are no connections across the systems' boundaries with the surrounding systems environments, the systems are described as closed. Ecological work, however, deals almost exclusively with open systems.
As a mode of scientific enquiry, a central feature of Systems Ecology is the general application of the principles of energetics to all systems at any scale. Perhaps the most notable proponent of this view was Howard T. Odum, sometimes considered the father of ecosystems ecology. In this approach the principles of energetics constitute ecosystem principles. Reasoning by formal analogy from one system to another enables the Systems Ecologist to see principles functioning in an analogous manner across system-scale boundaries. H.T. Odum commonly used the Energy Systems Language as a tool for making systems diagrams and flow charts.
One of these energetic principles, the principle of maximum power efficiency, takes central place in the analysis and synthesis of ecological systems. It suggests that the most evolutionarily advantageous system function occurs when the environmental load matches the internal resistance of the system; the further the environmental load is from matching the internal resistance, the further the system is from its sustainable steady state. The systems ecologist therefore engages in a task of resistance and impedance matching in ecological engineering, just as the electronic engineer would do.
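The electrical analogy invoked here can be made explicit: for a source of voltage V with internal resistance R_int, the power delivered to a load R_L is P = V^2 * R_L / (R_int + R_L)^2, which peaks exactly when R_L = R_int. A minimal Python sketch with illustrative values:

```python
V, R_int = 10.0, 50.0   # source voltage (volts) and internal resistance (ohms)

def load_power(R_L):
    # Power dissipated in the load of a simple two-resistor circuit.
    return V ** 2 * R_L / (R_int + R_L) ** 2

for R_L in (10.0, 50.0, 250.0):
    print(R_L, round(load_power(R_L), 3))   # maximum at R_L == R_int == 50
```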
Closely related fields
Deep ecology
Deep ecology is an ideology whose metaphysical underpinnings are deeply concerned with the science of ecology. The term was coined by Arne Naess, a Norwegian philosopher, Gandhian scholar, and environmental activist. He argues that the prevailing approach to environmental management is anthropocentric, and that the natural environment is not only "more complex than we imagine, it is more complex than we can imagine." Naess formulated deep ecology in a 1972 talk at an environmental conference in Bucharest, published in 1973.
Joanna Macy, John Seed, and others developed Naess' thesis into a branch they called experiential deep ecology. Their efforts were motivated by a need they perceived for the development of an "ecological self", which views the human ego as an integrated part of a living system that encompasses the individual. They sought to transcend altruism with a deeper self-interest based on biospherical equality beyond human chauvinism.
Earth systems engineering and management
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion".
Ecological economics
Ecological economics is a transdisciplinary field of academic research that addresses the dynamic and spatial interdependence between human economies and natural ecosystems. Ecological economics brings together and connects different disciplines, within the natural and social sciences but especially between these broad areas. As the name suggests, the field is made up of researchers with a background in economics and ecology. An important motivation for the emergence of ecological economics has been criticism of the assumptions and approaches of traditional (mainstream) environmental and resource economics.
Ecological energetics
Ecological energetics is the quantitative study of the flow of energy through ecological systems. It aims to uncover the principles which describe the propensity of such energy flows through the trophic, or 'energy availing' levels of ecological networks. In systems ecology the principles of ecosystem energy flows or "ecosystem laws" (i.e. principles of ecological energetics) are considered formally analogous to the principles of energetics.
Ecological humanities
Ecological humanities aims to bridge the divides between the sciences and the humanities, and between Western, Eastern and Indigenous ways of knowing nature. Like ecocentric political theory, the ecological humanities are characterised by a connectivity ontology and a commitment to two fundamental axioms relating to the need to submit to ecological laws and to see humanity as part of a larger living system.
Ecosystem ecology
Ecosystem ecology is the integrated study of biotic and abiotic components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals. Ecosystem ecology examines physical and biological structure and examines how these ecosystem characteristics interact.
The relationship between systems ecology and ecosystem ecology is complex. Much of systems ecology can be considered a subset of ecosystem ecology. Ecosystem ecology also utilizes methods that have little to do with the holistic approach of systems ecology. However, systems ecology more actively considers external influences such as economics that usually fall outside the bounds of ecosystem ecology. Whereas ecosystem ecology can be defined as the scientific study of ecosystems, systems ecology is more of a particular approach to the study of ecological systems and phenomena that interact with these systems.
Industrial ecology
Industrial ecology is the study of the shift of industrial processes from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to closed loop systems in which wastes become inputs for new processes.
See also
Agroecology
Earth system science
Ecosystem ecology
Ecological literacy
Emergy
Energy flow (ecology)
Energy Systems Language
Holism in science
Holon (philosophy)
Holistic management
Landscape ecology
Antireductionism
Biosemiotics
Ecosemiotics
MuSIASEM
References
Bibliography
Gregory Bateson, Steps to an Ecology of Mind, 2000.
Kenneth E. F. Watt (ed.), Systems Analysis in Ecology, Academic Press, 1966, 276 pp.
Efraim Halfon, Theoretical Systems Ecology: Advances and Case Studies, Academic Press, 1979.
J. W. Haefner, Modeling Biological Systems: Principles and Applications, London, UK, Chapman and Hall, 1996, 473 pp.
Richard F Johnston, Peter W Frank, Charles Duncan Michener, Annual Review of Ecology and Systematics, 1976, 307 pp.
Jorgensen, Sven E., "Introduction to Systems Ecology", CRC Press, 2012.
R.L. Kitching, Systems ecology, University of Queensland Press, 1983.
Howard T. Odum, Systems Ecology: An Introduction, Wiley-Interscience, 1983.
Howard T. Odum, Ecological and General Systems: An Introduction to Systems Ecology. University Press of Colorado, Niwot, CO, 1994.
Friedrich Recknagel, Applied Systems Ecology: Approach and Case Studies in Aquatic Ecology, 1989.
James Sanderson & Larry D. Harris, Landscape Ecology: A Top-down Approach, 2000, 246 pp.
Sheldon Smith, Human Systems Ecology: Studies in the Integration of Political Economy, 1989.
Shugart, H.H., O'Neill, R.V. (Eds.), Systems Ecology, Dowden, Hutchinson & Ross, Inc., 1979.
Van Dyne, George M., Ecosystems, Systems Ecology, and Systems Ecologists, ORNL-3975, Oak Ridge National Laboratory, Oak Ridge, TN, pp. 1–40, 1966.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 1, Academic Press, 1971.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 2, Academic Press, 1972.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 3, Academic Press, 1975.
Patten, Bernard C. (editor), "Systems Analysis and Simulation in Ecology", Volume 4, Academic Press, 1976.
External links
Organisations
Systems Ecology Department at Stockholm University.
Systems Ecology Department at the University of Amsterdam.
Systems ecology Lab at SUNY-ESF.
Systems Ecology program at the University of Florida.
Systems Ecology program at the University of Montana.
Terrestrial Systems Ecology of ETH Zürich.
Environmental science
Environmental social science
Formal sciences
Ecology
Biogeochemical cycle

A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere.
For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted into usable forms such as ammonia and nitrates through nitrogen fixation, carried out mainly by microorganisms, some living in symbiosis with plants. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can run off the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients.
There are biogeochemical cycles for many other elements, such as oxygen, hydrogen, phosphorus, calcium, iron, sulfur, mercury and selenium. There are also cycles for molecules, such as water and silica. In addition there are macroscopic cycles such as the rock cycle, and human-induced cycles for synthetic compounds such as polychlorinated biphenyls (PCBs). In some cycles there are geological reservoirs where substances can remain or be sequestered for long periods of time.
Biogeochemical cycles involve the interaction of biological, geological, and chemical processes. Biological processes include the influence of microorganisms, which are critical drivers of biogeochemical cycling. Microorganisms have the ability to carry out a wide range of metabolic processes essential for the cycling of nutrients and chemicals throughout global ecosystems. Without microorganisms many of these processes would not occur, with significant impact on the functioning of land and ocean ecosystems and the planet's biogeochemical cycles as a whole. Changes to cycles can impact human health. The cycles are interconnected and play important roles in regulating climate, supporting the growth of plants, phytoplankton and other organisms, and maintaining the health of ecosystems generally. Human activities such as burning fossil fuels and using large amounts of fertilizer can disrupt cycles, contributing to climate change, pollution, and other environmental problems.
Overview
Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules (carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur) take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of tectonic plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle.
The six aforementioned elements are used by organisms in a variety of ways. Hydrogen and oxygen are found in water and organic molecules, both of which are essential to life. Carbon is found in all organic molecules, whereas nitrogen is an important component of nucleic acids and proteins. Phosphorus is used to make nucleic acids and the phospholipids that comprise biological membranes. Sulfur is critical to the three-dimensional shape of proteins. The cycling of these elements is interconnected. For example, the movement of water is critical for leaching sulfur and phosphorus into rivers which can then flow into oceans. Minerals cycle through the biosphere between the biotic and abiotic components and from one organism to another.
Ecological systems (ecosystems) have many biogeochemical cycles operating as a part of the system, for example, the water cycle, the carbon cycle, the nitrogen cycle, etc. All chemical elements occurring in organisms are part of biogeochemical cycles. In addition to being a part of living organisms, these chemical elements also cycle through abiotic factors of ecosystems such as water (hydrosphere), land (lithosphere), and/or the air (atmosphere).
The living factors of the planet can be referred to collectively as the biosphere. All the nutrients — such as carbon, nitrogen, oxygen, phosphorus, and sulfur — used in ecosystems by living organisms are a part of a closed system; therefore, these chemicals are recycled instead of being lost and replenished constantly such as in an open system.
The major parts of the biosphere are connected by the flow of chemical elements and compounds in biogeochemical cycles. In many of these cycles, the biota plays an important role. Matter from the Earth's interior is released by volcanoes. The atmosphere exchanges some compounds and elements rapidly with the biota and oceans. Exchanges of materials between rocks, soils, and the oceans are generally slower by comparison.
Unlike matter, the flow of energy in an ecosystem is open: the Sun constantly gives the planet energy in the form of light, which is eventually used and lost as heat throughout the trophic levels of a food web. Carbon is used to make carbohydrates, fats, and proteins, the major sources of food energy. These compounds are oxidized to release carbon dioxide, which can be captured by plants to make organic compounds. The chemical reaction is powered by the light energy of sunshine.
Sunlight is required to combine carbon with hydrogen and oxygen into an energy source, but ecosystems in the deep sea, where no sunlight can penetrate, obtain energy from sulfur. Hydrogen sulfide near hydrothermal vents can be utilized by organisms such as the giant tube worm. In the sulfur cycle, sulfur can be forever recycled as a source of energy. Energy can be released through the oxidation and reduction of sulfur compounds (e.g., oxidizing elemental sulfur to sulfite and then to sulfate).
Although the Earth constantly receives energy from the Sun, its chemical composition is essentially fixed, as the additional matter is only occasionally added by meteorites. Because this chemical composition is not replenished like energy, all processes that depend on these chemicals must be recycled. These cycles include both the living biosphere and the nonliving lithosphere, atmosphere, and hydrosphere.
Biogeochemical cycles can be contrasted with geochemical cycles. The latter deals only with crustal and subcrustal reservoirs, even though some processes of the two overlap.
Compartments
Atmosphere
Hydrosphere
The global ocean covers more than 70% of the Earth's surface and is remarkably heterogeneous. Marine productive areas and coastal ecosystems comprise a minor fraction of the ocean in terms of surface area, yet have an enormous impact on global biogeochemical cycles carried out by microbial communities, which represent 90% of the ocean's biomass. Work in recent years has largely focused on cycling of carbon and macronutrients such as nitrogen, phosphorus, and silicate; other important elements such as sulfur or trace elements have been less studied, reflecting associated technical and logistical issues. Increasingly, these marine areas, and the taxa that form their ecosystems, are subject to significant anthropogenic pressure, impacting marine life and recycling of energy and nutrients. A key example is that of cultural eutrophication, where agricultural runoff leads to nitrogen and phosphorus enrichment of coastal ecosystems, greatly increasing productivity and resulting in algal blooms, deoxygenation of the water column and seabed, and increased greenhouse gas emissions, with direct local and global impacts on nitrogen and carbon cycles. However, the runoff of organic matter from the mainland to coastal ecosystems is just one of a series of pressing threats stressing microbial communities due to global change. Climate change has also resulted in changes in the cryosphere, as glaciers and permafrost melt, resulting in intensified marine stratification, while shifts of the redox state in different biomes are rapidly reshaping microbial assemblages at an unprecedented rate.
Global change is, therefore, affecting key processes including primary productivity, CO2 and N2 fixation, organic matter respiration/remineralization, and the sinking and burial deposition of fixed CO2. In addition to this, oceans are experiencing an acidification process, with a change of ~0.1 pH units between the pre-industrial period and today, affecting carbonate/bicarbonate buffer chemistry. In turn, acidification has been reported to impact planktonic communities, principally through effects on calcifying taxa. There is also evidence for shifts in the production of key intermediary volatile products, some of which have marked greenhouse effects (e.g., N2O and CH4, reviewed by Breitburg in 2018), due to the increase in global temperature, ocean stratification and deoxygenation; microbially driven processes in the so-called oxygen minimum zones or anoxic marine zones may account for as much as 25 to 50% of nitrogen loss from the ocean to the atmosphere. Other products that are typically toxic for marine nekton, including reduced sulfur species such as H2S, have a negative impact on marine resources like fisheries and coastal aquaculture. While global change has accelerated, there has been a parallel increase in awareness of the complexity of marine ecosystems, and especially the fundamental role of microbes as drivers of ecosystem functioning.
Lithosphere
Biosphere
Microorganisms drive much of the biogeochemical cycling in the Earth system.
Reservoirs
The chemicals are sometimes held for long periods of time in one place. This place is called a reservoir, which, for example, includes such things as coal deposits that are storing carbon for a long period of time. When chemicals are held for only short periods of time, they are being held in exchange pools. Examples of exchange pools include plants and animals.
Plants and animals utilize carbon to produce carbohydrates, fats, and proteins, which can then be used to build their internal structures or to obtain energy. Plants and animals temporarily use carbon in their systems and then release it back into the air or surrounding medium. Generally, reservoirs are abiotic factors whereas exchange pools are biotic factors. Carbon is held for a relatively short time in plants and animals in comparison to coal deposits. The amount of time that a chemical is held in one place is called its residence time or turnover time (also called the renewal time or exit age).
Box models
Box models are widely used to model biogeochemical systems. Box models are simplified versions of complex systems, reducing them to boxes (or storage reservoirs) for chemical materials, linked by material fluxes (flows). Simple box models have a small number of boxes with properties, such as volume, that do not change with time. The boxes are assumed to behave as if they were mixed homogeneously. These models are often used to derive analytical formulas describing the dynamics and steady-state abundance of the chemical species involved.
The diagram at the right shows a basic one-box model. The reservoir contains the amount of material M under consideration, as defined by chemical, physical or biological properties. The source Q is the flux of material into the reservoir, and the sink S is the flux of material out of the reservoir. The budget is the check and balance of the sources and sinks affecting material turnover in a reservoir. The reservoir is in a steady state if Q = S, that is, if the sources balance the sinks and there is no change over time.
The residence or turnover time is the average time material spends resident in the reservoir. If the reservoir is in a steady state, this is the same as the time it takes to fill or drain the reservoir. Thus, if τ is the turnover time, then τ = M/S. The equation describing the rate of change of content in a reservoir is dM/dt = Q − S.
When two or more reservoirs are connected, the material can be regarded as cycling between the reservoirs, and there can be predictable patterns to the cyclic flow. More complex multibox models are usually solved using numerical techniques.
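As a minimal numerical sketch (values arbitrary), the one-box equation dM/dt = Q − S with a first-order sink S = M/τ can be stepped forward with Euler's method; the reservoir relaxes to the steady state M = Qτ, and multibox models extend this by stepping several coupled equations of the same form:

```python
# One-box model sketch (arbitrary values): dM/dt = Q - S with S = M / tau.
# At steady state Q = S, so the content settles at M = Q * tau.

Q, tau = 2.0, 5.0   # source flux (mass/yr) and turnover time (yr)
M, dt = 0.0, 0.01   # initial reservoir content and Euler time step (yr)

for _ in range(int(100 / dt)):  # integrate for 100 years (20 turnover times)
    S = M / tau                 # sink flux proportional to content
    M += (Q - S) * dt

S = M / tau
print(f"M = {M:.2f} (expected Q*tau = {Q * tau:.2f}); residence time M/S = {M / S:.1f} yr")
```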
The diagram on the left shows a simplified budget of ocean carbon flows. It is composed of three simple interconnected box models, one for the euphotic zone, one for the ocean interior or dark ocean, and one for ocean sediments. In the euphotic zone, net phytoplankton production is about 50 Pg C each year. About 10 Pg is exported to the ocean interior while the other 40 Pg is respired. Organic carbon degradation occurs as particles (marine snow) settle through the ocean interior. Only 2 Pg eventually arrives at the seafloor, while the other 8 Pg is respired in the dark ocean. In sediments, the time scale available for degradation increases by orders of magnitude with the result that 90% of the organic carbon delivered is degraded and only 0.2 Pg C yr−1 is eventually buried and transferred from the biosphere to the geosphere.
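The budget figures above imply simple transfer efficiencies between the three boxes; the following arithmetic sketch just restates the numbers from the budget:

```python
# Transfer efficiencies implied by the simplified ocean carbon budget (Pg C per yr).
production = 50.0       # net phytoplankton production in the euphotic zone
exported = 10.0         # organic carbon exported to the ocean interior
reaches_seafloor = 2.0  # particles surviving settling through the dark ocean
buried = 0.2            # carbon transferred from the biosphere to the geosphere

print(f"export efficiency:  {exported / production:.0%}")        # 20%
print(f"seafloor survival:  {reaches_seafloor / exported:.0%}")  # 20%
print(f"burial efficiency:  {buried / reaches_seafloor:.0%}")    # 10%, i.e. 90% degraded
```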
The diagram on the right shows a more complex model with many interacting boxes. Reservoir masses here represent carbon stocks, measured in Pg C. Carbon exchange fluxes, measured in Pg C yr−1, occur between the atmosphere and its two major sinks, the land and the ocean. The black numbers and arrows indicate the reservoir mass and exchange fluxes estimated for the year 1750, just before the Industrial Revolution. The red arrows (and associated numbers) indicate the annual flux changes due to anthropogenic activities, averaged over the 2000–2009 time period. They represent how the carbon cycle has changed since 1750. Red numbers in the reservoirs represent the cumulative changes in anthropogenic carbon since the start of the Industrial Period, 1750–2011.
Fast and slow cycles
There are fast and slow biogeochemical cycles. Fast cycles operate in the biosphere and slow cycles operate in rocks. Fast or biological cycles can complete within years, moving substances from the atmosphere to the biosphere and back to the atmosphere. Slow or geological cycles can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
As an example, the fast carbon cycle is illustrated in the diagram below on the left. This cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere. It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow cycle is illustrated in the diagram above on the right. It involves medium to long-term geochemical processes belonging to the rock cycle. The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Deep cycles
The terrestrial subsurface is the largest reservoir of carbon on Earth, containing 14–135 Pg of carbon and 2–19% of all biomass. Microorganisms drive organic and inorganic compound transformations in this environment and thereby control biogeochemical cycles. Current knowledge of the microbial ecology of the subsurface is primarily based on 16S ribosomal RNA (rRNA) gene sequences. Recent estimates show that <8% of 16S rRNA sequences in public databases derive from subsurface organisms and only a small fraction of those are represented by genomes or isolates. Thus, there is remarkably little reliable information about microbial metabolism in the subsurface. Further, little is known about how organisms in subsurface ecosystems are metabolically interconnected. Some cultivation-based studies of syntrophic consortia and small-scale metagenomic analyses of natural communities suggest that organisms are linked via metabolic handoffs: the transfer of redox reaction products of one organism to another. However, no complex environments have been dissected completely enough to resolve the metabolic interaction networks that underpin them. This restricts the ability of biogeochemical models to capture key aspects of the carbon and other nutrient cycles. New approaches, such as genome-resolved metagenomics, which can yield a comprehensive set of draft and even complete genomes for organisms without the requirement for laboratory isolation, have the potential to provide this critical level of understanding of biogeochemical processes.
Some examples
Some of the more well-known biogeochemical cycles include the carbon cycle, the nitrogen cycle, the oxygen cycle, the phosphorus cycle, the sulfur cycle, and the water cycle.
Many biogeochemical cycles are currently being studied for the first time. Climate change and human impacts are drastically changing the speed, intensity, and balance of these relatively unknown cycles, which include:
the mercury cycle, and
the human-caused cycle of PCBs.
Biogeochemical cycles always involve active equilibrium states: a balance in the cycling of the element between compartments. However, overall balance may involve compartments distributed on a global scale.
As biogeochemical cycles describe the movements of substances on the entire globe, the study of these is inherently multidisciplinary. The carbon cycle may be related to research in ecology and atmospheric sciences. Biochemical dynamics would also be related to the fields of geology and pedology.
See also
Carbonate–silicate cycle
Ecological recycling
Great Acceleration
Hydrogen cycle
Redox gradient
References
Further reading
Schink, Bernhard, "Microbes: Masters of the Global Element Cycles", pp. 33–58 in Metals, Microbes and Minerals: The Biogeochemical Side of Life, Walter de Gruyter, Berlin, xiv + 341 pp. DOI 10.1515/9783110589771-002.
Biogeography
Biosphere
Geochemistry
Ontology

Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event.
Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices.
Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology.
The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name.
Definition
Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain.
Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms.
The word ontology has its roots in the ancient Greek terms ὄν (on, meaning "being") and λογία (logia, meaning "study of"); literally, it means "the study of being". The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.
Basic concepts
Being
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived".
Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like.
Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or not with no intermediary states or degrees.
The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing.
Particulars and universals
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars. Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain.
Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways how two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations.
Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red.
States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts.
Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events.
Concrete and abstract objects
Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them.
Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it.
Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects.
Other concepts
Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and facts it explains.
An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers.
Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way how things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds.
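Possible world semantics has a direct computational reading. In this Python sketch (the worlds and propositions are invented for the example), a proposition is possibly true if it holds in at least one world and necessarily true if it holds in all of them:

```python
# Sketch of possible world semantics: possible = true in some world,
# necessary = true in every world. Worlds and propositions are invented.

worlds = {
    "w_actual": {"extraterrestrial_life": False, "three_plus_two_is_five": True},
    "w_1":      {"extraterrestrial_life": True,  "three_plus_two_is_five": True},
    "w_2":      {"extraterrestrial_life": False, "three_plus_two_is_five": True},
}

def possibly(prop: str) -> bool:
    return any(facts[prop] for facts in worlds.values())

def necessarily(prop: str) -> bool:
    return all(facts[prop] for facts in worlds.values())

print(possibly("extraterrestrial_life"))      # True: holds in at least one world
print(necessarily("three_plus_two_is_five"))  # True: holds in every world
```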
In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year".
Branches
There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts.
Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases.
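The structured encodings used in information science are often subject-predicate-object statements over a controlled vocabulary. The sketch below (a toy example; the gene and process names are invented placeholders, not real Gene Ontology identifiers) stores such statements and answers simple pattern queries:

```python
# Toy knowledge store in the style of domain ontologies: facts are
# (subject, predicate, object) triples; all identifiers are invented.

triples = [
    ("geneA", "is_a", "protein_coding_gene"),
    ("geneA", "participates_in", "dna_repair"),
    ("dna_repair", "is_a", "biological_process"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None matches anything)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(predicate="is_a"))  # class-membership assertions
print(query(subject="geneA"))   # everything stated about geneA
```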
Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology.
Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization.
Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion.
Metaontology studies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being.
Schools of thought
Realism and anti-realism
The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework.
In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects.
Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation.
Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects.
Scientific realists say that the scientific description of the world is an accurate representation of reality. It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments.
Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.
By number of categories
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything.
The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist.
By fundamental categories
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties.
Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle.
Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.
In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe.
Others
The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties.
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.
The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves.
Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things.
Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception.
Methods
Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology.
Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist.
Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness.
Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.
Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's Razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.
In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.
Related fields
Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being. Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality.
Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology.
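To make the database example concrete, here is a minimal Python sketch (an illustration, not from the source) of a toy domain ontology with Person, Company, and Address categories and an employment relation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Toy domain ontology: each category becomes a class, and relations between
# categories become typed fields. Names here are hypothetical illustrations.

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Person:
    name: str
    address: Address

@dataclass
class Company:
    name: str
    employees: List[Person] = field(default_factory=list)  # "employs" relation

alice = Person("Alice", Address("1 Main St", "Springfield"))
acme = Company("Acme")
acme.employees.append(alice)
print(f"{acme.name} employs {acme.employees[0].name}")
```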
Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities.
The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various Indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.
Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology.
History
The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being.
Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself.
The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor.
In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos.
René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.
Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations.
At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.
Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.
Pastoralism

Pastoralism is a form of animal husbandry where domesticated animals (known as "livestock") are released onto large vegetated outdoor lands (pastures) for grazing, historically by nomadic people who moved around with their herds. The animal species involved include cattle, camels, goats, yaks, llamas, reindeer, horses, and sheep.
Pastoralism occurs in many variations throughout the world, generally where environmental characteristics such as aridity, poor soils, cold or hot temperatures, and lack of water make crop-growing difficult or impossible. Operating in more extreme environments with more marginal lands means that pastoral communities are very vulnerable to the effects of global warming.
Pastoralism remains a way of life in many geographic areas, including Africa, the Tibetan plateau, the Eurasian steppes, the Andes, Patagonia, the Pampas, Australia and many other places. By recent estimates, between 200 million and 500 million people globally practiced pastoralism, and 75% of all countries had pastoral communities.
Pastoral communities have different levels of mobility. Sedentary pastoralism has become more common as the hardening of political borders, land tenures, expansion of crop farming, and construction of fences and dedicated agricultural buildings all reduce the ability to move livestock around freely, leading to the rise of pastoral farming on established grazing-zones (sometimes called "ranches"). Sedentary pastoralists may also raise crops and livestock together in the form of mixed farming, for the purpose of diversifying productivity, obtaining manure for organic farming, and improving pasture conditions for their livestock. Mobile pastoralism includes moving herds locally across short distances in search of fresh forage and water (something that can occur daily or even within a few hours); as well as transhumance, where herders routinely move animals between different seasonal pastures across regions; and nomadism, where nomadic pastoralists and their families move with the animals in search for any available grazing-grounds—without much long-term planning. Grazing in woodlands and forests may be referred to as silvopastoralism.
Those who practice pastoralism are called "pastoralists".
Pastoralist herds interact with their environment, and mediate human relations with the environment as a way of turning uncultivated plants (like wild grass) into food. In many places, grazing herds on savannas and in woodlands can help maintain the biodiversity of such landscapes and prevent them from evolving into dense shrublands or forests. Grazing and browsing at the appropriate levels often can increase biodiversity in Mediterranean climate regions. Pastoralists shape ecosystems in different ways: some communities use fire to make ecosystems more suitable for grazing and browsing animals.
Origins
One theory suggests that pastoralism developed from mixed farming. Bates and Lees proposed that the incorporation of irrigation into farming resulted in specialization. Advantages of mixed farming include reducing risk of failure, spreading labour, and re-utilizing resources. The importance of these advantages and disadvantages to different farmers or farming societies differs according to the sociocultural preferences of the farmers and the biophysical conditions as determined by rainfall, radiation, soil type, and disease. The increased productivity of irrigation agriculture led to an increase in population and an added impact on resources. Bordering areas of land remained in use for animal breeding. This meant that large distances had to be covered by herds to collect sufficient forage. Specialization occurred as a result of the increasing importance of both intensive agriculture and pastoralism. Both agriculture and pastoralism developed alongside each other, with continuous interactions.
A different theory suggests that pastoralism evolved from hunting and gathering. Hunters of wild goats and sheep were knowledgeable about herd mobility and the needs of the animals. Such hunters were mobile and followed the herds on their seasonal rounds. Proto-pastoralist nomadic hunter-gatherer groups tamed and domesticated wild herds, making them more controllable. Hunter-gatherers' strategies in the past have been very diverse and contingent upon the local environmental conditions, like those of mixed farmers. Foraging strategies have included hunting or trapping big game and smaller animals, fishing, collecting shellfish or insects, and gathering wild-plant foods such as fruits, seeds, and nuts.
These diverse strategies for survival amongst the migratory herds could also provide an evolutionary route towards nomadic pastoralism.
Resources
Pastoralism occurs in uncultivated areas. The herd animals eat forage from the marginal lands, while humans live off the herds' milk, blood, and often meat, and trade by-products such as wool and milk for money and food.
Pastoralists do not exist at a basic subsistence level. Pastoralists often accumulate wealth and participate in international trade. Pastoralists have trade relations with agriculturalists, horticulturalists, and other groups. Pastoralists are not extensively dependent on the milk, blood, and meat of their herds. McCabe noted that in long-lived communities that have created common property institutions, resource sustainability is much higher, as is evident in the East African grasslands of pastoralist populations. However, the property rights structure is only one of many different parameters that affect the sustainability of resources, and common or private property per se does not necessarily lead to sustainability.
Some pastoralists supplement herding with hunting and gathering, fishing and/or small-scale farming or pastoral farming.
Mobility
Mobility allows pastoralists to adapt to the environment, which opens up the possibility for both fertile and infertile regions to support human existence. Important components of pastoralism include low population density, mobility, vitality, and intricate information systems. The system is transformed to fit the environment rather than adjusting the environment to support the "food production system." Mobile pastoralists can often cover a radius of a hundred to five hundred kilometers.
Pastoralists and their livestock have impacted the environment. Lands long used for pastoralism have transformed under the forces of grazing livestock and anthropogenic fire. Fire was a method of revitalizing pastureland and preventing forest regrowth. The collective environmental weights of fire and livestock browsing have transformed landscapes in many parts of the world. Fire has permitted pastoralists to tend the land for their livestock. Political boundaries are based on environmental boundaries. The Maquis shrublands of the Mediterranean region are dominated by pyrophytic plants that thrive under conditions of anthropogenic fire and livestock grazing.
Nomadic pastoralists have a global food-producing strategy depending on the management of herd animals for meat, skin, wool, milk, blood, manure, and transport. Nomadic pastoralism is practiced in different climates and environments with daily movement and seasonal migration. Pastoralists are among the most flexible populations. Pastoralist societies have had to field armed men to protect their livestock and their people and then return to a disorganized pattern of foraging. The products of the herd animals are the most important resources, although the use of other resources, including domesticated and wild plants, hunted animals, and goods accessible in a market economy, is not excluded. The boundaries between states impact the viability of subsistence and trade relations with cultivators.
Pastoralist strategies typify effective adaptation to the environment. Precipitation differences are evaluated by pastoralists. In East Africa, different animals are taken to specific regions throughout the year, corresponding to the seasonal patterns of precipitation. Transhumance is the migration of livestock and pastoralists between seasonal pastures.
In the Himalayas, pastoralists have often historically and traditionally depended on rangelands lying across international borders. The Himalayas contain several international borders, such as those between India and China, India and Nepal, Bhutan and China, India and Pakistan, and Pakistan and China. With the growth of nation states in Asia since the mid-twentieth century, mobility across the international borders in these countries has tended to become more and more restricted and regulated. As a consequence, the old, customary arrangements of trans-border pastoralism have generally tended to disintegrate, and trans-border pastoralism has declined. Within these countries, pastoralism is now often in conflict with new modes of community forestry, such as Van Panchayats (Uttarakhand) and Community Forest User Groups (Nepal), which tend to benefit settled agricultural communities more. Frictions have also tended to arise between pastoralists and development projects such as dam-building and the creation of protected areas.
Some pastoralists are constantly moving, which may put them at odds with sedentary people of towns and cities. The resulting conflicts can result in war over disputed lands. These disputes are recorded from ancient times in the Middle East as well as East Asia. Other pastoralists are able to remain in the same location, which results in longer-standing housing.
Different mobility patterns can be observed: Somali pastoralists keep their animals in one of the harshest environments, but their practices have evolved over the centuries. Somalis have a well-developed pastoral culture in which a complete system of life and governance has been refined. Somali poetry depicts human interactions, pastoral animals, beasts on the prowl, and other natural things such as rain, celestial events, and historic events of significance. The sage Guled Haji coined a proverb encapsulating the centrality of water in pastoral life.
Mobility was an important strategy for the Ariaal; however, with the loss of grazing land to population growth, severe drought, the expansion of agriculture, and the spread of commercial ranches and game parks, mobility was lost. The poorest families were driven out of pastoralism and into towns to take jobs. Few Ariaal families benefited from education, healthcare, and income earning.
The flexibility of pastoralists to respond to environmental change was reduced by colonization. For example, mobility was limited in the Sahel region of Africa with settlement being encouraged. The population tripled and sanitation and medical treatment were improved.
Environment knowledge
Pastoralists have mental maps of the value of specific environments at different times of year. Pastoralists have an understanding of ecological processes and the environment. Information sharing is vital for creating knowledge through the networks of linked societies.
Pastoralists produce food in the world's harshest environments, and pastoral production supports the livelihoods of rural populations on almost half of the world's land. Several hundred million people are pastoralists, mostly in Africa and Asia. ReliefWeb reported that "Several hundred million people practice pastoralism—the use of extensive grazing on rangelands for livestock production, in over 100 countries worldwide. The African Union estimated that Africa has about 268 million pastoralists—over a quarter of the total population—living on about 43 percent of the continent's total land mass." Pastoralists manage rangelands covering about a third of the Earth's terrestrial surface and are able to produce food where crop production is not possible.
Pastoralism has been shown, "based on a review of many studies, to be between 2 and 10 times more productive per unit of land than the capital intensive alternatives that have been put forward". However, many of these benefits go unmeasured and are frequently squandered by policies and investments that seek to replace pastoralism with more capital intensive modes of production. Pastoralists have traditionally suffered from poor understanding, marginalization, and exclusion from dialogue. The Pastoralist Knowledge Hub, managed by the Food and Agriculture Organization of the UN, serves as a knowledge repository on technical excellence on pastoralism as well as "a neutral forum for exchange and alliance building among pastoralists and stakeholders working on pastoralist issues".
The Afar pastoralists in Ethiopia use an indigenous communication method called dagu to share information. This helps them get crucial information about climate and the availability of pasture at various locations.
Farm animal genetic resource
There is variation in the genetic makeup of farm animals, driven mainly by natural and human-based selection. For example, pastoralists in large parts of Sub-Saharan Africa prefer livestock breeds that are adapted to their environment and able to tolerate drought and diseases. In other animal production systems, however, these breeds are discouraged and more productive exotic ones are favored. This situation cannot be left unaddressed: changes in market preferences and in climate all over the world could lead to changes in the occurrence of livestock diseases and a decline in forage quality and availability. Pastoralists can hence maintain farm animal genetic resources by conserving local livestock breeds. Generally, conserving farm animal genetic resources under pastoralism is advantageous in terms of reliability and associated cost.
Tragedy of the commons
Hardin's Tragedy of the Commons (1968) described how common property resources, such as the land shared by pastoralists, eventually become overused and ruined. According to Hardin's paper, the pastoralist land use strategy was unstable and a cause of environmental degradation.
One of Hardin's conditions for a "tragedy of the commons" is that people cannot communicate with each other or make agreements and contracts. Many scholars have pointed out that this is implausible, and yet it is applied in development projects around the globe, motivating the destruction of community and other governance systems that have managed sustainable pastoral systems for thousands of years. The outcomes have often been disastrous. In her book Governing the Commons, Elinor Ostrom showed that communities were not trapped and helpless amid diminishing commons. She argued that a common-pool resource, such as grazing lands used for pastoralism, can be managed more sustainably through community groups and cooperatives than through privatization or total governmental control. Ostrom was awarded a Nobel Memorial Prize in Economic Sciences for her work.
Pastoralists in the Sahel zone in Africa were held responsible for the depletion of resources. The depletion of resources was actually triggered by prior interference and harsh climatic conditions. Hardin's paper suggested a solution to the problems, offering a coherent basis for the privatization of land, which stimulates the transfer of land from tribal peoples to the state or to individuals. Privatization programs impact the livelihoods of pastoralist societies while weakening the environment. Settlement programs often serve the needs of the state in reducing the autonomy and livelihoods of pastoral people.
The violent herder–farmer conflicts in Nigeria, Mali, Sudan, Ethiopia and other countries in the Sahel and Horn of Africa regions have been exacerbated by climate change, land degradation, and population growth.
It has also been shown that pastoralism supports human existence in harsh environments and often represents a sustainable approach to land use.
See also
Animal Genetic Resources for Food and Agriculture
Herding
Holistic management
Pastoral society
Pastoral (or bucolic) – related genre of literature, art, and music
References
Bibliography
Fagan, B. (1999). "Drought Follows the Plow", adapted from Floods, Famines and Emperors: Basic Books.
Fratkin, E. (1997). "Pastoralism: Governance & Development Issues". Annual Review of Anthropology, 26: 235–261.
Hardin, G. (1968). "The Tragedy of the Commons". Science, 162(3859), 1243–1248.
Angioni, Giulio (1989). I pascoli erranti. Antropologia del pastore in Sardegna. Napoli: Liguori.
Hole, F. (1996). "The context of caprine domestication in the Zagros region". In The Origins and Spread of Agriculture and Pastoralism in Eurasia. D.R. Harris (ed.). London: University College London: 263–281.
Lees, S. & Bates, D. (1974). "The Origins of Specialized Nomadic Pastoralism: A Systemic Model". American Antiquity, 39(2).
Levy, T.E. (1983). "Emergence of specialized pastoralism in the Levant". World Archaeology 15(1): 15–37.
Moran, E. (2006). People and Nature: An Introduction to Human Ecological Relations. UK: Blackwell Publishing.
Pyne, Stephen J. (1997). Vestal Fire: An Environmental History, Told through Fire, of Europe and Europe's Encounter with the World. Seattle and London: University of Washington Press.
Townsend, P. (2009). Environmental Anthropology: From Pigs to Policies. United States of America: Waveland Press.
Wilson, K.B. (1992). "Re-Thinking the Pastoral Ecological Impact in East Africa". Global Ecology and Biogeography Letters, 2(4): 143–144.
Toutain B., Marty A., Bourgeot A., Ickowicz A. & Lhoste P. (2012). Pastoralism in dryland areas. A case study in sub-Saharan Africa. Les dossiers thématiques du CSFD. N°9. January 2013. CSFD/Agropolis International, Montpellier, France. 60 p.
Arboriculture

Arboriculture is the cultivation, management, and study of individual trees, shrubs, vines, and other perennial woody plants. The science of arboriculture studies how these plants grow and respond to cultural practices and to their environment. The practice of arboriculture includes cultural techniques such as selection, planting, training, fertilization, pest and pathogen control, pruning, shaping, and removal.
Overview
A person who practices or studies arboriculture can be termed an arborist or an arboriculturist. A tree surgeon is more typically someone who is trained in the physical maintenance and manipulation of trees and therefore more a part of the arboriculture process rather than an arborist. Risk management, legal issues, and aesthetic considerations have come to play prominent roles in the practice of arboriculture. Businesses often need to hire arboriculturists to complete "tree hazard surveys" and generally manage the trees on-site to fulfill occupational safety and health obligations.
Arboriculture is primarily focused on individual woody plants and trees maintained for permanent landscape and amenity purposes, usually in gardens, parks or other populated settings, by arborists, for the enjoyment, protection, and benefit of people.
Arboricultural matters are also considered to be within the practice of urban forestry, yet the divisions between the two are not distinct or discrete.
Tree Benefits
Tree benefits are the economic, ecological, social, and aesthetic uses, functions, purposes, or services of a tree (or group of trees), in its situational context in the landscape.
Environmental Benefits
Erosion control and soil retention
Improved water infiltration and percolation
Protection from exposure: windbreak, shade, impact from hail/rainfall
Air humidification
Modulates environmental conditions in a given microclimate: shields wind, humidifies, provides shade
Carbon sequestration and oxygen production
Ecological Benefits
Attracting pollinators
Increased biodiversity
Food for decomposers, consumers, and pollinators
Soil health: organic matter accumulation from leaf litter and root exudates (symbiotic microbes)
Ecological habitat
Socioeconomic Benefits
Increases employment: forestry, education, tourism
Run-off and flood control (e.g. bioswales, plantings on slopes)
Aesthetic beauty: parks, gatherings, social events, tourism, senses (fragrance, visual), focal point
Adds character and prestige to the landscape, creating a “natural” feel
Climate control (e.g. shade): can reduce energy consumption of buildings
Privacy and protection: from noise, wind
Cultural benefits: e.g. memorials for a loved one
Medical benefits: e.g. Taxus chemotherapy
Materials: wood for building, paper pulp
Fodder for livestock
Property value: trees can increase property values by 10–20%
Increases the amount of time customers will spend in a mall, strip mall, or shopping district
Tree Defects
A tree defect is any feature, condition, or deformity of a tree that indicates weak structure or instability that could contribute to tree failure.
Common types of tree defects:
Codominant stems: two or more stems that grow upward from a single point of origin and compete with one another.
common with decurrent growth habits
occurs in excurrent trees only after the leader is killed and multiple leaders compete for dominance
Included bark: bark is incorporated in the joint between two limbs, creating a weak attachment
occurs in branch unions with a high attachment angle (i.e. v-shaped unions)
common in many columnar/fastigiate growing deciduous trees
Dead, diseased, or broken branches:
woundwood cannot grow over stubs or dead branches to seal off decay
symptoms/signs of disease: e.g. oozing through the bark, sunken areas in the bark, and bark with abnormal patterns or colours, stunted new growth, discolouration of the foliage
Cracks
longitudinal cracks result from interior decay, bark rips/tears, or torsion from wind load
transverse cracks result from buckled wood, often caused by unnatural loading on branches, such as lion's tailing.
Seams: bark edges meet at a crack or wound
Ribs: bulges, indicating interior cracks
Cavity and hollows: sunken or open areas wherein a tree has suffered injury followed by decay. Further indications include: fungal fruiting structures, insect or animal nests.
Lean: a lean of more than 40% from vertical presents a risk of tree failure
Taper: change in diameter over the length of trunks, branches, and roots
Epicormic branches (water sprouts in canopy or suckers from root system): often grow in response to major damage or excessive pruning
Roots:
girdling roots compress the trunk, leading to poor trunk taper, and restrict vascular flow
kinked roots provide poor structural support; the kink is a site of potential root failure
circling roots occurs when roots encounter obstructions/limitations such as a small tree well or being grown too long in a nursery pot; these cannot provide adequate structural support and are limited in accessing nutrients and water
healthy soil texture and depth, drainage, and water availability make for healthy roots
Tree Installation
Proper tree installation ensures the long-term viability of the tree and reduces the risk of tree failure.
Quality nursery stock must be used. There must be no visible damage or sign of disease. Ideally, the tree should have good crown structure. A healthy root ball should not have circling roots, and new fibrous roots should be present at the soil perimeter. Girdling or circling roots should be pruned out. Excess soil above the root flare should be removed immediately, since it presents a risk of disease ingress into the trunk.
Appropriate time of year to plant: generally fall or early spring in temperate regions of the northern hemisphere.
Planting hole: the planting hole should be 3 times the width of the root ball. The hole should be dug deep enough that when the root ball is placed on the substrate, the root flare is 3-5cm above the surrounding soil grade. If soil is left against the trunk, it may lead to bark, cambium and wood decay. Angular sides to the planting hole will encourage roots to grow radially from the trunk, rather than circling the planting hole. In urban settings, soil preparation may include the use of:
Silva cells: suspended pavement over modular cells containing soil for root development
Structural soils: growing medium composed of 80% crushed rock and 20% loam, which supports surface load without it leading to soil compaction
Tree wells: a zone of mulch can be installed around the tree trunk to: limit root zone competition (from turf or weeds), reduce soil compaction, improve soil structure, conserve moisture, and keep lawn equipment at a distance. No more than 5-10cm of mulch should be used to avoid suffocating the roots. Mulch must be kept approximately 20cm from the trunk to avoid burying the root flare. With city trees additional tree well preparation includes:
Tree grates/grill and frames: limit compaction on root zone and mechanical damage to roots and trunk
Root barriers: forces roots to grow down under surface asphalt/concrete/pavers to limit infrastructure damage from roots
Staking: newly planted, immature trees should be staked for one growing season to allow for the root system to establish. Staking for longer than one season should only be considered in situations where the root system has failed to establish sufficient structural support. Guy wires can be used for larger, newly planted trees. Care must be taken to avoid stem girdling from the support system ties.
Irrigation: irrigation infrastructure may be installed to ensure a regular water supply throughout the lifetime of the tree. Wicking beds are an underground reservoir from which water is wicked into soil. Watering bags may be temporarily installed around tree stakes to provide water until the root system becomes established. Permeable paving allows for water infiltration in paved urban settings, such as parks and walkways.
UK
Within the United Kingdom trees are considered as a material consideration within the town planning system and may be conserved as amenity landscape features.
The role of the Arborist or Local Government Arboricultural Officer is likely to have a great effect on such matters. Identification of trees of high quality which may have extensive longevity is a key element in the preservation of trees.
Urban and rural trees may benefit from statutory protection under the Town and Country Planning system. Such protection can result in the conservation and improvement of the urban forest as well as rural settlements.
Historically the profession divides into the operational and professional areas. These might be further subdivided into the private and public sectors. The profession is broadly considered as having one trade body known as the Arboricultural Association, although the Institute of Chartered Foresters offers a route for professional recognition and chartered arboriculturist status.
The qualifications associated with the industry range from vocational to Doctorate. Arboriculture is a comparatively young industry.
See also
Agroforestry
Arborist
Bonsai
European Arboricultural Council
Forester
Forestry
Fruit tree pruning
Horticulture
International Society of Arboriculture
Landscape architecture
Landscaping
Silviculture
Silvology
Tree forks
Tree shaping
Tropical horticulture
Viticulture
References
External links
Arboriculture Australia (Australia)
Arboricultural Association UK
International Society of Arboriculture (USA)
European Arboricultural Council
BatsandTrees.com Promoting the importance of British trees to bats
Institute of Chartered Foresters The UK based Chartered body for forestry and arboricultural professionals
American Forests Urban forestry resources
Encyclopædia Britannica
Theoretical ecology

Theoretical ecology is the scientific discipline devoted to the study of ecological systems using theoretical methods such as simple conceptual models, mathematical models, computational simulations, and advanced data analysis. Effective models improve understanding of the natural world by revealing how the dynamics of species populations are often based on fundamental biological conditions and processes. Further, the field aims to unify a diverse range of empirical observations by assuming that common, mechanistic processes generate observable phenomena across species and ecological environments. Based on biologically realistic assumptions, theoretical ecologists are able to uncover novel, non-intuitive insights about natural processes. Theoretical results are often verified by empirical and observational studies, revealing the power of theoretical methods in both predicting and understanding the noisy, diverse biological world.
The field is broad and includes foundations in applied mathematics, computer science, biology, statistical physics, genetics, chemistry, evolution, and conservation biology. Theoretical ecology aims to explain a diverse range of phenomena in the life sciences, such as population growth and dynamics, fisheries, competition, evolutionary theory, epidemiology, animal behavior and group dynamics, food webs, ecosystems, spatial ecology, and the effects of climate change.
Theoretical ecology has further benefited from the advent of fast computing power, allowing the analysis and visualization of large-scale computational simulations of ecological phenomena. Importantly, these modern tools provide quantitative predictions about the effects of human induced environmental change on a diverse variety of ecological phenomena, such as: species invasions, climate change, the effect of fishing and hunting on food network stability, and the global carbon cycle.
Modelling approaches
As in most other sciences, mathematical models form the foundation of modern ecological theory.
Phenomenological models: distill the functional and distributional shapes from observed patterns in the data, or researchers decide on functions and distributions that are flexible enough to match the patterns they or others (field or experimental ecologists) have found in the field or through experimentation.
Mechanistic models: model the underlying processes directly, with functions and distributions that are based on theoretical reasoning about ecological processes of interest.
Ecological models can be deterministic or stochastic.
Deterministic models always evolve in the same way from a given starting point. They represent the average, expected behavior of a system, but lack random variation. Many system dynamics models are deterministic.
Stochastic models allow for the direct modeling of the random perturbations that underlie real world ecological systems. Markov chain models are stochastic.
Species can be modelled in continuous or discrete time.
Continuous time is modelled using differential equations.
Discrete time is modelled using difference equations. These model ecological processes that can be described as occurring over discrete time steps. Matrix algebra is often used to investigate the evolution of age-structured or stage-structured populations. The Leslie matrix, for example, mathematically represents the discrete time change of an age structured population.
Models are often used to describe real ecological reproduction processes of single or multiple species.
These can be modelled using stochastic branching processes. Examples are the dynamics of interacting populations (predation, competition, and mutualism), which, depending on the species of interest, may best be modeled over either continuous or discrete time. Other examples of such models may be found in the field of mathematical epidemiology, where the dynamic relationships that are to be modeled are host–pathogen interactions.
Bifurcation theory is used to illustrate how small changes in parameter values can give rise to dramatically different long run outcomes, a mathematical fact that may be used to explain drastic ecological differences that come about in qualitatively very similar systems. Logistic maps are polynomial mappings, and are often cited as providing archetypal examples of how chaotic behaviour can arise from very simple non-linear dynamical equations. The maps were popularized in a seminal 1976 paper by the theoretical ecologist Robert May. The difference equation is intended to capture the two effects of reproduction and starvation.
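As a concrete illustration (a sketch, not from the source), the following Python snippet iterates the logistic map x(t+1) = r·x(t)·(1 − x(t)); the growth parameters and initial condition are illustrative choices picked to show a stable fixed point, a periodic cycle, and chaotic behaviour.

```python
# Iterate the logistic map for several growth parameters r; the values of r
# and the initial condition x0 are illustrative assumptions.
def logistic_map(r, x0=0.2, n_steps=100):
    """Return the trajectory x_0, x_1, ..., x_n of the logistic map."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

for r in (2.8, 3.2, 3.9):  # stable equilibrium, 2-cycle, chaotic regime
    tail = logistic_map(r)[-4:]
    print(f"r = {r}: last iterates {[round(x, 4) for x in tail]}")
```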
In 1930, R.A. Fisher published his classic The Genetical Theory of Natural Selection, which introduced the idea that frequency-dependent fitness brings a strategic aspect to evolution, where the payoffs to a particular organism, arising from the interplay of all of the relevant organisms, are the number of this organism's viable offspring. In 1961, Richard Lewontin applied game theory to evolutionary biology in his Evolution and the Theory of Games, followed closely by John Maynard Smith, who in his seminal 1972 paper, "Game Theory and the Evolution of Fighting", defined the concept of the evolutionarily stable strategy.
Because ecological systems are typically nonlinear, they often cannot be solved analytically and in order to obtain sensible results, nonlinear, stochastic and computational techniques must be used. One class of computational models that is becoming increasingly popular are the agent-based models. These models can simulate the actions and interactions of multiple, heterogeneous, organisms where more traditional, analytical techniques are inadequate. Applied theoretical ecology yields results which are used in the real world. For example, optimal harvesting theory draws on optimization techniques developed in economics, computer science and operations research, and is widely used in fisheries.
Population ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment. It is the study of how the population sizes of species living together in groups change over time and space, and was one of the first aspects of ecology to be studied and modelled mathematically.
Exponential growth
The most basic way of modeling population dynamics is to assume that the rate of growth of a population depends only upon the population size at that time and the per capita growth rate of the organism. In other words, if the number of individuals in a population at a time t is N(t), then the rate of population growth is given by:

dN(t)/dt = rN(t)

where r is the per capita growth rate, or the intrinsic growth rate of the organism. It can also be described as r = b − d, where b and d are the per capita time-invariant birth and death rates, respectively. This first order linear differential equation can be solved to yield the solution

N(t) = N(0)e^(rt),

a trajectory known as Malthusian growth, after Thomas Malthus, who first described its dynamics in 1798. A population experiencing Malthusian growth follows an exponential curve, where N(0) is the initial population size. The population grows when r > 0, and declines when r < 0. The model is most applicable in cases where a few organisms have begun a colony and are rapidly growing without any limitations or restrictions impeding their growth (e.g. bacteria inoculated in rich media).
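As a small numerical sketch (the initial size and growth rates below are arbitrary illustrative values), the closed-form solution can be evaluated directly:

```python
import math

# Evaluate Malthusian growth N(t) = N(0) * exp(r * t) for a growing and a
# declining population; N(0) and r are illustrative assumptions.
def malthusian(n0, r, t):
    return n0 * math.exp(r * t)

n0 = 100.0                 # initial population size N(0)
for r in (0.1, -0.1):      # r > 0 grows, r < 0 declines
    print(f"r = {r:+.1f}: N(10) = {malthusian(n0, r, 10.0):.1f}")
```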
Logistic growth
The exponential growth model makes a number of assumptions, many of which often do not hold. For example, many factors affect the intrinsic growth rate, which is often not time-invariant. A simple modification of the exponential growth model is to assume that the intrinsic growth rate varies with population size. This is reasonable: the larger the population size, the fewer resources available, which can result in a lower birth rate and higher death rate. Hence, we can replace the time-invariant r with r'(t) = (b − aN(t)) − (d + cN(t)), where a and c are constants that modulate birth and death rates in a population-dependent manner (e.g. intraspecific competition). Both a and c will depend on other environmental factors which, we can for now, assume to be constant in this approximated model. The differential equation is now:

dN(t)/dt = ((b − aN(t)) − (d + cN(t)))N(t)

This can be rewritten as:

dN(t)/dt = rN(t)(1 − N(t)/K)

where r = b − d and K = (b − d)/(a + c).
The biological significance of K becomes apparent when stabilities of the equilibria of the system are considered. The constant K is the carrying capacity of the population. The equilibria of the system are N = 0 and N = K. If the system is linearized, it can be seen that N = 0 is an unstable equilibrium while K is a stable equilibrium.
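A short forward-Euler sketch in Python (parameter values are illustrative assumptions) shows trajectories approaching the stable equilibrium K from below and from above:

```python
# Numerically integrate logistic growth dN/dt = r*N*(1 - N/K) with Euler
# steps; r, K, the time step, and the initial sizes are illustrative.
def logistic_trajectory(n0, r=0.5, K=1000.0, dt=0.1, steps=200):
    n = n0
    for _ in range(steps):
        n += r * n * (1.0 - n / K) * dt
    return n  # population size at time t = dt * steps

for n0 in (10.0, 1500.0):  # start below and above the carrying capacity
    print(f"N(0) = {n0:6.1f} -> N(20) ~= {logistic_trajectory(n0):.1f}")
```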
Structured population growth
Another assumption of the exponential growth model is that all individuals within a population are identical and have the same probabilities of surviving and of reproducing. This is not a valid assumption for species with complex life histories. The exponential growth model can be modified to account for this, by tracking the number of individuals in different age classes (e.g. one-, two-, and three-year-olds) or different stage classes (juveniles, sub-adults, and adults) separately, and allowing individuals in each group to have their own survival and reproduction rates.
The general form of this model is

Nt+1 = LNt

where Nt is a vector of the number of individuals in each class at time t and L is a matrix that contains the survival probability and fecundity for each class. The matrix L is referred to as the Leslie matrix for age-structured models, and as the Lefkovitch matrix for stage-structured models.
If parameter values in L are estimated from demographic data on a specific population, a structured model can then be used to predict whether this population is expected to grow or decline in the long-term, and what the expected age distribution within the population will be. This has been done for a number of species including loggerhead sea turtles and right whales.
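The sketch below projects a hypothetical three-age-class population in Python; the fecundities and survival probabilities in the Leslie matrix are made-up illustrative numbers, not demographic estimates for the species mentioned above. The dominant eigenvalue of L gives the long-run growth rate per time step.

```python
import numpy as np

# Hypothetical Leslie matrix: fecundities on the first row, survival
# probabilities on the subdiagonal. All numbers are illustrative.
L = np.array([
    [0.0, 1.5, 2.0],   # offspring produced by age classes 1..3
    [0.5, 0.0, 0.0],   # survival from class 1 to class 2
    [0.0, 0.7, 0.0],   # survival from class 2 to class 3
])
N = np.array([100.0, 50.0, 20.0])  # initial numbers in each age class

for _ in range(50):                # project 50 time steps: N_{t+1} = L N_t
    N = L @ N

growth_rate = max(abs(np.linalg.eigvals(L)))  # dominant eigenvalue
print(f"long-run growth rate per step: {growth_rate:.3f}")
print("stable age distribution:", np.round(N / N.sum(), 3))
```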
Community ecology
An ecological community is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources. Interactions between these species form the first steps in analyzing more complex dynamics of ecosystems. These interactions shape the distribution and dynamics of species. Of these interactions, predation is one of the most widespread population activities.
Taken in its most general sense, predation comprises predator–prey, host–pathogen, and host–parasitoid interactions.
Predator–prey interaction
Predator–prey interactions exhibit natural oscillations in the populations of both predator and prey. In 1925, the American mathematician Alfred J. Lotka developed simple equations for predator–prey interactions in his book on biomathematics. The following year, the Italian mathematician Vito Volterra made a statistical analysis of fish catches in the Adriatic and independently developed the same equations. It is one of the earliest and most recognised ecological models, known as the Lotka-Volterra model:

dN/dt = rN − αNP
dP/dt = cαNP − dP

where N is the prey and P is the predator population sizes, r is the rate for prey growth, taken to be exponential in the absence of any predators, α is the prey mortality rate for per-capita predation (also called the 'attack rate'), c is the efficiency of conversion from prey to predator, and d is the exponential death rate for predators in the absence of any prey.
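A simple forward-Euler simulation (all numeric values below are illustrative assumptions, and the small time step is only a crude approximation of the continuous dynamics) reproduces the model's characteristic oscillations:

```python
# Euler integration of the Lotka-Volterra equations above, reusing the
# symbols r, alpha, c, d; every numeric value is an illustrative assumption.
r, alpha, c, d = 1.0, 0.1, 0.5, 0.5
N, P = 10.0, 5.0              # initial prey and predator populations
dt, steps = 0.001, 20000      # integrate up to t = 20

for step in range(steps):
    dN = (r * N - alpha * N * P) * dt
    dP = (c * alpha * N * P - d * P) * dt
    N, P = N + dN, P + dP
    if step % 5000 == 0:
        print(f"t = {step * dt:5.1f}: prey = {N:7.2f}, predators = {P:6.2f}")
```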
Volterra originally used the model to explain fluctuations in fish and shark populations after fishing was curtailed during the First World War. However, the equations have subsequently been applied more generally. Other examples of these models include the Lotka-Volterra model of the snowshoe hare and Canadian lynx in North America, infectious disease modeling such as the outbreak of SARS, and the biological control of California red scale by the introduction of its parasitoid, Aphytis melinus.
A credible, simple alternative to the Lotka-Volterra predator–prey model and their common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka–Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi–Ginzburg model as the first approximation.
Host–pathogen interaction
The second interaction, that of host and pathogen, differs from predator–prey interactions in that pathogens are much smaller, have much faster generation times, and require a host to reproduce. Therefore, only the host population is tracked in host–pathogen models. Compartmental models that categorize host population into groups such as susceptible, infected, and recovered (SIR) are commonly used.
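A minimal Euler-stepped SIR sketch (the transmission rate, recovery rate, and population sizes are illustrative assumptions) shows how the host population moves through the three compartments:

```python
# SIR compartmental model: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
# dR/dt = gamma*I, integrated with Euler steps. All values are illustrative.
beta, gamma = 0.3, 0.1        # transmission and recovery rates
S, I, R = 990.0, 10.0, 0.0    # susceptible, infected, recovered hosts
total = S + I + R
dt = 0.1

for _ in range(1500):         # integrate up to t = 150
    new_infections = beta * S * I / total * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"final: S = {S:.0f}, I = {I:.0f}, R = {R:.0f} (basic ratio = {beta / gamma:.1f})")
```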
Host–parasitoid interaction
The third interaction, that of host and parasitoid, can be analyzed by the Nicholson–Bailey model, which differs from Lotka-Volterra and SIR models in that it is discrete in time. This model, like that of Lotka-Volterra, tracks both populations explicitly. Typically, in its general form, it states:

Nt+1 = λNt f(Nt, Pt)
Pt+1 = cNt [1 − f(Nt, Pt)]

where f(Nt, Pt) describes the probability of infection (typically, Poisson distribution), λ is the per-capita growth rate of hosts in the absence of parasitoids, and c is the conversion efficiency, as in the Lotka-Volterra model.
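In the classic special case, f is taken as exp(−aPt), the zero term of the Poisson distribution, interpreted here as the fraction of hosts escaping parasitism. A short sketch (λ, c, a, and the starting sizes are illustrative assumptions) shows the model's well-known unstable oscillations:

```python
import math

# Nicholson-Bailey host-parasitoid dynamics with f(N, P) = exp(-a * P);
# all parameter values and starting populations are illustrative.
lam, c, a = 2.0, 1.0, 0.05
N, P = 25.0, 10.0

for t in range(10):
    escape = math.exp(-a * P)                  # fraction of hosts escaping
    N, P = lam * N * escape, c * N * (1.0 - escape)
    print(f"t = {t + 1}: hosts = {N:8.2f}, parasitoids = {P:8.2f}")
```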
Competition and mutualism
In studies of the populations of two species, the Lotka-Volterra system of equations has been extensively used to describe dynamics of behavior between two species, N1 and N2. Examples include relations between D. discoideum and E. coli, as well as theoretical analysis of the behavior of the system. In the two-species form, the equations are:

dN1/dt = r1N1(K1 − N1 + α12N2)/K1
dN2/dt = r2N2(K2 − N2 + α21N1)/K2

The r coefficients give a "base" growth rate to each species, while K coefficients correspond to the carrying capacity. What can really change the dynamics of a system, however, are the α terms. These describe the nature of the relationship between the two species. When α12 is negative, it means that N2 has a negative effect on N1, by competing with it, preying on it, or any number of other possibilities. When α12 is positive, however, it means that N2 has a positive effect on N1, through some kind of mutualistic interaction between the two.
When both α12 and α21 are negative, the relationship is described as competitive. In this case, each species detracts from the other, potentially over competition for scarce resources.
When both α12 and α21 are positive, the relationship becomes one of mutualism. In this case, each species provides a benefit to the other, such that the presence of one aids the population growth of the other.
See Competitive Lotka–Volterra equations for further extensions of this model.
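A brief Euler sketch (illustrative parameters chosen so the two species coexist) integrates the competitive case, with both α terms negative under the sign convention above:

```python
# Two-species competitive Lotka-Volterra dynamics in the form
# dNi/dt = ri*Ni*(Ki - Ni + aij*Nj)/Ki, where negative aij means competition.
# All numeric values are illustrative assumptions.
r1, r2 = 1.0, 0.8
K1, K2 = 100.0, 80.0
a12, a21 = -0.5, -0.4         # both negative: mutual competition
N1, N2 = 10.0, 10.0
dt = 0.01

for _ in range(10000):        # integrate up to t = 100
    dN1 = r1 * N1 * (K1 - N1 + a12 * N2) / K1 * dt
    dN2 = r2 * N2 * (K2 - N2 + a21 * N1) / K2 * dt
    N1, N2 = N1 + dN1, N2 + dN2

print(f"coexistence equilibrium: N1 ~= {N1:.1f}, N2 ~= {N2:.1f}")
```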
Neutral theory
Unified neutral theory is a hypothesis proposed by Stephen P. Hubbell in 2001. The hypothesis aims to explain the diversity and relative abundance of species in ecological communities, although like other neutral theories in ecology, Hubbell's hypothesis assumes that the differences between members of an ecological community of trophically similar species are "neutral," or irrelevant to their success. Neutrality means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis. This implies that biodiversity arises at random, as each species follows a random walk. This can be considered a null hypothesis to niche theory. The hypothesis has sparked controversy, and some authors consider it a more complex version of other null models that fit the data better.
Under unified neutral theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), providing all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interaction such as competing for limited food or light are allowed, so long as all individuals behave the same way. The theory makes predictions that have implications for the management of biodiversity, especially the management of rare species. It predicts the existence of a fundamental biodiversity constant, conventionally written θ, that appears to govern species richness on a wide variety of spatial and temporal scales.
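The random-walk character of neutral dynamics can be seen in a toy zero-sum simulation: at each step one randomly chosen individual dies and is replaced by the offspring of another, so no species has any advantage. The community size and initial species count below are arbitrary assumptions.

```python
import random

random.seed(1)
J, S = 200, 10                          # community size and species count (assumed)
community = [i % S for i in range(J)]   # equal initial abundances
for _ in range(200_000):
    dead = random.randrange(J)          # one individual dies...
    parent = random.randrange(J)        # ...and a random individual reproduces
    community[dead] = community[parent]
print("species remaining:", len(set(community)))
# Without immigration or speciation, drift steadily erodes diversity;
# Hubbell's theta enters when new species are re-injected at a constant rate.
```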
Hubbell built on earlier neutral concepts, including MacArthur & Wilson's theory of island biogeography and Gould's concepts of symmetry and null models.
Spatial ecology
Biogeography
Biogeography is the study of the distribution of species in space and time. It aims to reveal where organisms live, at what abundance, and why they are (or are not) found in a certain geographical area.
Biogeography is most keenly observed on islands, which has led to the development of the subdiscipline of island biogeography. These habitats are often more manageable areas of study because they are more condensed than larger ecosystems on the mainland. In 1967, Robert MacArthur and E.O. Wilson published The Theory of Island Biogeography. This showed that the species richness in an area could be predicted in terms of factors such as habitat area, immigration rate and extinction rate. The theory is considered one of the fundamentals of ecological theory. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
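The equilibrium prediction can be sketched with assumed linear rate functions: immigration of new species falls, and extinction rises, as the island fills, so richness settles where the two rates cross.

```python
# MacArthur-Wilson style equilibrium richness; linear rates are an assumption.
P = 1000            # species in the mainland source pool (assumed)
i, e = 0.02, 0.08   # immigration and extinction coefficients (assumed)

def immigration(S): return i * (P - S)  # fewer new arrivals as the island fills
def extinction(S):  return e * S        # more residents, more extinctions

S_star = i * P / (i + e)                # solve i*(P - S) = e*S
assert abs(immigration(S_star) - extinction(S_star)) < 1e-9
print(S_star)                           # 200.0 species at equilibrium
```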
r/K-selection theory
r/K selection theory is a population ecology concept and one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
Niche theory
Metapopulations
Spatial analysis of ecological systems often reveals that assumptions that are valid for spatially homogeneous populations – and indeed, intuitive – may no longer be valid when migratory subpopulations moving from one patch to another are considered. In a simple one-species formulation, a subpopulation may occupy a patch, move from one patch to another empty patch, or die out leaving an empty patch behind. In such a case, the proportion p of occupied patches may be represented as

dp/dt = m*p*(1-p) - e*p

where m is the rate of colonization, and e is the rate of extinction. In this model, if e < m, the steady state value of p is 1 – (e/m), while in the other case, all the patches will eventually be left empty. This model may be made more complex by addition of another species in several different ways, including but not limited to game theoretic approaches, predator–prey interactions, etc. We will consider here an extension of the previous one-species system for simplicity. Let us denote the proportion of patches occupied by the first population as p1, and that by the second as p2. Then,
In this case, if e is too high, p1 and p2 will be zero at steady state. However, when the rate of extinction is moderate, p1 and p2 can stably coexist. The steady state value of p2 is given by
(p*1 may be inferred by symmetry).
If e is zero, the dynamics of the system favor the species that is better at colonizing (i.e. has the higher m value). This leads to a very important result in theoretical ecology known as the Intermediate Disturbance Hypothesis, where the biodiversity (the number of species that coexist in the population) is maximized when the disturbance (of which e is a proxy here) is not too high or too low, but at intermediate levels.
The form of the differential equations used in this simplistic modelling approach can be modified. For example:
Colonization may be dependent on p linearly (m*(1-p)) as opposed to the non-linear m*p*(1-p) regime described above. This mode of replication of a species is called the “rain of propagules”, where there is an abundance of new individuals entering the population at every generation. In such a scenario, the steady state where the population is zero is usually unstable.
Extinction may depend non-linearly on p (e*p*(1-p)) as opposed to the linear (e*p) regime described above. This is referred to as the “rescue effect” and it is again harder to drive a population extinct under this regime.
The model can also be extended to combinations of the four possible linear or non-linear dependencies of colonization and extinction on p; these combinations are described in more detail in the literature.
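A small numerical sketch comparing the classic model with the two variants just described; the colonization and extinction rates are assumed values.

```python
# Levins-type patch occupancy and two variants; m, e and dt are assumptions.
m, e, dt = 0.5, 0.2, 0.01

def run(dp, p0=0.01, steps=200_000):
    p = p0
    for _ in range(steps):
        p += dp(p) * dt
    return p

classic   = run(lambda p: m * p * (1 - p) - e * p)            # -> 1 - e/m = 0.6
propagule = run(lambda p: m * (1 - p) - e * p)                # "rain of propagules"
rescue    = run(lambda p: m * p * (1 - p) - e * p * (1 - p))  # "rescue effect"
print(round(classic, 3), round(propagule, 3), round(rescue, 3))
# With m > e, the rescue-effect variant drives p toward full occupancy,
# and the propagule-rain variant has no empty (p = 0) steady state at all.
```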
Ecosystem ecology
Introducing new elements, whether biotic or abiotic, into ecosystems can be disruptive. In some cases, it leads to ecological collapse, trophic cascades and the death of many species within the ecosystem. The abstract notion of ecological health attempts to measure the robustness and recovery capacity for an ecosystem; i.e. how far the ecosystem is away from its steady state. Often, however, ecosystems rebound from a disruptive agent. The difference between collapse or rebound depends on the toxicity of the introduced element and the resiliency of the original ecosystem.
If ecosystems are governed primarily by stochastic processes, through which their subsequent state would be determined by both predictable and random actions, they may be more resilient to sudden change than each species individually. In the absence of a balance of nature, the species composition of ecosystems would undergo shifts that would depend on the nature of the change, but entire ecological collapse would probably be an infrequent event. In 1997, Robert Ulanowicz used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz describes approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow) and eutrophication.
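One concrete version of this information-theoretic approach is the average mutual information of a flow network, computed from the joint and marginal distributions of flows between compartments. The three-compartment flow matrix below is invented purely for illustration.

```python
import math

# Average mutual information (AMI) of an ecosystem flow network.
# The flow matrix is invented; T[i][j] is the flow from compartment i to j.
T = [[0.0, 8.0, 2.0],
     [0.0, 0.0, 6.0],
     [1.0, 0.0, 0.0]]
n = len(T)
total = sum(map(sum, T))
out_ = [sum(row) for row in T]                            # total outflow of each i
in_ = [sum(T[i][j] for i in range(n)) for j in range(n)]  # total inflow of each j

ami = 0.0
for i in range(n):
    for j in range(n):
        if T[i][j] > 0:
            p = T[i][j] / total
            ami += p * math.log2(T[i][j] * total / (out_[i] * in_[j]))
print(round(ami, 3), "bits")  # higher AMI = more constrained, organised flow structure
```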
Ecopath is a free ecosystem modelling software suite, initially developed by NOAA, and widely used in fisheries management as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems.
Food webs
Food webs provide a framework within which a complex network of predator–prey interactions can be organised. A food web model is a network of food chains. Each food chain starts with a primary producer or autotroph, an organism, such as a plant, which is able to manufacture its own food. Next in the chain is an organism that feeds on the primary producer, and the chain continues in this way as a string of successive predators. The organisms in each chain are grouped into trophic levels, based on how many links they are removed from the primary producers. The length of the chain, or trophic level, is a measure of the number of species encountered as energy or nutrients move from plants to top predators. Food energy flows from one organism to the next and to the next and so on, with some energy being lost at each level. At a given trophic level there may be one species or a group of species with the same predators and prey.
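The "links removed from the primary producers" definition can be computed mechanically on a food-web graph. The five-species web below is invented; each species is assigned a level by breadth-first search outward from the producers, so a species reachable by chains of different lengths (the fish here) receives its shortest-chain level.

```python
from collections import deque

# Invented toy food web: each consumer lists what it eats.
eats = {
    "grass": [], "algae": [],   # primary producers eat nothing
    "insect": ["grass"],
    "fish": ["algae", "insect"],
    "heron": ["fish"],
}
level = {sp: 1 for sp, foods in eats.items() if not foods}  # producers = level 1
queue = deque(level)
while queue:                                  # breadth-first search upward
    food = queue.popleft()
    for sp, foods in eats.items():
        if food in foods and sp not in level:
            level[sp] = level[food] + 1       # one more link from the producers
            queue.append(sp)
print(level)  # {'grass': 1, 'algae': 1, 'insect': 2, 'fish': 2, 'heron': 3}
```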
In 1927, Charles Elton published an influential synthesis on the use of food webs, which resulted in them becoming a central concept in ecology. In 1966, interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores, suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Sir Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs. According to their analyses, complex food webs should be less stable than simple food webs. The apparent paradox between the complexity of food webs observed in nature and the mathematical fragility of food web models is currently an area of intensive study and debate. The paradox may be due partially to conceptual differences between persistence of a food web and equilibrial stability of a food web.
Systems ecology
Systems ecology can be seen as an application of general systems theory to ecology. It takes a holistic and interdisciplinary approach to the study of ecological systems, and particularly ecosystems. Systems ecology is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. Like other fields in theoretical ecology, it uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. It also takes account of the energy flows through the different trophic levels in the ecological networks. Systems ecology also considers the external influence of ecological economics, which usually is not otherwise considered in ecosystem ecology. For the most part, systems ecology is a subfield of ecosystem ecology.
Ecophysiology
This is the study of how "the environment, both physical and biological, interacts with the physiology of an organism. It includes the effects of climate and nutrients on physiological processes in both plants and animals, and has a particular focus on how physiological processes scale with organism size".
Behavioral ecology
Swarm behaviour
Swarm behaviour is a collective behaviour exhibited by animals of similar size which aggregate together, perhaps milling about the same spot or perhaps migrating in some direction. Swarm behaviour is commonly exhibited by insects, but it also occurs in the flocking of birds, the schooling of fish and the herd behaviour of quadrupeds. It is a complex emergent behaviour that occurs when individual agents follow simple behavioral rules.
Recently, a number of mathematical models have been developed which explain many aspects of the emergent behaviour. Swarm algorithms follow either a Lagrangian approach or an Eulerian approach. The Eulerian approach views the swarm as a field, working with the density of the swarm and deriving mean field properties. It is a hydrodynamic approach, and can be useful for modelling the overall dynamics of large swarms (Toner J and Tu Y (1995) "Long-range order in a two-dimensional xy model: how birds fly together", Physical Review Letters, 75(23), 4326–4329). However, most models work with the Lagrangian approach, which is an agent-based model following the individual agents (points or particles) that make up the swarm. Individual particle models can follow information on heading and spacing that is lost in the Eulerian approach. Examples include ant colony optimization, self-propelled particles and particle swarm optimization.
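A minimal example of the Lagrangian approach is a Vicsek-style self-propelled particle model: each agent moves at constant speed and repeatedly aligns its heading with the mean heading of its neighbours, plus noise. All parameter values below are assumptions chosen for a quick demonstration.

```python
import cmath, math, random

random.seed(0)
# Vicsek-style swarm sketch; box size, speed, radius and noise are assumed.
N, L, v, r, eta = 100, 10.0, 0.1, 1.0, 0.3
pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
ang = [random.uniform(-math.pi, math.pi) for _ in range(N)]

for _ in range(200):
    new_ang = []
    for xi, yi in pos:
        # mean heading of neighbours within radius r (self included;
        # periodic wrapping of distances is ignored here for brevity)
        nbr = [a for (xj, yj), a in zip(pos, ang)
               if (xi - xj) ** 2 + (yi - yj) ** 2 < r * r]
        mean = cmath.phase(sum(cmath.exp(1j * a) for a in nbr))
        new_ang.append(mean + random.uniform(-eta, eta))  # alignment + noise
    ang = new_ang
    pos = [((x + v * math.cos(a)) % L, (y + v * math.sin(a)) % L)
           for (x, y), a in zip(pos, ang)]

order = abs(sum(cmath.exp(1j * a) for a in ang)) / N  # 1.0 = fully aligned
print(round(order, 2))
```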
At the cellular level, individual organisms also demonstrate swarm behavior. In decentralized systems, individuals act on their own decisions without overarching guidance. Studies have shown that individual Trichoplax adhaerens behave like self-propelled particles (SPPs) and collectively display a phase transition from ordered to disordered movement. Previously, it was thought that the surface-to-volume ratio was what limited animal size in the evolutionary game. Considering the collective behaviour of the individuals, it was suggested that order is another limiting factor. Central nervous systems were indicated to be vital for large multicellular animals in the evolutionary pathway.
Synchronization
Photinus carolinus fireflies will synchronize their flashing frequencies in a collective setting. Individually, there are no apparent patterns in the flashing; in a group setting, periodicity emerges in the flashing pattern. The coexistence of synchronization and asynchronization in the flashings of a system composed of multiple fireflies can be characterized by chimera states. Synchronization can occur spontaneously. Agent-based models have been useful in describing this unique phenomenon. The flashings of individual fireflies can be viewed as oscillators, and the global coupling models are similar to those used in condensed matter physics.
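This oscillator picture maps naturally onto a globally coupled Kuramoto model, sketched below with assumed natural frequencies and coupling strength; each firefly's flash phase is pulled toward the population's mean phase.

```python
import cmath, math, random

random.seed(0)
# Kuramoto sketch of firefly synchronization; all parameters are assumed.
N, K, dt = 50, 2.0, 0.01
omega = [random.gauss(1.0, 0.1) for _ in range(N)]   # natural flash frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

for _ in range(20_000):
    z = sum(cmath.exp(1j * t) for t in theta) / N    # mean field
    R, psi = abs(z), cmath.phase(z)
    # each oscillator drifts at its own frequency and is pulled toward
    # the mean phase psi with strength K*R (mean-field Kuramoto form)
    theta = [t + (w + K * R * math.sin(psi - t)) * dt
             for t, w in zip(theta, omega)]
print(round(abs(sum(cmath.exp(1j * t) for t in theta)) / N, 2))  # ~1.0 = synchronized
```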
Evolutionary ecology
The British biologist Alfred Russel Wallace is best known for independently proposing a theory of evolution due to natural selection that prompted Charles Darwin to publish his own theory. In his famous 1858 paper, Wallace proposed natural selection as a kind of feedback mechanism which keeps species and varieties adapted to their environment.
The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.
The cybernetician and anthropologist Gregory Bateson observed in the 1970s that, though writing it only as an example, Wallace had "probably said the most powerful thing that’d been said in the 19th Century". Subsequently, the connection between natural selection and systems theory has become an area of active research.
Other theories
In contrast to previous ecological theories which considered floods to be catastrophic events, the river flood pulse concept argues that the annual flood pulse is the most important aspect and the most biologically productive feature of a river's ecosystem (Benke, A. C., Chaubey, I., Ward, G. M., & Dunn, E. L. (2000). Flood Pulse Dynamics of an Unregulated River Floodplain in the Southeastern U.S. Coastal Plain. Ecology, 2730–2741).
History
Theoretical ecology draws on pioneering work done by G. Evelyn Hutchinson and his students. Brothers H.T. Odum and E.P. Odum are generally recognised as the founders of modern theoretical ecology. Robert MacArthur brought theory to community ecology. Daniel Simberloff was the student of E.O. Wilson, with whom MacArthur collaborated on The Theory of Island Biogeography, a seminal work in the development of theoretical ecology.
Simberloff added statistical rigour to experimental ecology and was a key figure in the SLOSS debate, about whether it is preferable to protect a single large or several small reserves. This resulted in the supporters of Jared Diamond's community assembly rules defending their ideas through Neutral Model Analysis. Simberloff also played a key role in the (still ongoing) debate on the utility of corridors for connecting isolated reserves.
Stephen P. Hubbell and Michael Rosenzweig combined theoretical and practical elements into works that extended MacArthur and Wilson's Island Biogeography Theory - Hubbell with his Unified Neutral Theory of Biodiversity and Biogeography and Rosenzweig with his Species Diversity in Space and Time.
Theoretical and mathematical ecologists
A tentative distinction can be made between mathematical ecologists, ecologists who apply mathematics to ecological problems, and mathematicians who develop the mathematics itself that arises out of ecological problems.
Some notable theoretical ecologists can be found in these categories:
Mathematical ecologists
Theoretical biologists
Journals
The American Naturalist
Journal of Mathematical Biology
Journal of Theoretical Biology
Theoretical Ecology
Theoretical Population Biology
Ecological Modelling
See also
Butterfly effect
Complex system biology
Ecological systems theory
Ecosystem model
Integrodifference equation – widely used to model the dispersal and growth of populations
Limiting similarity
Mathematical biology
Population dynamics
Population modeling
Quantitative ecology
Taylor's law
Theoretical biology
References
Further reading
The classic text is Theoretical Ecology: Principles and Applications, by Angela McLean and Robert May. The 2007 edition is published by Oxford University Press.
Bolker BM (2008) Ecological Models and Data in R. Princeton University Press.
Case TJ (2000) An Illustrated Guide to Theoretical Ecology. Oxford University Press.
Caswell H (2000) Matrix Population Models: Construction, Analysis, and Interpretation. Sinauer, 2nd Ed.
Edelstein-Keshet L (2005) Mathematical Models in Biology. Society for Industrial and Applied Mathematics.
Gotelli NJ (2008) A Primer of Ecology. Sinauer Associates, 4th Ed.
Gotelli NJ & A Ellison (2005) A Primer of Ecological Statistics. Sinauer Associates Publishers.
Hastings A (1996) Population Biology: Concepts and Models. Springer.
Hilborn R & M Mangel (1997) The Ecological Detective: Confronting Models with Data. Princeton University Press.
Kokko H (2007) Modelling for Field Biologists and Other Interesting People. Cambridge University Press.
Kot M (2001) Elements of Mathematical Ecology. Cambridge University Press.
Murray JD (2002) Mathematical Biology, Volume 1. Springer, 3rd Ed.
Murray JD (2003) Mathematical Biology, Volume 2. Springer, 3rd Ed.
Pastor J (2008) Mathematical Ecology of Populations and Ecosystems. Wiley-Blackwell.
Roughgarden J (1998) Primer of Ecological Theory. Prentice Hall.
Ulanowicz R (1997) Ecology: The Ascendant Perspective. Columbia University Press.
Ecology
Environmental philosophy
Environmental philosophy is the branch of philosophy that is concerned with the natural environment and humans' place within it. It asks crucial questions about human environmental relations such as "What do we mean when we talk about nature?" "What is the value of the natural, that is non-human environment to us, or in itself?" "How should we respond to environmental challenges such as environmental degradation, pollution and climate change?" "How can we best understand the relationship between the natural world and human technology and development?" and "What is our place in the natural world?" Environmental philosophy includes environmental ethics, environmental aesthetics, ecofeminism, environmental hermeneutics, and environmental theology. Some of the main areas of interest for environmental philosophers are:
Defining environment and nature
How to value the environment
Moral status of animals and plants
Endangered species
Environmentalism and deep ecology
Aesthetic value of nature
Intrinsic value
Wilderness
Restoration of nature
Consideration of future generations
Ecophenomenology
Contemporary issues
Modern issues within environmental philosophy include but are not restricted to the concerns of environmental activism, questions raised by science and technology, environmental justice, and climate change. These include issues related to the depletion of finite resources and other harmful and permanent effects brought on the environment by humans, as well as the ethical and practical problems raised by philosophies and practices of environmental conservation, restoration, and policy in general. Another question that modern environmental philosophers have taken up is "Do rivers have rights?" At the same time environmental philosophy deals with the value human beings attach to different kinds of environmental experience, particularly how experiences in or close to non-human environments contrast with urban or industrialized experiences, and how this varies across cultures, with close attention paid to indigenous peoples.
Modern history
Environmental philosophy emerged as a branch of philosophy in the 1970s. Early environmental philosophers include Seyyed Hossein Nasr, Richard Routley, Arne Næss, and J. Baird Callicott. The movement was an attempt to connect with humanity's sense of alienation from nature in a continuing fashion throughout history. This was very closely related to the development at the same time of ecofeminism, an intersecting discipline. Since then its areas of concern have expanded significantly.
The field is today characterized by a notable diversity of stylistic, philosophical and cultural approaches to human environmental relationships, from personal and poetic reflections on environmental experience and arguments for panpsychism to Malthusian applications of game theory or the question of how to put an economic value on nature's services. A major debate that arose in the 1970s and 80s was whether nature has intrinsic value in itself independent of human values or whether its value is merely instrumental, with ecocentric or deep ecology approaches emerging on the one hand versus consequentialist or pragmatist anthropocentric approaches on the other.
Another debate that arose at this time was the debate over whether there really is such a thing as wilderness or not, or whether it is merely a cultural construct with colonialist implications as suggested by William Cronon. Since then, readings of environmental history and discourse have become more critical and refined. In this ongoing debate, a diversity of dissenting voices have emerged from different cultures around the world questioning the dominance of Western assumptions, helping to transform the field into a global area of thought.
In recent decades, there has been a significant challenge to deep ecology and the concepts of nature that underlie it, some arguing that there is not really such a thing as nature at all beyond some self-contradictory and even politically dubious constructions of an ideal other that ignore the real human-environmental interactions that shape our world and lives. This has been alternately dubbed the postmodern, constructivist, and most recently post-naturalistic turn in environmental philosophy. Environmental aesthetics, design and restoration have emerged as important intersecting disciplines that keep shifting the boundaries of environmental thought, as have the science of climate change and biodiversity and the ethical, political and epistemological questions they raise.
Social ecology movement
In 1982, Murray Bookchin described his philosophy of Social Ecology, which provides a framework for understanding nature, our relationship with nature, and our relationships to each other.
According to this philosophy, defining nature as "unspoiled wilderness" denies that humans are biological creatures created by natural evolution. It also takes issue with the attitude that "everything that exists is natural", as this provides us with no framework for judging a landfill as less natural than a forest. Instead, social ecology defines nature as a tendency in healthy ecosystems toward greater levels of diversity, complementarity, and freedom. Practices that are congruent with these principles are more natural than those that are not.
Building from this foundation, Bookchin argues that "The ecological crisis is a social crisis":
Practices which simplify biodiversity and dominate nature (monocropping, overfishing, clearcutting, etc.) are linked to societal tendencies to simplify and dominate humanity.
Such societies create cultural institutions like poverty, racism, patriarchy, homophobia, and genocide from this same desire to simplify and dominate.
In turn, Social Ecology suggests that addressing the root causes of environmental degradation requires creating a society that promotes decentralization, interdependence, and direct democracy rather than profit extraction.
Deep ecology movement
In 1984, George Sessions and Arne Næss articulated the principles of the new Deep Ecology Movement.
These basic principles are:
The well-being and flourishing of human and non-human life have value.
Richness and diversity of life forms contribute to the realization of these values and are also values in themselves.
Humans have no right to reduce this richness and diversity except to satisfy vital needs.
The flourishing of human life and cultures is compatible with a substantial decrease in the human population.
Present human interference with the nonhuman world is excessive, and the situation is rapidly worsening.
Policies must therefore be changed. These policies affect basic economic, technological, and ideological structures. The resulting state of affairs will be deeply different from the present.
The ideological change is mainly that of appreciating life quality (dwelling in situations of inherent value), rather than adhering to an increasingly higher standard of living. There will be a profound awareness of the difference between big and great.
Those who subscribe to the foregoing points have an obligation directly or indirectly to try to implement the necessary changes.
Resacralization of nature
See also
Environmental Philosophy (journal)
Environmental Values
Environmental Ethics (journal)
List of environmental philosophers
Environmental hermeneutics
References
Notes
Further reading
Armstrong, Susan, Richard Botzler. Environmental Ethics: Divergence and Convergence, McGraw-Hill, Inc., New York, New York.
Auer, Matthew, 2019. Environmental Aesthetics in the Age of Climate Change, Sustainability, 11 (18), 5001.
Benson, John, 2000. Environmental Ethics: An Introduction with Readings, Psychology Press.
Callicott, J. Baird, and Michael Nelson, 1998. The Great New Wilderness Debate, University of Georgia Press.
Conesa-Sevilla, J., 2006. The Intrinsic Value of the Whole: Cognitive and Utilitarian Evaluative Processes as they Pertain to Ecocentric, Deep Ecological, and Ecopsychological "Valuing", The Trumpeter, 22 (2), 26-42.
Derr, Patrick G., Edward McNamara, 2003. Case Studies in Environmental Ethics, Rowman & Littlefield Publishers.
DesJardins, Joseph R., Environmental Ethics, Wadsworth Publishing Company, ITP, An International Thomson Publishing Company, Belmont, California. A Division of Wadsworth, Inc.
Devall, W. and G. Sessions. 1985. Deep Ecology: Living As if Nature Mattered, Salt Lake City: Gibbs M. Smith, Inc.
Drengson, Inoue, 1995. "The Deep Ecology Movement", North Atlantic Books, Berkeley, California.
Foltz, Bruce V., Robert Frodeman. 2004. Rethinking Nature, Indiana University Press, 601 North Morton Street, Bloomington, IN 47404-3797
Gade, Anna M. 2019. Muslim Environmentalisms: Religious and Social Foundations, Columbia University Press, New York
Keulartz, Jozef, 1999. The Struggle for Nature: A Critique of Environmental Philosophy, Routledge.
LaFreniere, Gilbert F, 2007. The Decline of Nature: Environmental History and the Western Worldview, Academica Press, Bethesda, MD
Light, Andrew, and Eric Katz, 1996. Environmental Pragmatism, Psychology Press.
Mannison, D., M. McRobbie, and R. Routley (ed), 1980. Environmental Philosophy, Australian National University
Matthews, Steve, 2002. A Hybrid Theory of Environmentalism, Essays in Philosophy, 3. https://core.ac.uk/download/pdf/48856927.pdf
Næss, A. 1989. Ecology, Community and Lifestyle: Outline of an Ecosophy, Translated by D. Rothenberg. Cambridge: Cambridge University Press.
Oelschlaeger, Max, 1993. The Idea of Wilderness: From Prehistory to the Age of Ecology, New Haven: Yale University Press.
Pojman, Louis P., Paul Pojman. Environmental Ethics, Thomson-Wadsworth, United States
Sarvis, Will. Embracing Philanthropic Environmentalism: The Grand Responsibility of Stewardship, (McFarland, 2019).
Sherer, D., and Thomas Attig, eds. 1983. Ethics and the Environment, Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632.
VanDeVeer, Donald, Christine Pierce. The Environmental Ethics and Policy Book, Wadsworth Publishing Company. An International Thomson Publishing Company
Vogel, Steven, 1999. Environmental Philosophy After the End of Nature, Environmental Ethics 24 (1):23-39
Weston, 1999. An Invitation to Environmental Philosophy, Oxford University Press, New York, New York.
Zimmerman, Michael E., J. Baird Callicott, George Sessions, Karen J. Warren, John Clark. 1993. Environmental Philosophy: From Animal Rights to Radical Ecology, Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632
External links
Bright green environmentalism
Bright green environmentalism is an environmental philosophy and movement that emphasizes the use of advanced technology, social innovation, eco-innovation, and sustainable design to address environmental challenges. This approach contrasts with more traditional forms of environmentalism that may advocate for reduced consumption or a return to simpler lifestyles.
Origin and evolution of bright green thinking
The term bright green, coined in 2003 by writer Alex Steffen, refers to the fast-growing new wing of environmentalism, distinct from traditional forms. Bright green environmentalism aims to provide prosperity in an ecologically sustainable way through the use of new technologies and improved design.
Proponents promote and advocate for green energy, electric vehicles, efficient manufacturing systems, bio and nanotechnologies, ubiquitous computing, dense urban settlements, closed loop materials cycles and sustainable product designs. One-planet living is a commonly used phrase. Their principal focus is on the idea that through a combination of well-built communities, new technologies and sustainable living practices, the quality of life can actually be improved even while ecological footprints shrink.
The term bright green has been used with increased frequency due to the promulgation of these ideas through the Internet and recent coverage by some traditional media.
Dark greens, light greens and bright greens
Alex Steffen describes contemporary environmentalists as being split into three groups: dark, light, and bright greens.
Light Green
Light greens see protecting the environment first and foremost as a personal responsibility. They fall into the transformational activist end of the spectrum, but light greens do not emphasize environmentalism as a distinct political ideology, or even seek fundamental political reform. Instead, they often focus on environmentalism as a lifestyle choice. The motto "Green is the new black" sums up this way of thinking for many. This is different from the term lite green, which some environmentalists use to describe products or practices they believe are greenwashing: products and practices that pretend to achieve more change than they actually do (if any).
Dark Green
In contrast, dark greens believe that environmental problems are an inherent part of industrialized, capitalist civilization, and seek radical political change. Dark greens believe that currently and historically dominant modes of societal organization inevitably lead to consumerism, overconsumption, waste, alienation from nature and resource depletion. Dark greens claim this is caused by the emphasis on economic growth that exists within all existing ideologies, a tendency sometimes referred to as growth mania. The dark green brand of environmentalism is associated with ideas of ecocentrism, deep ecology, degrowth, anti-consumerism, post-materialism, holism, the Gaia hypothesis of James Lovelock, and sometimes a support for a reduction in human numbers and/or a relinquishment of technology to reduce humanity's effect on the biosphere.
Contrast between Light Green and Dark Green
In The Song of the Earth, Jonathan Bate notes that there are typically significant divisions within environmental theory. He identifies one group as “light Greens” or “environmentalists,” who view environmental protection primarily as a personal responsibility. The other group, termed “dark Greens” or “deep ecologists,” believes that environmental issues are fundamentally tied to industrialized civilization and advocate for radical political changes. This distinction can be summarized as “Know Technology” versus “No Technology” (Suresh Frederick in Ecocriticism: Paradigms and Praxis).
Bright Green
More recently, bright greens have emerged as a group of environmentalists who believe that radical changes are needed in the economic and political operation of society in order to make it sustainable, but that better designs, new technologies and more widely distributed social innovations are the means to make those changes, and that society can neither stop nor protest its way to sustainability. As Ross Robertson writes,
See also
References
External links
The Viridian Design Movement
Environmentalism
Green politics
Ecomodernism
Agrarianism
Agrarianism is a social and political philosophy that advocates for a return to subsistence agriculture, family farming, widespread property ownership, and political decentralization. Those who adhere to agrarianism tend to value traditional forms of local community over urban modernity. Agrarian political parties sometimes aim to support the rights and sustainability of small farmers and poor peasants against the wealthy in society.
Philosophy
Some scholars suggest that agrarianism espouses the superiority of rural society to urban society and the independent farmer as superior to the paid worker, and sees farming as a way of life that can shape the ideal social values. It stresses the superiority of a simpler rural life in comparison to the complexity of urban life. For example, M. Thomas Inge defines agrarianism by the following basic tenets:
Farming is the sole occupation that offers total independence and self-sufficiency.
Urban life, capitalism, and technology destroy independence and dignity and foster vice and weakness.
The agricultural community, with its fellowship of labor and co-operation, is the model society.
The farmer has a solid, stable position in the world order. They have "a sense of identity, a sense of historical and religious tradition, a feeling of belonging to a concrete family, place, and region, which are psychologically and culturally beneficial." The harmony of their life checks the encroachments of a fragmented, alienated modern society.
Cultivation of the soil "has within it a positive spiritual good" and from it the cultivator acquires the virtues of "honor, manliness, self-reliance, courage, moral integrity, and hospitality." They result from a direct contact with nature and, through nature, a closer relationship to God. The agrarian is blessed in that they follow the example of God in creating order out of chaos.
History
The philosophical roots of agrarianism include European and Chinese philosophers. The Chinese school of Agriculturalism (农家/農家) was a philosophy that advocated peasant utopian communalism and egalitarianism. In societies influenced by Confucianism that had as its foundation that humans are innately good, the farmer was considered an esteemed productive member of society, but merchants who made money were looked down upon. That influenced European intellectuals like François Quesnay, an avid Confucianist and advocate of China's agrarian policies, in forming the French agrarian philosophy of physiocracy. The physiocrats, along with the ideas of John Locke and the Romantic Era, formed the basis of modern European and American agrarianism.
Types of agrarianism
Physiocracy
Jeffersonian democracy
The United States president Thomas Jefferson was an agrarian who based his ideas about the budding American democracy around the notion that farmers are "the most valuable citizens" and the truest republicans. Jefferson and his support base were committed to American republicanism, which they saw as being in opposition to aristocracy and corruption, and which prioritized virtue, exemplified by the "yeoman farmer", "planters", and the "plain folk". In praising the rural farmfolk, the Jeffersonians felt that financiers, bankers and industrialists created "cesspools of corruption" in the cities and should thus be avoided.
The Jeffersonians sought to align the American economy more with agriculture than industry. Part of their motive to do so was Jefferson's fear that the over-industrialization of America would create a class of wage slaves who relied on their employers for income and sustenance. In turn, these workers would cease to be independent voters as their vote could be manipulated by said employers. To counter this, Jefferson introduced, as scholar Clay Jenkinson noted, "a graduated income tax that would serve as a disincentive to vast accumulations of wealth and would make funds available for some sort of benign redistribution downward" and tariffs on imported articles, which were mainly purchased by the wealthy. In 1811, Jefferson, writing to a friend, explained: "these revenues will be levied entirely on the rich... . the rich alone use imported articles, and on these alone the whole taxes of the general government are levied. the poor man ... pays not a farthing of tax to the general government, but on his salt."
There is general agreement that the substantial United States' federal policy of offering land grants (such as thousands of gifts of land to veterans) had a positive impact on economic development in the 19th century.
Agrarian socialism
Agrarian socialism is a form of agrarianism that is anti-capitalist in nature and seeks to introduce socialist economic systems in their stead.
Zapatismo
Notable agrarian socialists include Emiliano Zapata who was a leading figure in the Mexican Revolution. As part of the Liberation Army of the South, his group of revolutionaries fought on behalf of the Mexican peasants, whom they saw as exploited by the landowning classes. Zapata published the Plan of Ayala, which called for significant land reforms and land redistribution in Mexico as part of the revolution. Zapata was killed and his forces crushed over the course of the Revolution, but his political ideas lived on in the form of Zapatismo.
Zapatismo would form the basis for neozapatismo, the ideology of the Zapatista Army of National Liberation. Known as Ejército Zapatista de Liberación Nacional or EZLN in Spanish, EZLN is a far-left libertarian socialist political and militant group that emerged in the state of Chiapas in southmost Mexico in 1994. EZLN and Neozapatismo, as explicit in their name, seek to revive the agrarian socialist movement of Zapata, but fuse it with new elements such as a commitment to indigenous rights and community-level decision making.
Subcommander Marcos, a leading member of the movement, argues that the people's collective ownership of the land was and is the basis for all subsequent developments the movement sought to create: "When the land became property of the peasants ... when the land passed into the hands of those who work it ... [This was] the starting point for advances in government, health, education, housing, nutrition, women's participation, trade, culture, communication, and information ... [it was] recovering the means of production, in this case, the land, animals, and machines that were in the hands of large property owners."
Maoism
Maoism, the far-left ideology of Mao Zedong and his followers, places a heavy emphasis on the role of peasants in its goals. In contrast to other Marxist schools of thought, which normally seek to acquire the support of urban workers, Maoism sees the peasantry as key. Believing that "political power grows out of the barrel of a gun", Maoism saw the Chinese peasantry as the prime source for a Marxist vanguard because it possessed two qualities: (i) they were poor, and (ii) they were a political blank slate; in Mao's words, "A clean sheet of paper has no blotches, and so the newest and most beautiful words can be written on it". During the Chinese Civil War and the Second Sino-Japanese War, Mao and the Chinese Communist Party made extensive use of peasants and rural bases in their military tactics, often eschewing the cities.
Following the eventual victory of the Communist Party in both wars, the countryside and how it should be run remained a focus for Mao. In 1958, Mao launched the Great Leap Forward, a social and economic campaign which, amongst other things, altered many aspects of rural Chinese life. It introduced mandatory collective farming and forced the peasantry to organize itself into communal living units which were known as people's communes. These communes, which consisted of 5,000 people on average, were expected to meet high production quotas while the peasants who lived on them adapted to this radically new way of life. The communes were run as co-operatives where wages and money were replaced by work points. Peasants who criticised this new system were persecuted as "rightists" and "counter-revolutionaries". Leaving the communes was forbidden and escaping from them was difficult or impossible, and those who attempted it were subjected to party-orchestrated "public struggle sessions," which further jeopardized their survival. These public criticism sessions were often used to intimidate the peasants into obeying local officials and they often devolved into little more than public beatings.
On the communes, experiments were conducted to find new methods of planting crops, efforts were made to construct new irrigation systems on a massive scale, and the communes were all encouraged to produce steel in backyard furnaces as part of an effort to increase steel production. However, following the Anti-Rightist Campaign, Mao had instilled a mass distrust of intellectuals into China, and thus engineers often were not consulted with regard to the new irrigation systems, and the wisdom of asking untrained peasants to produce good quality steel from scrap iron was not publicly questioned. Similarly, the experimentation with the crops did not produce results. In addition, the Four Pests Campaign was launched, in which the peasants were called upon to destroy sparrows and other wild birds that ate crop seeds, in order to protect fields. Pest birds were shot down or scared away from landing until they dropped from exhaustion. This campaign resulted in an ecological disaster that saw an explosion of the vermin population, especially crop-eating insects, which were consequently no longer in danger of being killed by predators.
None of these new systems was working, but local leaders did not dare to say so; instead, they falsified reports so as not to be punished for failing to meet the quotas. In many cases they stated that they were greatly exceeding their quotas, and in turn, the Chinese state developed a completely false sense of success with regard to the commune system.
All of this culminated in the Great Chinese Famine, which began in 1959, lasted 3 years, and saw an estimated 15 to 30 million Chinese people die. A combination of bad weather and the new, failed farming techniques that were introduced by the state led to massive shortages of food. By 1962, the Great Leap Forward was declared to be at an end.
In the late 1960s and early 1970s, Mao once again radically altered life in rural China with the launching of the Down to the Countryside Movement. As a response to the Great Chinese Famine, the Chinese President Liu Shaoqi began "sending down" urban youths to rural China in order to recover its population losses and alleviate overcrowding in the cities. However, Mao turned the practice into a political crusade, declaring that the sending down would strip the youth of any bourgeois tendencies by forcing them to learn from the unprivileged rural peasants. In reality, it was the Communist Party's attempt to rein in the Red Guards, who had become uncontrollable during the course of the Cultural Revolution. 10% of the 1970 urban population of China was sent out to remote rural villages, often in Inner Mongolia. The villages, which were still recovering poorly from the effects of the Great Chinese Famine, did not have the excess resources needed to support the newcomers. Furthermore, the so-called "sent-down youth" had no agricultural experience; they were unaccustomed to the harsh lifestyle of the countryside, and their unskilled labor provided little benefit to the agricultural sector. As a result, many of the sent-down youth died in the countryside. The relocation of the youths was originally intended to be permanent, but by the end of the Cultural Revolution, the Communist Party relented and some of those who had the capacity to return to the cities were allowed to do so.
In imitation of Mao's policies, the Khmer Rouge of Cambodia (who were heavily funded and supported by the People's Republic of China) created their own version of the Great Leap Forward, known as the "Maha Lout Ploh". With the Great Leap Forward as its model, it had similarly disastrous effects, contributing to what is now known as the Cambodian genocide. As part of the Maha Lout Ploh, the Khmer Rouge sought to create an entirely agrarian socialist society by forcibly relocating 100,000 people from Cambodia's cities into newly created communes. The Khmer Rouge leader, Pol Pot, sought to "purify" the country by setting it back to "Year Zero", freeing it from "corrupting influences". Besides trying to completely de-urbanize Cambodia, ethnic minorities were slaughtered along with anyone else suspected of being a "reactionary" or a member of the "bourgeoisie", to the point that wearing glasses was seen as grounds for execution. The killings were only brought to an end when Cambodia was invaded by the neighboring socialist nation of Vietnam, whose army toppled the Khmer Rouge. With Cambodia's entire society and economy in disarray, including its agricultural sector, the country plunged into renewed famine due to vast food shortages. As international journalists began to report on the situation and send images of it out to the world, a massive international response was provoked, leading to one of the most concentrated relief efforts of its time.
Notable agrarian parties
Peasant parties first appeared across Eastern Europe between 1860 and 1910, when commercialized agriculture and world market forces disrupted traditional rural society, and the railway and growing literacy facilitated the work of roving organizers. Agrarian parties advocated land reforms to redistribute land on large estates among those who work it. They also wanted village cooperatives to keep the profit from crop sales in local hands and credit institutions to underwrite needed improvements. Many peasant parties were also nationalist parties because peasants often worked their land for the benefit of landlords of different ethnicity.
Peasant parties rarely had any power before World War I but some became influential in the interwar era, especially in Bulgaria and Czechoslovakia. For a while, in the 1920s and the 1930s, there was a Green International (International Agrarian Bureau) based on the peasant parties in Bulgaria, Czechoslovakia, Poland, and Serbia. It functioned primarily as an information center that spread the ideas of agrarianism, combated socialism on the left and landlords on the right, and never launched any significant activities.
Europe
Bulgaria
In Bulgaria, the Bulgarian Agrarian National Union (BZNS) was organized in 1899 to resist taxes and build cooperatives. BZNS came to power in 1919 and introduced many economic, social, and legal reforms. However, conservative forces crushed BZNS in a 1923 coup and assassinated its leader, Aleksandar Stamboliyski (1879–1923). BZNS was made into a communist puppet group until 1989, when it reorganized as a genuine party.
Czechoslovakia
In Czechoslovakia, the Republican Party of Agricultural and Smallholder People often shared power in parliament as a partner in the five-party pětka coalition. The party's leader, Antonín Švehla (1873–1933), was prime minister several times. It was consistently the strongest party, forming and dominating coalitions. It moved beyond its original agrarian base to reach middle-class voters. The party was banned by the National Front after the Second World War.
France
In France, the Hunting, Fishing, Nature, Tradition party is a moderate conservative, agrarian party, which reached a peak of 4.23% in the 2002 French presidential election. It later became affiliated with France's main conservative party, the Union for a Popular Movement. More recently, the Resistons! movement of Jean Lassalle has espoused agrarianism.
Hungary
In Hungary, the first major agrarian party, a smallholders' party, was founded in 1908. The party joined the government in the 1920s but later lost influence within it. A new party, the Independent Smallholders, Agrarian Workers and Civic Party, was established in 1930 with a more radical program of larger-scale land redistribution. They implemented this program together with the other coalition parties after World War II. However, after 1949 the party was outlawed when a one-party system was introduced. It was part of the government again in 1990–1994 and 1998–2002, after which it lost political support. The ruling Fidesz party has an agrarian faction and has promoted agrarian interests since 2010, with the emphasis now placed on supporting larger family farms over small-holders.
Ireland
In the late 19th century, the Irish National Land League aimed to abolish landlordism in Ireland and enable tenant farmers to own the land they worked on. The "Land War" of 1878–1909 led to the Irish Land Acts, ending absentee landlords and ground rent and redistributing land among peasant farmers.
Post-independence, the Farmers' Party operated in the Irish Free State from 1922, folding into the National Centre Party in 1932. It was mostly supported by wealthy farmers in the east of Ireland.
Clann na Talmhan (Family of the Land; also called the National Agricultural Party) was founded in 1938. They focused more on the poor smallholders of the west, supporting land reclamation, afforestation, social democracy and rates reform. They formed part of the governing coalition of the Government of the 13th Dáil and Government of the 15th Dáil. Economic improvement in the 1960s saw farmers vote for other parties and Clann na Talmhan disbanded in 1965.
Kazakhstan
In Kazakhstan, the Peasants' Union, originally a communist organization, was formed as one of first agrarian parties in independent Kazakhstan and would win four seats in the 1994 legislative election. The Agrarian Party of Kazakhstan, led by Romin Madinov, was founded in 1999, which favored the privatization of agricultural land, developments towards rural infrastructure, as well as changes in the tax system in agrarian economy. The party would go on to win three Mäjilis seats in the 1999 legislative election and eventually unite with the Civic Party of Kazakhstan to form the pro-government Agrarian-Industrial Union of Workers (AIST) bloc that would be chaired by Madinov for the 2004 legislative election, with the AIST bloc winning 11 seats in the Mäjilis. From there, the bloc remained short-lived as it would merge with the ruling Nur Otan party in 2006.
Several other parties in Kazakhstan over the years have embraced agrarian policies in their programs in an effort to appeal towards a large rural Kazakh demographic base, which included Amanat, ADAL, and Respublica.
Since the late 2000s, the "Auyl" People's Democratic Patriotic Party has been the largest and most influential agrarian-oriented party in Kazakhstan; its presidential candidate Jiguli Dairabaev finished second in the 2022 presidential election with 3.4% of the vote. In the 2023 legislative election, the Auyl party was represented in parliament for the first time after winning nine seats in the lower chamber Mäjilis. The party raises rural issues in regard to decaying villages, rural development and the agro-industrial complex, and the social security of the rural population, and has consistently opposed the ongoing rural flight in Kazakhstan.
Latvia
In Latvia, the Union of Greens and Farmers is supportive of traditional small farms and perceives them as more environmentally friendly than large-scale farming: Nature is threatened by development, while small farms are threatened by large industrial-scale farms.
Lithuania
In Lithuania, the government led by the Lithuanian Farmers and Greens Union was in power between 2016 and 2020.
Nordic countries
Poland
In Poland, the Polish People's Party (Polskie Stronnictwo Ludowe, PSL) traces its tradition to an agrarian party in Austro-Hungarian-controlled Galician Poland. After the fall of the communist regime, PSL's biggest success came in the 1993 elections, when it won 132 of 460 parliamentary seats. Since then, PSL's support has steadily declined; in 2019 it formed the Polish Coalition with the anti-establishment, direct-democracy Kukiz'15 party and won 8.5% of the popular vote. PSL tends to achieve much better results in local elections: in the 2014 local elections it won 23.88% of the vote.
The right-wing Law and Justice party has also become supportive of agrarian policies in recent years, and polls show that most of its support comes from rural areas. The AGROunia movement also exhibits features of agrarianism.
Romania
In Romania, older parties from Transylvania, Moldavia, and Wallachia merged to become the National Peasants' Party (PNȚ) in 1926. Iuliu Maniu (1873–1953) was prime minister with an agrarian cabinet from 1928 to 1930 and briefly in 1932–1933, but the Great Depression made the proposed reforms impossible. The communist administration dissolved the party in 1947 (along with other historical parties such as the National Liberal Party), but it reformed in 1989 after the communists fell from power.
The reformed party, which also incorporated elements of Christian democracy in its ideology, governed Romania as part of the Romanian Democratic Convention (CDR) between 1996 and 2000.
Serbia
In Serbia, Nikola Pašić (1845–1926) and his People's Radical Party dominated Serbian politics after 1903. The party also monopolized power in Yugoslavia from 1918 to 1929. During the dictatorship of the 1930s, the prime minister was from that party.
Ukraine
In Ukraine, the Radical Party of Oleh Lyashko has promised to purify the country of oligarchs "with a pitchfork". The party advocates a number of traditional left-wing positions (a progressive tax structure, a ban on agricultural land sale and eliminating the illegal land market, a tenfold increase in budget spending on health, setting up primary health centres in every village) and mixes them with strong nationalist sentiments.
United Kingdom
In land law, the heyday of English, Irish (and thus Welsh) agrarianism lasted until 1603, led by the Tudor royal advisors, who sought to maintain a broad pool of agricultural commoners from which to draw military men, against the interests of larger landowners who sought enclosure (meaning complete private control of common land, over which by custom and common law lords of the manor had always enjoyed minor rights). This heyday was eroded by hundreds of Acts of Parliament expressly permitting enclosure, chiefly from 1650 to the 1810s. Politicians who reacted strongly against this included the Levellers, the anti-industrialist Luddites (who went beyond opposing new weaving technology), and, later, radicals such as William Cobbett.
A high level of net national or local self-sufficiency has a strong base in campaigns and movements. In the 19th century such advocates included the Peelites and most Conservatives. The 20th century saw the growth or founding of influential non-governmental organisations, such as the National Farmers' Union of England and Wales, the Campaign to Protect Rural England and Friends of the Earth (EWNI), and of the English and Welsh, Scottish and Northern Irish political parties prefixed by "Green" and focussed on Green politics. The 21st century has already seen decarbonisation in electricity markets. Following protests and charitable lobbying, local food has seen growing market share, sometimes backed by wording in public policy papers and manifestos. The UK has many sustainability-prioritising businesses, green charity campaigns, events and lobby groups, ranging from those espousing allotment gardens (hobby community farming) through to those with a clear policy of local food and/or self-sustainability models.
Oceania
Australia
The National Party of Australia (formerly called the Country Party), from the 1920s to the 1970s, promulgated its version of agrarianism, which it called "countrymindedness". The goal was to enhance the status of the graziers (operators of big sheep stations) and small farmers, and to justify subsidies for them.
New Zealand
The New Zealand Liberal Party aggressively promoted agrarianism in its heyday (1891–1912). The landed gentry and aristocracy ruled Britain at this time. New Zealand never had an aristocracy but its wealthy landowners largely controlled politics before 1891. The Liberal Party set out to change that by a policy it called "populism." Richard Seddon had proclaimed the goal as early as 1884: "It is the rich and the poor; it is the wealthy and the landowners against the middle and labouring classes. That, Sir, shows the real political position of New Zealand." The Liberal strategy was to create a large class of small landowning farmers who supported Liberal ideals. The Liberal government also established the basis of the later welfare state such as old age pensions and developed a system for settling industrial disputes, which was accepted by both employers and trade unions. In 1893, it extended voting rights to women, making New Zealand the first country in the world to do so.
To obtain land for farmers, the Liberal government purchased large areas of Maori land between 1891 and 1911. The government also purchased land from large estate holders for subdivision and closer settlement by small farmers. The Advances to Settlers Act (1894) provided low-interest mortgages, and the agriculture department disseminated information on the best farming methods. The Liberals proclaimed success in forging an egalitarian, anti-monopoly land policy. The policy built up support for the Liberal Party in rural North Island electorates. By 1903, the Liberals were so dominant that there was no longer an organized opposition in Parliament.
North America
The United States and Canada both saw a rise of agrarian-oriented parties in the early twentieth century, as economic troubles motivated farming communities to become politically active. It has been proposed that the different responses to agrarian protest largely determined the course of the power generated by these newly energized rural factions. According to the sociologist Barry Eidlin:
"In the United States, Democrats adopted a co-optive response to farmer and labor protest, incorporating these constituencies into the New Deal coalition. In Canada, both mainstream parties adopted a coercive response, leaving these constituencies politically excluded and available for an independent left coalition."
These reactions may have helped determine the outcome of agrarian power and political associations in the US and Canada.
United States of America
Kansas
Economic desperation among farmers across the state of Kansas in the nineteenth century spurred the creation of the People's Party in 1890, which won control of the governor's office in 1892. The party, consisting of a mix of Democrats, Socialists, Populists, and Fusionists, soon found itself buckling under internal conflict over the unlimited coinage of silver. The Populists permanently lost power in 1898.
Oklahoma
Oklahoma farmers became politically active in the early twentieth century in response to the outbreak of war, depressed crop prices, and frustrated hopes of ever owning their own farms. Tenancy rates had reportedly reached 55% in Oklahoma by 1910. Under these pressures, agrarian counties in Oklahoma supported Socialist policies and politics, with the party proposing a deeply agrarian-radical platform:
"...the platform proposed a "Renters and Farmer's Program" which was strongly agrarian radical in its insistence upon various measures to put land into "the hands of the actual tillers of the soil." Although it did not propose to nationalize privately owned land, it did offer numerous plans to enlarge the state's public domain, from which land would be rented at prevailing share rents to tenants until they had paid rent equal to the land's value. The tenant and his children would have the right of occupancy and use, but the 'title' would remain in the 'commonwealth', an arrangement that might be aptly termed 'Socialist fee simple'. They proposed to exempt from taxation all farm dwellings, animals, and improvements up to the value of $1,000. The State Board of Agriculture would encourage 'co-operative societies' of farmers to make plans for the purchase of land, seed, tools, and for preparing and selling produce. In order to give farmers essential services at cost, the Socialists called for the creation of state banks and mortgage agencies, crop insurance, elevators, and warehouses."
This agrarian-backed Socialist party won numerous offices, causing a panic within the local Democratic party. The agrarian-Socialist movement was later inhibited by voter suppression laws aimed at reducing the participation of voters of color, as well as by national wartime policies intended to disrupt political elements considered subversive. The party's power peaked in 1914.
Back-to-the-land movement
Agrarianism is similar to, but not identical with, the back-to-the-land movement. Agrarianism concentrates on the fundamental goods of the earth, on communities of more limited economic and political scale than in modern society, and on simple living, even when the shift involves questioning the "progressive" character of some recent social and economic developments. Thus, agrarianism is not industrial farming, with its specialization in single products and its industrial scale.
See also
Agrarian socialism
Farmer–Labor Party, USA early 20th century
Jeffersonian democracy
Labour-Farmer Party, Japan 1920s
Minnesota Farmer–Labor Party, USA early 20th century
Nordic agrarian parties
Yeoman, English farmers
References
Further reading
Agrarian values
Brass, Tom. Peasants, Populism and Postmodernism: The Return of the Agrarian Myth (2000)
Inge, M. Thomas. Agrarianism in American Literature (1969)
Kolodny, Annette. The Land before Her: Fantasy and Experience of the American Frontiers, 1630–1860 (1984). online edition
Marx, Leo. The Machine in the Garden: Technology and the Pastoral Ideal in America (1964).
Murphy, Paul V. The Rebuke of History: The Southern Agrarians and American Conservative Thought (2000)
Parrington, Vernon. Main Currents in American Thought (1927), 3-vol online
Thompson, Paul, and Thomas C. Hilde, eds. The Agrarian Roots of Pragmatism (2000)
Primary sources
Sorokin, Pitirim A. et al., eds. A Systematic Source Book in Rural Sociology (3 vol. 1930) vol 1 pp. 1–146 covers many major thinkers down to 1800
Europe
Bell, John D. Peasants in Power: Alexander Stamboliski and the Bulgarian Agrarian National Union, 1899–1923 (1977)
Donnelly, James S. Captain Rock: The Irish Agrarian Rebellion of 1821–1824 (2009)
Donnelly, James S. Irish Agrarian Rebellion, 1760–1800 (2006)
Gross, Feliks, ed. European Ideologies: A Survey of 20th Century Political Ideas (1948) pp. 391–481, online edition, on Russia and Bulgaria
Kubricht, Andrew Paul. "The Czech Agrarian Party, 1899–1914: a study of national and economic agitation in the Habsburg monarchy" (PhD thesis, Ohio State University, 1974)
Narkiewicz, Olga A. The Green Flag: Polish Populist Politics, 1867–1970 (1976).
Oren, Nissan. Revolution Administered: Agrarianism and Communism in Bulgaria (1973), focus is post 1945
Stefanov, Kristian. "Between Ideological Loyalty and Political Adaptation: 'The Agrarian Question' in the Development of Bulgarian Social Democracy, 1891–1912," East European Politics, Societies and Cultures, no. 4, 2023.
Paine, Thomas. Agrarian Justice (1794)
Roberts, Henry L. Rumania: Political Problems of an Agrarian State (1951).
North America
Goodwyn, Lawrence. The Populist Moment: A Short History of the Agrarian Revolt in America (1978), 1880s and 1890s in U.S.
Lipset, Seymour Martin. Agrarian socialism: the Coöperative Commonwealth Federation in Saskatchewan (1950), 1930s-1940s
McConnell, Grant. The Decline of Agrarian Democracy (1953), 20th century U.S.
Mark, Irving. Agrarian conflicts in colonial New York, 1711–1775 (1940)
Ochiai, Akiko. Harvesting Freedom: African American Agrarianism in Civil War Era South Carolina (2007)
Robison, Dan Merritt. Bob Taylor and the agrarian revolt in Tennessee (1935)
Stine, Harold E. The agrarian revolt in South Carolina;: Ben Tillman and the Farmers' Alliance (1974)
Summerhill, Thomas. Harvest of Dissent: Agrarianism in Nineteenth-Century New York (2005)
Szatmary, David P. Shays' Rebellion: The Making of an Agrarian Insurrection (1984), 1787 in Massachusetts
Woodward, C. Vann. Tom Watson: Agrarian Rebel (1938) online edition
Global South
Brass, Tom (ed.). New Farmers' Movements in India (1995) 304 pages.
Handy, Jim. Revolution in the Countryside: Rural Conflict and Agrarian Reform in Guatemala, 1944–1954 (1994)
Paige, Jeffery M. Agrarian revolution: social movements and export agriculture in the underdeveloped world (1978) 435 pages excerpt and text search
Sanderson, Steven E. Agrarian populism and the Mexican state: the struggle for land in Sonora (1981)
Stokes, Eric. The Peasant and the Raj: Studies in Agrarian Society and Peasant Rebellion in Colonial India (1980)
Tannenbaum, Frank. The Mexican Agrarian Revolution (1930)
External links
Writings of a Deliberate Agrarian
The New Agrarian
Sustainable Development Goal 13

Sustainable Development Goal 13 (SDG 13 or Global Goal 13) aims to limit climate change and adapt to its impacts. It is one of 17 Sustainable Development Goals established by the United Nations General Assembly in 2015. The official mission statement of this goal is to "Take urgent action to combat climate change and its impacts". SDG 13 and SDG 7 on clean energy are closely related and complementary.
SDG 13 has five targets which are to be achieved by 2030. They cover a wide range of issues surrounding climate action. The first three targets are outcome targets. The first target is to strengthen resilience and adaptive capacity towards climate change-related disasters. The second target is to integrate climate change measures into policies and planning. The third target is to build knowledge and capacity. The remaining two targets are means of implementation targets. These include implementing the UN Framework Convention on Climate Change (UNFCCC), and to promote mechanisms to raise capacity for effective climate change-related planning and management. Along with each target, there are indicators that provide a method to review the overall progress of each target. The UNFCCC is the main intergovernmental forum for negotiating the global response to climate change.
Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well below 2 °C". However, even with the pledges made under the Agreement, global warming is projected to exceed that limit by the end of the century.
As of 2020, many countries are now implementing their national climate change adaptation plans.
Context
SDG 13 intends to take urgent action in order to combat climate change and its impacts. Many climate change impacts are already felt at the current level of warming. Additional warming will increase these impacts and can trigger tipping points, such as the melting of the Greenland ice sheet. Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well below 2 °C". However, even with the pledges made under the Agreement, global warming is projected to exceed that limit by the end of the century.
Reducing emissions requires generating electricity from low-carbon sources rather than burning fossil fuels. This change includes phasing out coal and natural gas fired power plants, vastly increasing use of wind, solar, and other types of renewable energy, and reducing energy use.
Targets, indicators and progress
SDG 13 has five targets: strengthening resilience and adaptive capacity to climate-related disasters (Target 13.1), integrating climate change measures into policies and planning (Target 13.2), building knowledge and capacity to meet climate change (Target 13.3), implementing the UN Framework Convention on Climate Change (Target 13.a), and promoting mechanisms to raise capacity for planning and management (Target 13.b).
Each target includes one or more indicators that help to measure and monitor progress, such as the number of deaths, missing people and directly affected people attributed to disasters per 100,000 population (13.1.1), or total greenhouse gas emissions generated per year (13.2.2).
Target 13.1: Strengthen resilience and adaptive capacity to climate-related disasters
The full text of Target 13.1 is: "Strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries".
This target has 3 indicators.
Indicator 13.1.1: "Number of deaths, missing people and directly affected people attributed to disasters per 100,000 population"
Indicator 13.1.2: "Number of countries that adopt and implement national disaster risk reduction strategies in line with the Sendai Framework for Disaster Risk Reduction 2015–2030"
Indicator 13.1.3: "Proportion of local governments that adopt and implement local disaster risk reduction strategies in line with national disaster risk reduction strategies"
Indicator 13.1.2 serves as a bridge between the Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction.
In April 2020, the number of countries and territories that had adopted national disaster risk reduction strategies reached 118, up from 48 in the first year of the Sendai Framework.
Target 13.2: Integrate climate change measures into policy and planning
The full text of Target 13.2 is: "Integrate climate change measures into national policies, strategies and planning".
This target has two indicators:
Indicator 13.2.1: "Number of countries with nationally determined contributions, long-term strategies, national adaptation plans, strategies as reported in adaptation communications and national communications".
Indicator 13.2.2: "Total greenhouse gas emissions per year"
In order to stay under 1.5 °C of global warming, carbon dioxide (CO₂) emissions from G20 countries need to decline by about 45% by 2030 and reach net zero by 2050. To meet the 1.5 °C target, or even the 2 °C maximum set by the Paris Agreement, greenhouse gas emissions must start falling by 7.6% per year from 2020. However, there is a large gap between these overall temperature targets and the nationally determined contributions set by individual countries. Between 2000 and 2018, greenhouse gas emissions of transition economies and developed countries declined by 6.5%. In contrast, emissions of developing countries rose by 43% between 2000 and 2013.
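Because the 7.6% figure is an annual rate, the implied reduction compounds over the decade rather than adding up linearly: ten successive cuts leave 0.924^10, roughly 45%, of the starting emissions, that is, a total cut of about 55% by 2030. The snippet below is a back-of-the-envelope sketch of that compounding; the starting index of 100 is an arbitrary illustration, not a reported emissions figure.

```python
# Compound effect of a fixed annual emissions cut (illustrative only).
ANNUAL_CUT = 0.076   # the 7.6% yearly reduction cited above
YEARS = 10           # 2020 to 2030

emissions = 100.0    # arbitrary index for the 2020 emissions level
for _ in range(YEARS):
    emissions *= 1.0 - ANNUAL_CUT  # each year keeps 92.4% of the previous year

print(f"Remaining after {YEARS} years: {emissions:.1f}% of the 2020 level")
print(f"Total reduction: {100.0 - emissions:.1f}%")
# Prints roughly 45.4% remaining, i.e. a ~54.6% total cut.
```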
As of 2015, 170 countries were party to at least one multilateral environmental agreement, with the number of countries signing on to environmental agreements increasing each year.
Target 13.3: Build knowledge and capacity to meet climate change
The full text of Target 13.3 is: "Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning".
This target has two indicators:
Indicator 13.3.1: "The extent to which (i) global citizenship education and (ii) education for sustainable development are mainstreamed in (a) national education policies; (b) curricula; (c) teacher education; and (d) student assessment"
Indicator 13.3.2: "Number of countries that have communicated the strengthening of institutional, systemic and individual capacity-building to implement adaptation, mitigation and technology transfer, and development actions"
The indicator 13.3.1 measures the extent to which countries mainstream Global Citizenship Education (GCED) and Education for Sustainable Development (ESD) in their education systems and educational policies.
The indicator 13.3.2 identifies countries who have and have not adopted and implemented disaster risk management strategies in line with the Sendai Framework for Disaster Risk Reduction. The goal by 2030 is to strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries.
Education for Sustainable Development and Global Citizenship seeks to equip learners with knowledge of how their choices affect other people and their immediate environment.
As of September 2020, no data were available for this indicator.
Target 13.a: Implement the UN Framework Convention on Climate Change
The full text of Target 13.a is: "Implement the commitment undertaken by developed-country parties to the United Nations Framework Convention on Climate Change to a goal of mobilizing jointly $100 billion annually by 2020 from all sources to address the needs of developing countries in the context of meaningful mitigation actions and transparency on implementation and fully operationalize the Green Climate Fund through its capitalization as soon as possible."
This target has only one indicator: Indicator 13.a.1 is the "Amounts provided and mobilized in United States dollars per year in relation to the continued existing collective mobilization goal of the $100 billion commitment through to 2025".
Previously, the indicator was worded as "Mobilized amount of United States dollars per year between 2020 and 2025 accountable towards the $100 billion commitment".
This indicator measures the current pledged commitments from countries to the Green Climate Fund (GCF), the amounts provided and mobilized in United States dollars (USD) per year in relation to the continued existing collective mobilization goal of the US$100 billion commitment to 2025.
A report by the UN stated in 2020 that the financial flows for global climate finance as well as for renewable energy are "relatively small in relation to the scale of annual investment needed for a low-carbon, climate-resilient transition".
Target 13.b: Promote mechanisms to raise capacity for planning and management
The full text of Target 13.b is: "Promote mechanisms for raising capacity for effective climate change-related planning and management in least developed countries and small island developing States, including focusing on women, youth and local and marginalized communities acknowledging that the United Nations Framework Convention on Climate Change is the primary international, intergovernmental forum for negotiating the global response to climate change."
This target has one indicator: Indicator 13.b.1 is the "Number of least developed countries and small island developing states with nationally determined contributions, long-term strategies, national adaptation plans, strategies as reported in adaptation communications and national communications".
A previous version of this indicator was: "Indicator 13.b.1: Number of least developed countries and small island developing states that are receiving specialized support, and amount of support, including finance, technology and capacity building, for mechanisms for raising capacities for effective climate change-related planning and management, including focusing on women, youth and local and marginalized communities." This indicator's previous focus on women, youth and local and marginalized communities is not included anymore in the latest version of the indicator.
Annual UN reports are monitoring how many countries are implementing national adaptation plans.
Custodian agencies
Custodian agencies are in charge of reporting on the following indicators:
Indicators 13.1.1, 13.1.2 and 13.1.3: UN International Strategy for Disaster Reduction (UNISDR).
Indicator 13.2.1: United Nations Framework Convention on Climate Change (UNFCCC), UN Educational, Scientific, and Cultural Organization-Institute for Statistics (UNESCO-UIS).
Indicators 13.3.1, 13.a.1 and 13.b.1: United Nations Framework Convention on Climate Change (UNFCCC) and Organization for Economic Cooperation and Development (OECD).
Monitoring
High-level progress reports for all the SDGs are published in the form of reports by the United Nations Secretary General. Updates and progress can also be found on the SDG website that is managed by the United Nations and at Our World in Data.
Challenges
Impacts of the COVID-19 pandemic
During the COVID-19 pandemic, there was a reduction in economic activity. This resulted in a 6% drop in greenhouse gas emissions from what had initially been projected for 2020; however, these improvements were only temporary. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions, and the direct impact of pandemic policies had a negligible long-term effect on climate change. Transport pollution in particular rebounded once government lockdown restrictions were lifted; transport accounts for roughly 21% of global carbon emissions, as it remains about 95% dependent on oil.
After the pandemic, governments worldwide rushed to stimulate local economies, in part by putting money towards fossil fuel production. Funding for such economic policies is likely to divert the emergency funds usually afforded to climate finance, such as the Green Climate Fund, and to other sustainable policies, unless an emphasis is put on green deals in the redirection of monetary funds.
Russian invasion of Ukraine
The Russian invasion of Ukraine and the resulting trade sanctions had a further adverse effect on SDG 13, as some countries responded to the crisis by increasing domestic oil production.
Links with other SDGs
Sustainable Development Goal 13 connects with the other 16 SDGs. For example, increasing access to sustainable energy (SDG 7) will reduce greenhouse gas emissions, and combating climate change can improve agricultural yields, which contributes to zero hunger (SDG 2).
Organizations
United Nations organizations
United Nations Framework Convention on Climate Change (UNFCCC)
Intergovernmental Panel on Climate Change (IPCC)
Conferences of the Parties (COP)
World Meteorological Organization (WMO)
UN-Habitat
United Nations Environment Program (UNEP)
Green Climate Fund (GCF)
United Nations Children's Fund (UNICEF)
United Nations Educational, Scientific, and Cultural Organization (UNESCO)
References
Sources
External links
UN Sustainable Development Knowledge Platform – SDG 13
"Global Goals" Campaign - SDG 13
SDG-Track.org - SDG 13
UN SDG 13 in the US
Sustainable Development Goals
2015 establishments in New York City
Projects established in 2015
Climate change mitigation
Climate change adaptation
Climate change policy
Fecundity

Fecundity is defined in two ways: in human demography, it is the potential for reproduction of a recorded population, as opposed to a single organism, while in population biology, it is considered similar to fertility, the natural capability to produce offspring, measured by the number of gametes (eggs), seed set, or asexual propagules.
Human demography
Human demography considers only human fecundity, at its culturally differing rates, while population biology studies all organisms. The term fecundity in population biology is often used to describe the rate of offspring production after one time step (often annual). In this sense, fecundity may include both birth rates and survival of young to that time step. While levels of fecundity vary geographically, it is generally a consistent feature of each culture. Fecundation is another term for fertilization.
In obstetrics and gynecology, fecundability is the probability of becoming pregnant in a single menstrual cycle, and fecundity is the probability of achieving a live birth within a single cycle.
Population ecology
In ecology, fecundity is a measure of the reproductive capacity of an individual or population, typically restricted to the reproductive individuals. It can be applied equally to sexual and asexual reproduction, as the purpose of fecundity is to measure how many new individuals are being added to a population. Fecundity may be defined differently in different ecological studies to reflect the specific data a study examines. For example, some studies use apparent fecundity to indicate that their data capture a particular moment in time rather than the species' entire life span. In other studies, these definitions are adjusted to better quantify fecundity for the organism in question. This is particularly true for modular organisms, whose modular organization differs from that of the more typical unitary organism, in which fecundity is best defined through a count of offspring.
Life history patterns (parity)
Parity is the organization of fecundity into two distinct types: semelparity and iteroparity.
Semelparity occurs when an organism reproduces only once in its lifetime, with death being a part of its reproductive strategy. These species produce many offspring during their one reproductive event, giving them a potential advantage when it comes to fecundity, as they are producing more offspring.
Iteroparity is when a species reproduces multiple times over its lifetime. These species' strategy hedges against the unpredictable survival of their offspring: if the first litter of offspring dies, they can reproduce again and replace it. Iteroparity also allows the organism to care for its offspring, as the parents are alive during their development.
Factors affecting fecundity
A multitude of factors potentially affect rates of fecundity, for example ontogeny, population density, and latitude.
Ontogeny
Fecundity in iteroparous organisms often increases with age but can decline at older ages. Several hypotheses have been proposed to explain this relationship. For species with declining growth rates after maturity, the suggestion is that as the organism's growth rate decreases, more resources can be allocated to reproduction. Other possible explanations exist for this pattern for organisms that do not grow after maturity. These explanations include: increased competence of older individuals; less fit individuals have already died off; or since life expectancy decreases with age, older individuals may allocate more resources to reproduction at the expense of survival. In semelparous species, age is frequently a poor predictor of fecundity. In these cases, size is likely a better predictor.
Population density
Population density is often observed to negatively affect fecundity, making fecundity density-dependent. The reasoning behind this observation is that once an area is overcrowded, fewer resources are available for each individual, so there may be insufficient energy to reproduce in high numbers when offspring survival is low. Occasionally, high density can instead stimulate the production of offspring, particularly in plant species: where there are more plants, there is more food to lure pollinators, which then spread the plants' pollen and allow for more reproduction.
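A common way to formalize density dependence in population models is to let per-capita fecundity decline linearly with population size, as in the logistic family of models. The sketch below is a minimal illustration of that idea, not a model drawn from this article; the baseline fecundity f0 and the density ceiling k are hypothetical parameters chosen for the example.

```python
def density_dependent_fecundity(n: float, f0: float = 4.0, k: float = 1000.0) -> float:
    """Per-capita fecundity that declines linearly with population size n.

    f0 is the fecundity at very low density; k is the density at which
    reproduction ceases. Both values here are hypothetical illustrations.
    """
    return max(0.0, f0 * (1.0 - n / k))

# At low density each individual produces close to f0 offspring;
# near the ceiling k, per-capita fecundity approaches zero.
for n in (10, 500, 950, 1200):
    print(f"N = {n:>4}: fecundity per individual = {density_dependent_fecundity(n):.2f}")
```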
Latitude
There are many different hypotheses to explain the relationship between latitude and fecundity. One, proposed by Reginald Moreau, claims that fecundity increases predictably with increasing latitude, the explanation being that mortality is higher in seasonal environments.
A different hypothesis, by David Lack, attributed the positive relationship to the change in daylight hours with latitude. Differing daylight hours change the time in which a parent can collect food. Lack also accounted for a drop in fecundity at the poles, where extreme day lengths can exhaust the parent.
Fecundity intensity due to seasonality is a hypothesis proposed by Philip Ashmole. He suggests that latitude affects fecundity because seasonality increases with increasing latitude. This theory relies on the mortality concept proposed by Moreau but focuses on how seasonality affects mortality and, in turn, population densities: in places with higher mortality, more food is available per survivor, leading to higher fecundity. Another hypothesis claims that seasonality affects fecundity through varying lengths of breeding seasons: shorter breeding seasons select for larger clutch sizes to compensate for the reduced frequency of reproduction, thus increasing those species' fecundity.
Fecundity and fitness
Fecundity is a significant component of fitness. Fecundity selection builds on this idea: the selection of heritable traits that increase an organism's fecundity is, in turn, advantageous to the organism's fitness.
Fecundity schedule
Fecundity schedules are data tables that display the patterns of births among individuals of different ages in a population. They are typically found in life tables under the columns Fx and mx.
Fx lists the total number of young produced by each age class, and mx is the mean number of young produced, found by dividing the number of young by the number of surviving individuals. For example, if an age class contains 12 individuals and they produce 16 surviving young, Fx is 16 and mx is 16/12, approximately 1.33.
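As a minimal sketch of how these two columns relate, the snippet below computes Fx and mx from per-age-class counts of survivors and total offspring. The numbers are hypothetical, with the last age class reproducing the 16/12 example above.

```python
# Hypothetical life-table fragment: survivors and total offspring per age class.
age_classes = [
    {"age": 0, "survivors": 50, "total_young": 0},   # pre-reproductive class
    {"age": 1, "survivors": 30, "total_young": 45},
    {"age": 2, "survivors": 12, "total_young": 16},  # the 16/12 example above
]

for row in age_classes:
    fx = row["total_young"]            # Fx: total young produced by the class
    mx = fx / row["survivors"]         # mx: mean young per surviving individual
    print(f"age {row['age']}: Fx = {fx}, mx = {mx:.2f}")
# The last line prints: age 2: Fx = 16, mx = 1.33
```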
Infecundity
Infecundity is a term meaning "inability to conceive after several years of exposure to the risk of pregnancy." This usage is prevalent in medicine, especially reproductive medicine, and in demographics. Infecundity would be synonymous with infertility, but in demographic and medical use fertility (and thus its antonym infertility) may refer to quantity and rates of offspring produced, rather than any physiological or other limitations on reproduction.
Additional information
Additionally, social trends and societal norms may influence fecundity, though this influence tends to be temporary. Indeed, it is considered impossible to cease reproduction based on social factors, and fecundity tends to rise after a brief decline.
Fecundity has also been shown to increase in ungulates with relation to warmer weather.
In sexual evolutionary biology, especially in sexual selection, fecundity is contrasted to reproductivity.
See also
Biological life cycle
Birth rate
Fecundity selection
Natalism
Population ecology
References
Fertility
Population
Philosophy of science
Human reproduction
Demographics
Infertility
Tinbergen's four questions

Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. They are also commonly referred to as levels of analysis. The schema suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, namely:
behavioural adaptive functions
phylogenetic history
and proximate explanations, namely:
underlying physiological mechanisms
ontogenetic/developmental history
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause.
Second question: Phylogeny (evolution)
Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve.
Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot.
It corresponds to Aristotle's formal cause.
Proximate explanations
Third question: Mechanism (causation)
Some prominent classes of proximate causal mechanisms include:
The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability.
Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species.
Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates.
In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect.
However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity."
It corresponds to Aristotle's efficient cause.
Fourth question: Ontogeny (development)
Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form.
In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture).
An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism).
Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact.
A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87).
See developmental biology and developmental psychology.
It corresponds to Aristotle's material cause.
Causal relationships
The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels.
Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour.
Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment (or, more technically, the environment of evolutionary adaptedness, EEA) may result in evolution, as measured by a change in its genes.
In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods.
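Nesse (2013, listed in the sources below) organizes the four questions as a 2×2 table: proximate versus evolutionary explanations on one axis, and explanations of current form versus historical sequence on the other. The snippet below is a minimal sketch of that organization as a data structure, filled in with the vision example from the next section; the axis labels and entry wording are paraphrases for illustration, not quotations.

```python
# Tinbergen's four questions on two axes (after Nesse 2013):
# perspective (proximate vs. evolutionary) x object (current form vs. historical sequence).
four_questions = {
    ("proximate", "current form"): "mechanism (causation)",
    ("proximate", "historical sequence"): "ontogeny (development)",
    ("evolutionary", "current form"): "function (adaptation)",
    ("evolutionary", "historical sequence"): "phylogeny (evolution)",
}

# The vision example from the next section, keyed by the same four categories.
vision = {
    "mechanism (causation)": "the lens of the eye focuses light on the retina",
    "ontogeny (development)": "light stimulation wires the eye to the brain",
    "function (adaptation)": "sight helps to find food and avoid danger",
    "phylogeny (evolution)": "the vertebrate eye kept its blind spot for lack of adaptive intermediates",
}

for (perspective, obj), question in four_questions.items():
    print(f"{perspective:>12} / {obj:<19}: {question} -> {vision[question]}")
```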
Examples
Vision
Four ways of explaining visual perception:
Function: To find food and avoid danger.
Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot.
Mechanism: The lens of the eye focuses light on the retina.
Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99).
Westermarck effect
Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196):
Function: To discourage inbreeding, which decreases the number of viable offspring.
Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago.
Mechanism: Little is known about the neuromechanism.
Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzim.
Romantic love
Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021):
Function: Mate choice, courtship, sex, pair-bonding.
Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans.
Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love.
Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan.
Sleep
Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021):
Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger.
Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds.
Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm.
Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences.
Use of the four-question schema as "periodic table"
Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research and the levels of inquiry); the tabulation itself was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). One advantage of this organizational system, which might be called the "periodic table of the life sciences", is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry.
This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF).
References
Sources
Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, Sinauer, 7th edition.
Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html
Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, Pearson Education, 2nd edition.
Cartwright, John (2000) Evolution and Human Behaviour, MIT Press.
Krebs, John R., Davies N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing.
Lorenz, Konrad (1937) Biologische Fragestellungen in der Tierpsychologie (i.e., Biological Questions in Animal Psychology). Zeitschrift für Tierpsychologie, 1: 24–32.
Mayr, Ernst (2001) What Evolution Is, Basic Books.
Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB.
Nesse, Randolph M. (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681–682.
Moore, David S. (2001) The Dependent Gene: The Fallacy of "Nature vs. Nurture", Henry Holt.
Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial.
Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20: 410–433.
Wilson, Edward O. (1998) Consilience: The Unity of Knowledge, Vintage Books.
External links
Diagrams
The Four Areas of Biology pdf
The Four Areas and Levels of Inquiry pdf
Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt
Tinbergen's Four Questions, organized pdf
Derivative works
On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff.
Behavioral ecology
Ethology
Evolutionary psychology
Sociobiology
Biomedical sciences

Biomedical sciences are a set of sciences applying portions of natural science or formal science, or both, to develop knowledge, interventions, or technology that are of use in healthcare or public health. Such disciplines as medical microbiology, clinical virology, clinical epidemiology, genetic epidemiology, and biomedical engineering are medical sciences. In explaining physiological mechanisms operating in pathological processes, however, pathophysiology can be regarded as basic science.
Biomedical Sciences, as defined by the UK Quality Assurance Agency for Higher Education Benchmark Statement in 2015, includes those science disciplines whose primary focus is the biology of human health and disease and ranges from the generic study of biomedical sciences and human biology to more specialised subject areas such as pharmacology, human physiology and human nutrition. It is underpinned by relevant basic sciences including anatomy and physiology, cell biology, biochemistry, microbiology, genetics and molecular biology, pharmacology, immunology, mathematics and statistics, and bioinformatics. As such the biomedical sciences have a much wider range of academic and research activities and economic significance than that defined by hospital laboratory sciences. Biomedical Sciences are the major focus of bioscience research and funding in the 21st century.
Roles within biomedical science
A sub-set of biomedical sciences is the science of clinical laboratory diagnosis. This is commonly referred to in the UK as 'biomedical science' or 'healthcare science'. There are at least 45 different specialisms within healthcare science, which are traditionally grouped into three main divisions:
specialisms involving life sciences
specialisms involving physiological science
specialisms involving medical physics or bioengineering
Life sciences specialties
Molecular toxicology
Molecular pathology
Blood transfusion science
Cervical cytology
Clinical biochemistry
Clinical embryology
Clinical immunology
Clinical pharmacology and therapeutics
Electron microscopy
External quality assurance
Haematology
Haemostasis and thrombosis
Histocompatibility and immunogenetics
Histopathology and cytopathology
Molecular genetics and cytogenetics
Molecular biology and cell biology
Microbiology including mycology
Bacteriology
Tropical diseases
Phlebotomy
Tissue banking/transplant
Virology
Physiological science specialisms
Physics and bioengineering specialisms
Biomedical science in the United Kingdom
The healthcare science workforce is an important part of the UK's National Health Service. While people working in healthcare science are only 5% of the staff of the NHS, 80% of all diagnoses can be attributed to their work.
The volume of specialist healthcare science work is a significant part of the work of the NHS. Every year, NHS healthcare scientists carry out:
nearly 1 billion pathology laboratory tests
more than 12 million physiological tests
support for 1.5 million fractions of radiotherapy
The four governments of the UK have recognised the importance of healthcare science to the NHS, introducing the Modernising Scientific Careers initiative to make certain that the education and training for healthcare scientists ensures there is the flexibility to meet patient needs while keeping up to date with scientific developments.
Graduates of an accredited biomedical science degree programme can also apply for the NHS' Scientist training programme, which gives successful applicants an opportunity to work in a clinical setting whilst also studying towards an MSc or Doctoral qualification.
Biomedical science in the 20th century
At this point in history, medicine was the most prominent subfield of biomedical science, as several breakthroughs were made in treating diseases and supporting the immune system, and the first body augmentations appeared.
1910s
In 1912, the Institute of Biomedical Science (IBMS) was founded in the United Kingdom. The institute still exists today and regularly publishes work on major breakthroughs in disease treatment and other advances in the field. The IBMS today represents approximately 20,000 members, employed mainly in National Health Service and private laboratories.
1920s
In 1928, the British scientist Alexander Fleming discovered penicillin, the first antibiotic. This was a huge breakthrough in biomedical science because it allowed for the treatment of bacterial infections.
In 1926, the first artificial pacemaker was made by the Australian physician Dr. Mark C. Lidwill. This portable machine was plugged into a lighting point. One pole was applied to a skin pad soaked in strong salt solution, while the other consisted of a needle insulated except at its point, which was plunged into the appropriate cardiac chamber, and the machine was started. A switch was incorporated to change the polarity. The pacemaker rate ranged from about 80 to 120 pulses per minute, and the voltage was also variable, from 1.5 to 120 volts.
1930s
The 1930s were a huge era for biomedical research, as antibiotics became more widespread and vaccines started to be developed. In 1935, the idea of a polio vaccine was introduced by Dr. Maurice Brodie. Brodie prepared a killed-virus poliomyelitis vaccine, which he then tested on chimpanzees, on himself, and on several children. The trials went poorly: the poliovirus became active in many of the human test subjects, causing severe side effects including paralysis and death.
1940s
During and after World War II, the field of biomedical science saw a new age of technology and treatment methods. For instance, in 1941 the first hormonal treatment for prostate cancer was implemented by the urologist and cancer researcher Charles B. Huggins. Huggins discovered that removing the testicles of a man with prostate cancer deprived the tumour of the hormones it fed on, putting the subject into remission. This advancement led to the development of hormone-blocking drugs, which are less invasive and still used today. At the tail end of the decade, in 1949, the first bone marrow transplant was performed on a mouse by Dr. Leon O. Jacobson, who discovered that he could transplant bone marrow and spleen tissue into a mouse that had neither functioning bone marrow nor a spleen. The procedure is still used in modern medicine and is responsible for saving countless lives.
1950s
The 1950s saw innovation in technology across all fields, and most importantly many breakthroughs which led to modern medicine. On 26 March 1953, Dr. Jonas Salk announced the completion of the first successful killed-virus polio vaccine. The vaccine was tested on about 1.6 million Canadian, American, and Finnish children in 1954, and was announced as safe on 12 April 1955.
See also
Biomedical research institution Austral University Hospital
References
External links
Extraordinary You: Case studies of Healthcare scientists in the UK's National Health Service
National Institute of Environmental Health Sciences
The US National Library of Medicine
National Health Service
Health sciences
Health care occupations
Science occupations
Integrated farming

Integrated farming (IF), integrated production, or integrated farm management is a whole farm management system which aims to deliver more sustainable agriculture without compromising the quality or quantity of agricultural products. Integrated farming combines modern tools and technologies with traditional practices according to a given site and situation, often employing many different cultivation techniques in a small growing area.
Definition
The International Organisation for Biological Control (IOBC) describes integrated farming, according to the UNI 11233-2009 European standard, as a farming system where high-quality organic food, animal feed, fibre, and renewable energy are produced by using resources such as soil, water, air, and nature, as well as regulating factors, to farm sustainably and with as few polluting inputs as possible.
Particular emphasis is placed on an integrated organic approach which views the farm and its environmental surroundings as an intricately cross-linked whole, on the fundamental role and function of agro-ecosystems, on nutrient cycles, which are balanced and adapted to the demands of specific crops, and on the health and welfare of livestock residing on the farm. Preserving and enhancing soil fertility, maintaining and improving biodiversity, and adhering to ethical and social criteria are indispensable basic elements. Crop protection takes into account all biological, technical, and chemical methods, which then are balanced carefully with objectives to protect the environment, to maintain economic profitability, and to fulfill social or cultural requirements.
The European Initiative for Sustainable Development in Agriculture (EISA) has an Integrated Farming Framework, which provides additional explanations on key aspects of integrated farming. These include: Organization & Planning, Human & Social Capital, Energy Efficiency, Water Use & Protection, Climate Change & Air Quality, Soil Management, Crop Nutrition, Crop Health & Protection, Animal Husbandry, Health & Welfare, Landscape & Nature Conservation, and Waste Management & Pollution Control.
In the UK, LEAF (Linking Environment and Farming) promotes a comparable model and defines Integrated Farm Management (IFM) as a whole-farm business approach that delivers more sustainable farming. LEAF's Integrated Farm Management consists of nine interrelated sections: Organisation & Planning, Soil Management & Fertility, Crop Health & Protection, Pollution Control & By-Product Management, Animal Husbandry, Energy Efficiency, Water Management, Landscape & Nature Conservation, and Community Engagement.
Classification
The Food and Agriculture Organization of the United Nations (FAO) promotes Integrated Pest Management (IPM) as the preferred approach to crop protection and regards it as a pillar of both sustainable intensification of crop production and pesticide risk reduction. IPM, thus, is an indispensable element of Integrated Crop Management, which in turn is an essential part of the holistic integrated farming approach towards sustainable agriculture.
In France, the Forum des Agriculteurs Responsables Respectueux de l'Environnement (FARRE) defines a set of common principles and practices to help farmers achieve these goals. These principles include:
Producing sufficient high quality food, fibre, and industrial raw materials
Meeting the demands of society
Maintaining a viable farming business
Caring for the environment
Sustaining natural resources
The practices include:
Organization and management
Monitoring and auditing
Crop protection
Animal husbandry
Soil and water management
Crop nutrition
Energy management
Waste management and pollution prevention
Wildlife and landscape management
Crop rotation and variety choice
Keller (1986, quoted in Lütke Entrup et al., 1998) highlights that integrated crop management is not to be understood as a compromise between different agricultural production systems. Rather, it must be understood as a production system making targeted, dynamic, and continuous use and development of methods, based on knowledge obtained from experience in so-called conventional farming. In addition to findings from the natural sciences, impulses from organic farming are also taken up.
History
Integrated Pest Management can be seen as a starting point for a holistic approach to agricultural production. Following the excessive use of crop protection chemicals, the first steps in IPM were taken in fruit production at the end of the 1950s. The concept was then further developed globally in all major crops. On the basis of results of the system-oriented IPM approach, models for integrated crop management were developed. Initially, animal husbandry was not seen as part of such integrated approaches (Lütke Entrup et al., 1998).
In the years to follow, various national and regional initiatives and projects were formed. These include LEAF (Linking Environment And Farming) in the UK, FNL (Fördergemeinschaft Nachhaltige Landwirtschaft e.V.) in Germany, FARRE (Forum des Agriculteurs Responsables Respectueux de l'Environnement) in France, FILL (Fördergemeinschaft Integrierte Landbewirtschaftung Luxemburg) in Luxembourg, and OiB (Odling i Balans) in Sweden. However, there are few if any figures available on the uptake of integrated farming systems in the major crops throughout Europe, which led the European Economic and Social Committee to recommend in February 2014 that the EU carry out an in-depth analysis of integrated production in Europe in order to obtain insights into the current situation and potential developments. There is evidence, however, that between 60 and 80% of pome, stone, and soft fruits in Germany were grown, controlled, and marketed according to "Integrated Production Guidelines" in 1999.
LEAF is a sustainable farming organization established in the UK in 1991 which promotes the uptake and knowledge sharing of integrated farm management by the LEAF Network, a series of LEAF demonstration farms and innovation centres. The LEAF Marque System was established in 2003 and is an environmental assurance system recognising more sustainably farmed products. The principles of integrated farm management (IFM) underpin the requirements of LEAF Marque certification, as set out in the LEAF Marque Standard. LEAF Marque is a global system and adopts a whole farm approach, certifying the entire farm business and its products. In 2019, LEAF Marque businesses were in 29 countries, and 39% of UK fruit and vegetables were grown by LEAF Marque-certified businesses.
Animal husbandry and integrated crop management (ICM) are often just two branches of one agricultural enterprise. In modern agriculture, animal husbandry and crop production must be understood as interlinked sectors which cannot be looked at in isolation, as the context of agricultural systems leads to tight interdependencies. Uncoupling animal husbandry from arable production (for example, through excessively high stocking rates) is therefore not in accordance with the principles and objectives of integrated farming (Lütke Entrup et al., 1998). Accordingly, holistic concepts for integrated farming or integrated farm management, such as the EISA Integrated Farming Framework, and the concept of sustainable agriculture are increasingly developed, promoted, and implemented at the global level.
In connection with the 'sustainable intensification' of agriculture, an objective that is in part discussed controversially, the efficiency of resource use is becoming increasingly important. Environmental impacts of agricultural production depend on the efficiency achieved when using natural resources and all other means of production. The input per kg of output, the output per kg of input, and the output achieved per hectare of land – a limited resource in the light of world population growth – are decisive figures for evaluating the efficiency and the environmental impact of agricultural systems. Efficiency parameters therefore offer important evidence of how the efficiency and environmental impacts of agriculture can be judged and where improvements can or must be made.
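The three efficiency parameters named above are straightforward to compute. The following minimal Python sketch shows them for a single production cycle; the function name and the fertilizer and yield figures are illustrative assumptions, not data from the studies discussed in this article.

```python
# Illustrative efficiency parameters: input per kg of output, output per kg
# of input, and output per hectare of land.
def efficiency_parameters(input_kg: float, output_kg: float, area_ha: float) -> dict:
    """Return the three efficiency figures for one production cycle."""
    return {
        "input_per_kg_output": input_kg / output_kg,  # e.g. kg fertilizer per kg grain
        "output_per_kg_input": output_kg / input_kg,  # kg grain per kg fertilizer
        "output_per_hectare": output_kg / area_ha,    # yield on the limited land resource
    }

# Hypothetical example: 180 kg of nitrogen fertilizer on 1 ha yielding 8,000 kg of wheat.
print(efficiency_parameters(input_kg=180.0, output_kg=8000.0, area_ha=1.0))
```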
Against this background, documentation as well as certification schemes and farm audits, such as LEAF Marque in the UK and 33 other countries throughout the world, become more and more important tools to evaluate – and further improve – agricultural practices. Even though they are far more product- or sector-oriented, the SAI Platform principles and practices and GlobalGAP, for example, pursue similar approaches.
Objectives
Integrated farming is based on attention to detail, continuous improvement, and the management of all available resources.
Being bound to sustainable development, integrated farming thoroughly considers its three underlying dimensions – economic development, social development, and environmental protection – in practical implementation. However, the need for profitability is a decisive prerequisite: to be sustainable, the system must be profitable, as profits generate the possibility to support all activities outlined in the IF Framework.
As a management and planning strategy, integrated farming incorporates regular benchmarking of goals against results. The EISA Integrated Farming Framework places a strong emphasis on farmers' understanding of their own performance. Farmers become aware of accomplishments as well as inadequacies by evaluating their performance on a regular basis, and by paying attention to detail they may continuously improve both the entire farming operation and their economic performance: according to research in the United Kingdom, lowering fertilizer and chemical inputs to rates proportionate to crop demand allowed for cost reductions ranging from £2,500 to £10,000 per year and per farm.
Prevalence
Following first developments in the 1950s, various approaches to integrated pest management, integrated crop management, integrated production, and integrated farming were developed worldwide, including in Germany, Switzerland, the US, Australia, and India. As the implementation of integrated farming should be adapted to the given site and situation instead of following strict rules and recipes, the concept is applicable all over the world.
Criticism
Environmental organizations have criticized integrated farming. That is in part because there are European organic regulations, such as (EC) No 834/2007 or the new draft from 2014, but no comparable regulations for integrated farming. Whereas organic farming, in Germany for example, is legally protected, the EU Commission has not yet considered working on a comparable framework or blueprint for integrated farming. When products are marketed as controlled integrated produce, the corresponding control mechanisms and quality labels are not based on national or European directives but are established and handled by private organizations and quality schemes such as LEAF Marque.
References
Further reading
Lütke Entrup, N., Onnen, O., and Teichgräber, B., 1998: Zukunftsfähige Landwirtschaft – Integrierter Landbau in Deutschland und Europa – Studie zur Entwicklung und den Perspektiven. Heft 14/1998, Fördergemeinschaft Integrierter Pflanzenbau, Bonn. (Available in German only.)
Oerke, E.-C., Dehne, H.-W., Schönbeck, F., and Weber, A., 1994: Crop Production and Crop Protection – Estimated Losses in Major Food and Cash Crops. Elsevier, Amsterdam, Lausanne, New York, Oxford, Shannon, Tokyo.
Sustainable agriculture
Pasteur's quadrant | Pasteur's quadrant is a classification of scientific research projects that seek fundamental understanding of scientific problems, while also having immediate use for society. Louis Pasteur's research is thought to exemplify this type of method, which bridges the gap between "basic" and "applied" research. The term was introduced by Donald E. Stokes in his book, Pasteur's Quadrant.
Other quadrants
Scientific research can be classified by whether it advances human knowledge by seeking a fundamental understanding of nature, or whether it is primarily motivated by the need to solve immediate problems; the two considerations define the rows and columns of a two-by-two quadrant diagram.
The result is three distinct classes of research:
Pure basic research, exemplified by the work of Niels Bohr, early 20th century atomic physicist.
Pure applied research, exemplified by the work of Thomas Edison, inventor.
Use-inspired basic research, described here as "Pasteur's Quadrant".
Usage
Pasteur's quadrant is useful in distinguishing various perspectives within science, engineering, and technology. For example, Daniel A. Vallero and Trevor M. Letcher, in their book Unraveling Environmental Disasters, applied the framework to disaster preparedness and response. University science programs are concerned with knowledge-building, whereas engineering programs at the same university will apply existing and emerging knowledge to address specific technical problems. Governmental agencies employ the knowledge from both to solve societal problems. Thus, the U.S. Army Corps of Engineers expects its engineers to apply general scientific principles to design and upgrade flood control systems. This entails selecting the best levee designs for the hydrologic conditions. However, the engineer would also be interested in more basic science to enhance designs in terms of water retention and soil strength. The university scientist is much like Bohr, with the major motivation being new knowledge. The governmental engineer is behaving like Edison, with the greatest interest in utility, and considerably less interest in knowledge for knowledge's sake.
The university engineering researcher's interests, on the other hand, may fall between Bohr and Edison, looking to enhance both knowledge and utility. It is not likely that many single individuals fall within the Pasteur cell, since both basic and applied science are highly specialized. Thus, modern science and technology employ what might be considered a systems engineering approach, in which the Pasteur cell consists of numerous researchers, professionals, and practitioners who together optimize solutions. Modifications to the quadrant model continue to be suggested to reflect more precisely how research and development interact.
References
Scientific method
Louis Pasteur
Evolutionary anthropology | Evolutionary anthropology, the interdisciplinary study of the evolution of human physiology and human behaviour and of the relation between hominids and non-hominid primates, builds on natural science and on social science. Various fields and disciplines of evolutionary anthropology include:
human evolution and anthropogeny
paleoanthropology and paleontology of both human and non-human primates
primatology and primate ethology
the sociocultural evolution of human behavior, including phylogenetic approaches to historical linguistics
the cultural anthropology and sociology of humans
the archaeological study of human technology and of its changes over time and space
human evolutionary genetics and changes in the human genome over time
the neuroscience, endocrinology, and neuroanthropology of human and primate cognition, culture, actions and abilities
human behavioural ecology and the interaction between humans and the environment
studies of human anatomy, physiology, molecular biology, biochemistry, and differences and changes between species, variation between human groups, and relationships to cultural factors
Evolutionary anthropology studies both the biological and the cultural evolution of humans, past and present. Based on a scientific approach, it brings together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics. As a dynamic and interdisciplinary field, it draws on many lines of evidence to understand the human experience, past and present.
Studies of human biological evolution generally focus on the evolution of the human form. Cultural evolution involves the study of cultural change over time and space and frequently incorporates cultural-transmission models. Cultural evolution is not the same as biological evolution: human culture involves the transmission of cultural information (compare memetics), and such transmission can behave in ways quite distinct from human biology and genetics. The study of cultural change increasingly takes place through cladistics and genetic models.
See also
References
Anthropology
Ecological economics | Ecological economics, bioeconomics, ecolonomy, eco-economics, or ecol-econ is both a transdisciplinary and an interdisciplinary field of academic research addressing the interdependence and coevolution of human economies and natural ecosystems, both intertemporally and spatially. By treating the economy as a subsystem of Earth's larger ecosystem, and by emphasizing the preservation of natural capital, the field of ecological economics is differentiated from environmental economics, which is the mainstream economic analysis of the environment. One survey of German economists found that ecological and environmental economics are different schools of economic thought, with ecological economists emphasizing strong sustainability and rejecting the proposition that physical (human-made) capital can substitute for natural capital (see the section on weak versus strong sustainability below).
Ecological economics was founded in the 1980s as a modern discipline on the works of and interactions between various European and American academics (see the section on History and development below). The related field of green economics is in general a more politically applied form of the subject.
According to ecological economist Malte Faber, ecological economics is defined by its focus on nature, justice, and time. Issues of intergenerational equity, irreversibility of environmental change, uncertainty of long-term outcomes, and sustainable development guide ecological economic analysis and valuation. Ecological economists have questioned fundamental mainstream economic approaches such as cost-benefit analysis, and the separability of economic values from scientific research, contending that economics is unavoidably normative, i.e. prescriptive, rather than positive or descriptive. Positional analysis, which attempts to incorporate time and justice issues, is proposed as an alternative. Ecological economics shares several of its perspectives with feminist economics, including the focus on sustainability, nature, justice, and care values. Karl Marx also commented on the relationship between capital and ecology, a line of thought now developed as ecosocialism.
History and development
The antecedents of ecological economics can be traced back to the Romantics of the 19th century as well as some Enlightenment political economists of that era. Concerns over population were expressed by Thomas Malthus, while John Stuart Mill predicted the desirability of the stationary state of an economy. Mill thereby anticipated later insights of modern ecological economists, but without having had their experience of the social and ecological costs of the Post–World War II economic expansion. In 1880, Marxian economist Sergei Podolinsky attempted to theorize a labor theory of value based on embodied energy; his work was read and critiqued by Marx and Engels. Otto Neurath developed an ecological approach based on a natural economy whilst employed by the Bavarian Soviet Republic in 1919. He argued that a market system failed to take into account the needs of future generations, and that a socialist economy required calculation in kind, the tracking of all the different materials, rather than synthesising them into money as a general equivalent. In this he was criticised by neo-liberal economists such as Ludwig von Mises and Friedrich Hayek in what became known as the socialist calculation debate.
The debate on energy in economic systems can also be traced back to Nobel prize-winning radiochemist Frederick Soddy (1877–1956). In his book Wealth, Virtual Wealth and Debt (1926), Soddy criticized the prevailing belief of the economy as a perpetual motion machine, capable of generating infinite wealth—a criticism expanded upon by later ecological economists such as Nicholas Georgescu-Roegen and Herman Daly.
European predecessors of ecological economics include K. William Kapp (1950), Karl Polanyi (1944), and Romanian economist Nicholas Georgescu-Roegen (1971). Georgescu-Roegen, who would later mentor Herman Daly at Vanderbilt University, provided ecological economics with a modern conceptual framework based on the material and energy flows of economic production and consumption. His magnum opus, The Entropy Law and the Economic Process (1971), is credited by Daly as a fundamental text of the field, alongside Soddy's Wealth, Virtual Wealth and Debt. Some key concepts of what is now ecological economics are evident in the writings of Kenneth Boulding and E.F. Schumacher, whose book Small Is Beautiful – A Study of Economics as if People Mattered (1973) was published just a few years before the first edition of Herman Daly's comprehensive and persuasive Steady-State Economics (1977).
The first organized meetings of ecological economists occurred in the 1980s. These began in 1982, at the instigation of Lois Banner, with a meeting held in Sweden (including Robert Costanza, Herman Daly, Charles Hall, Bruce Hannon, H.T. Odum, and David Pimentel). Most were ecosystem ecologists or mainstream environmental economists, with the exception of Daly. In 1987, Daly and Costanza edited an issue of Ecological Modelling to test the waters. A book entitled Ecological Economics, by Joan Martinez Alier, was published later that year. Alier renewed interest in the approach developed by Otto Neurath during the interwar period. The year 1989 saw the foundation of the International Society for Ecological Economics and publication of its journal, Ecological Economics, by Elsevier. Robert Costanza was the first president of the society and first editor of the journal, which is currently edited by Richard Howarth. Other figures include ecologists C.S. Holling and H.T. Odum, biologist Gretchen Daily, and physicist Robert Ayres. In the Marxian tradition, sociologist John Bellamy Foster and CUNY geography professor David Harvey explicitly center ecological concerns in political economy.
Articles by Inge Ropke (2004, 2005) and Clive Spash (1999) cover the development and modern history of ecological economics and explain its differentiation from resource and environmental economics, as well as some of the controversy between American and European schools of thought. An article by Robert Costanza, David Stern, Lining He, and Chunbo Ma responded to a call by Mick Common to determine the foundational literature of ecological economics by using citation analysis to examine which books and articles have had the most influence on the development of the field. However, citation analysis has itself proven controversial, and similar work has been criticized by Clive Spash for attempting to pre-determine what is regarded as influential in ecological economics through study design and data manipulation. In addition, the journal Ecological Economics has itself been criticized for swamping the field with mainstream economics.
Schools of thought
Various competing schools of thought exist in the field. Some are close to resource and environmental economics while others are far more heterodox in outlook. An example of the latter is the European Society for Ecological Economics. An example of the former is the Swedish Beijer International Institute of Ecological Economics. Clive Spash has argued for the classification of the ecological economics movement, and more generally work by different economic schools on the environment, into three main categories: the mainstream new resource economists, the new environmental pragmatists, and the more radical social ecological economists. International survey work comparing the relevance of the categories for mainstream and heterodox economists shows some clear divisions between environmental and ecological economists. A growing field of radical social-ecological theory is degrowth economics. Degrowth addresses both biophysical limits and global inequality while rejecting neoliberal economics. It prioritizes grassroots initiatives pursuing progressive socio-ecological goals and adheres to ecological limits by shrinking the human ecological footprint (see the section on differences from mainstream economics below). It involves an equitable downscaling of both production and consumption of resources in order to stay within biophysical limits. Degrowth draws from Marxian economics, citing the growth of efficient systems as the alienation of nature and man. Economic movements like degrowth reject the idea of growth itself; some degrowth theorists call for an "exit of the economy". Critics of the degrowth movement include new resource economists, who point to the gaining momentum of sustainable development. These economists highlight the positive aspects of a green economy, including equitable access to renewable energy and a commitment to eradicating global inequality through sustainable development (see the section on green economics below). Examples of heterodox ecological economic experiments include the Catalan Integral Cooperative and the Solidarity Economy Networks in Italy. Both of these grassroots movements use communitarian-based economies and consciously reduce their ecological footprint by limiting material growth and adopting regenerative agriculture.
Non-traditional approaches to ecological economics
Cultural and heterodox applications of economic interaction around the world have begun to be included as ecological economic practices. E.F. Schumacher introduced examples of non-western economic ideas to mainstream thought in his book Small is Beautiful, where he addresses neoliberal economics through the lens of natural harmony in Buddhist economics. This emphasis on natural harmony is witnessed in diverse cultures across the globe. Buen Vivir is a traditional socio-economic movement in South America that rejects the western development model of economics. Meaning 'good living', Buen Vivir emphasizes harmony with nature, cultural plurality, coexistence, and the inseparability of nature and material life. Value is not attributed to material accumulation; Buen Vivir instead takes a more spiritual and communitarian approach to economic activity. Ecological Swaraj originated in India and is an evolving world view of human interactions within the ecosystem. This train of thought respects physical bio-limits and non-human species, pursuing equity and social justice through direct democracy and grassroots leadership. Social well-being is paired with spiritual, physical, and material well-being. These movements are unique to their regions, but their values can be seen across the globe in indigenous traditions, such as the Ubuntu philosophy in South Africa.
Differences from mainstream economics
Ecological economics differs from mainstream economics in that it heavily reflects on the ecological footprint of human interactions in the economy. This footprint is measured by the impact of human activities on natural resources and the waste generated in the process. Ecological economists aim to minimize the ecological footprint, taking into account the scarcity of global and regional resources and their accessibility to an economy. Some ecological economists prioritise adding natural capital to the typical capital asset analysis of land, labor, and financial capital. These ecological economists use tools from mathematical economics, as in mainstream economics, but may apply them more closely to the natural world. Whereas mainstream economists tend to be technological optimists, ecological economists are inclined to be technological sceptics. They reason that the natural world has a limited carrying capacity and that its resources may run out. Since destruction of important environmental resources could be practically irreversible and catastrophic, ecological economists are inclined to justify cautionary measures based on the precautionary principle. As ecological economists try to minimize these potential disasters, calculating the fallout of environmental destruction becomes a humanitarian issue as well. Already, the Global South has seen trends of mass migration due to environmental changes. Climate refugees from the Global South are adversely affected by changes in the environment, and some scholars point to global wealth inequality within the current neoliberal economic system as a source of this issue.
The most cogent example of how the different theories treat similar assets is tropical rainforest ecosystems, most obviously the Yasuni region of Ecuador. While this area has substantial deposits of bitumen it is also one of the most diverse ecosystems on Earth and some estimates establish it has over 200 undiscovered medical substances in its genomes – most of which would be destroyed by logging the forest or mining the bitumen. Effectively, the instructional capital of the genomes is undervalued by analyses that view the rainforest primarily as a source of wood, oil/tar and perhaps food. Increasingly the carbon credit for leaving the extremely carbon-intensive ("dirty") bitumen in the ground is also valued – the government of Ecuador set a price of US$350M for an oil lease with the intent of selling it to someone committed to never exercising it at all and instead preserving the rainforest.
While this natural capital and ecosystem services approach has proven popular amongst many, it has also been contested as failing to address the underlying problems with mainstream economics, growth, market capitalism, and monetary valuation of the environment. Critiques concern the need to create a more meaningful relationship with Nature and the non-human world than is evident in the instrumentalism of shallow ecology and the environmental economists' commodification of everything external to the market system.
Nature and ecology
A simple circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste. The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs. Some persistent pollutants, such as some organic pollutants and nuclear waste, are absorbed very slowly or not at all; ecological economists emphasize minimizing "cumulative pollutants". Pollutants affect human health and the health of the ecosystem.
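The sink function can be illustrated with a minimal stock-and-flow sketch: waste accumulates as a long-term stock whenever emissions exceed the environment's absorption capacity. All rates below are invented for illustration.

```python
# Minimal sink-function model: the environment absorbs up to `sink_capacity`
# units of pollution per period; any excess accumulates as a persistent stock.
def pollution_stock(emissions, sink_capacity=10.0, initial_stock=0.0):
    stock = initial_stock
    for e in emissions:
        absorbed = min(e, sink_capacity)  # the sink renders this share harmless
        stock += e - absorbed             # cumulative pollutants persist
    return stock

# Emissions within capacity leave no residue; emissions above it accumulate.
print(pollution_stock([8, 9, 10] * 5))    # 0.0 -> within the sink function
print(pollution_stock([14, 15, 16] * 5))  # 75.0 -> long-term damage builds up
```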
The economic value of natural capital and ecosystem services is accepted by mainstream environmental economics, but is emphasized as especially important in ecological economics. Ecological economists may begin by estimating how to maintain a stable environment before assessing the cost in dollar terms. Ecological economist Robert Costanza led an attempted valuation of the global ecosystem in 1997. Initially published in Nature, the article concluded with an estimate of $33 trillion, with a range from $16 trillion to $54 trillion (in 1997, total global GDP was $27 trillion). Half of the value went to nutrient cycling. The open oceans, continental shelves, and estuaries had the highest total value, and the highest per-hectare values went to estuaries, swamps/floodplains, and seagrass/algae beds. The work was criticized by articles in Ecological Economics Volume 25, Issue 1, but the critics acknowledged the positive potential for economic valuation of the global ecosystem.
The Earth's carrying capacity is a central issue in ecological economics. Early economists such as Thomas Malthus pointed out the finite carrying capacity of the earth, which was also central to the MIT study Limits to Growth. Diminishing returns suggest that productivity increases will slow if major technological progress is not made. Food production may become a problem, as erosion, an impending water crisis, and soil salinity (from irrigation) reduce the productivity of agriculture. Ecological economists argue that industrial agriculture, which exacerbates these problems, is not sustainable agriculture, and are generally favorably inclined toward organic farming, which also reduces the output of carbon.
Global wild fisheries are believed to have peaked and begun a decline, with valuable habitat such as estuaries in critical condition. The aquaculture or farming of piscivorous fish, like salmon, does not help solve the problem because they need to be fed products from other fish. Studies have shown that salmon farming has major negative impacts on wild salmon, as well as the forage fish that need to be caught to feed them.
Since animals are higher on the trophic level, they are less efficient sources of food energy. Reduced consumption of meat would reduce the demand for food, but as nations develop, they tend to adopt high-meat diets similar to that of the United States. Genetically modified food (GMF), a conventional solution to the problem, presents numerous problems – Bt corn produces its own Bacillus thuringiensis toxin/protein, but pest resistance is believed to be only a matter of time.
Global warming is now widely acknowledged as a major issue, with all national scientific academies expressing agreement on the importance of the issue. As population growth intensifies and energy demand increases, the world faces an energy crisis. Some economists and scientists forecast a global ecological crisis if energy use is not contained – the Stern report is an example. The disagreement has sparked a vigorous debate on the issue of discounting and intergenerational equity.
Ethics
Mainstream economics has attempted to become a value-free 'hard science', but ecological economists argue that value-free economics is generally not realistic. Ecological economics is more willing to entertain alternative conceptions of utility, efficiency, and cost-benefits such as positional analysis or multi-criteria analysis. Ecological economics is typically viewed as economics for sustainable development, and may have goals similar to green politics.
Green economics
In international, regional, and national policy circles, the concept of the green economy grew in popularity, at first as a response to the financial crisis and then as a vehicle for growth and development.
The United Nations Environment Programme (UNEP) defines a 'green economy' as one that focuses on the human aspects and natural influences and an economic order that can generate high-salary jobs. In 2011, the definition was developed further, with the word 'green' referring to an economy that is not only resourceful and well-organized but also impartial, guaranteeing an objective shift to an economy that is low-carbon, resource-efficient, and socially inclusive.
The ideas and studies regarding the green economy denote a fundamental shift for more effective, resourceful, environment-friendly and resource‐saving technologies that could lessen emissions and alleviate the adverse consequences of climate change, at the same time confront issues about resource exhaustion and grave environmental dilapidation.
As an indispensable requirement and vital precondition to realizing sustainable development, the Green Economy adherents robustly promote good governance. To boost local investments and foreign ventures, it is crucial to have a constant and foreseeable macroeconomic atmosphere. Likewise, such an environment will also need to be transparent and accountable. In the absence of a substantial and solid governance structure, the prospect of shifting towards a sustainable development route would be insignificant. In achieving a green economy, competent institutions and governance systems are vital in guaranteeing the efficient execution of strategies, guidelines, campaigns, and programmes.
Shifting to a Green Economy demands a fresh mindset and an innovative outlook on doing business. It likewise necessitates new capacities and skill sets among workers and professionals who can competently function across sectors and work as effective members of multi-disciplinary teams. To achieve this goal, vocational training packages must be developed with a focus on greening the sectors. Simultaneously, the educational system needs to be assessed as well, in order to incorporate the environmental and social considerations of various disciplines.
Topics
Among the topics addressed by ecological economics are methodology, allocation of resources, weak versus strong sustainability, energy economics, energy accounting and balance, environmental services, cost shifting, modeling, and monetary policy.
Methodology
A primary objective of ecological economics (EE) is to ground economic thinking and practice in physical reality, especially in the laws of physics (particularly the laws of thermodynamics) and in knowledge of biological systems. It accepts as a goal the improvement of human well-being through development, and seeks to ensure achievement of this through planning for the sustainable development of ecosystems and societies. The terms development and sustainable development are, of course, far from uncontroversial: Richard B. Norgaard argues in his book Development Betrayed that traditional economics has hijacked the development terminology.
Well-being in ecological economics is also differentiated from welfare as found in mainstream economics and the 'new welfare economics' of the 1930s which informs resource and environmental economics. This entails a limited, preference-utilitarian conception of value: nature is valuable to our economies because people will pay for its services such as clean air, clean water, and encounters with wilderness.
Ecological economics is distinguishable from neoclassical economics primarily by its assertion that the economy is embedded within an environmental system. Ecology deals with the energy and matter transactions of life and the Earth, and the human economy is by definition contained within this system. Ecological economists argue that neoclassical economics has ignored the environment, at best considering it to be a subset of the human economy.
The neoclassical view ignores much of what the natural sciences have taught us about the contributions of nature to the creation of wealth e.g., the planetary endowment of scarce matter and energy, along with the complex and biologically diverse ecosystems that provide goods and ecosystem services directly to human communities: micro- and macro-climate regulation, water recycling, water purification, storm water regulation, waste absorption, food and medicine production, pollination, protection from solar and cosmic radiation, the view of a starry night sky, etc.
There has then been a move to regard such things as natural capital and ecosystems functions as goods and services. However, this is far from uncontroversial within ecology or ecological economics due to the potential for narrowing down values to those found in mainstream economics and the danger of merely regarding Nature as a commodity. This has been referred to as ecologists 'selling out on Nature'. There is then a concern that ecological economics has failed to learn from the extensive literature in environmental ethics about how to structure a plural value system.
Allocation of resources
Resource and neoclassical economics focus primarily on the efficient allocation of resources and less on the two other problems of importance to ecological economics: distribution (equity) and the scale of the economy relative to the ecosystems upon which it relies. Ecological economics makes a clear distinction between growth (quantitative increase in economic output) and development (qualitative improvement of the quality of life), while arguing that neoclassical economics confuses the two. Ecological economists point out that beyond modest levels, increased per-capita consumption (the typical economic measure of "standard of living") may not always lead to improvement in human well-being, but may have harmful effects on the environment and broader societal well-being. This situation is sometimes referred to as uneconomic growth.
Weak versus strong sustainability
Ecological economics challenges the conventional approach towards natural resources, claiming that it undervalues natural capital by considering it as interchangeable with human-made capital—labor and technology.
The impending depletion of natural resources and the increase of climate-changing greenhouse gases should motivate us to examine how political, economic, and social policies can benefit from alternative energy. Shifting dependence away from fossil fuels with specific interest in just one of the above-mentioned factors readily benefits at least one other. For instance, photovoltaic (solar) panels have roughly 15% efficiency when absorbing the sun's energy, but demand for their construction has increased 120% within both commercial and residential properties. Additionally, this construction has led to a roughly 30% increase in labor demand (Chen).
The potential for the substitution of man-made capital for natural capital is an important debate in ecological economics and the economics of sustainability.
There is a continuum of views among economists between the strongly neoclassical positions of Robert Solow and Martin Weitzman, at one extreme, and the 'entropy pessimists', notably Nicholas Georgescu-Roegen and Herman Daly, at the other.
Neoclassical economists tend to maintain that man-made capital can, in principle, replace all types of natural capital. This is known as the weak sustainability view, essentially that every technology can be improved upon or replaced by innovation, and that there is a substitute for any and all scarce materials.
At the other extreme, the strong sustainability view argues that the stock of natural resources and ecological functions are irreplaceable. From the premises of strong sustainability, it follows that economic policy has a fiduciary responsibility to the greater ecological world, and that sustainable development must therefore take a different approach to valuing natural resources and ecological functions.
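One conventional way to formalize this contrast, offered here only as an illustrative sketch, is to compare two stylized production functions: a Cobb-Douglas form, in which man-made capital K can substitute for natural capital N (the weak view), and a Leontief form, in which output is capped by the scarcer input (the strong view). The functional forms and parameter values are assumptions for illustration, not a model from the literature cited in this article.

```python
# Weak sustainability: Cobb-Douglas output K**a * N**(1-a) lets more K offset less N.
# Strong sustainability: Leontief output min(K, N) is capped by natural capital.
def cobb_douglas(K: float, N: float, a: float = 0.7) -> float:
    return K**a * N**(1 - a)

def leontief(K: float, N: float) -> float:
    return min(K, N)

# Deplete natural capital from 100 to 10 while accumulating man-made capital.
for K, N in [(100, 100), (300, 50), (1000, 10)]:
    print(f"K={K:4d} N={N:3d}  weak={cobb_douglas(K, N):7.1f}  strong={leontief(K, N):5.1f}")
# Under the weak view measured output keeps rising; under the strong view it
# collapses to whatever natural capital remains.
```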
Recently, Stanislav Shmelev developed a new methodology for the assessment of progress at the macro scale based on multi-criteria methods. It allows consideration of different perspectives, including strong and weak sustainability or conservationists versus industrialists, and aims to search for a 'middle way' by providing a strong neo-Keynesian economic push without putting excessive pressure on natural resources, such as water, or producing emissions, whether directly or indirectly.
Energy economics
A key concept of energy economics is net energy gain, which recognizes that all energy sources require an initial energy investment in order to produce energy. To be useful, the energy return on energy invested (EROEI) has to be greater than one. The net energy gain from the production of coal, oil, and gas has declined over time as the easiest-to-produce sources have been most heavily depleted. In traditional energy economics, surplus energy is often seen as something to be capitalized on, either by storing it for future use or by converting it into economic growth.
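As a minimal sketch of the EROEI arithmetic just described (all figures invented for illustration):

```python
# Energy return on energy invested: a source only supplies net energy when
# EROEI = energy_out / energy_in exceeds one.
def eroei(energy_out: float, energy_in: float) -> float:
    return energy_out / energy_in

def net_energy_gain(energy_out: float, energy_in: float) -> float:
    return energy_out - energy_in

# Hypothetical sources, in arbitrary energy units.
for name, out_, in_ in [("early oil field", 100.0, 1.0),
                        ("depleted field", 100.0, 20.0),
                        ("marginal source", 100.0, 95.0)]:
    print(f"{name:15s} EROEI={eroei(out_, in_):6.2f} net gain={net_energy_gain(out_, in_):5.1f}")
```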
Ecological economics generally rejects the view of energy economics that growth in the energy supply is related directly to well-being, focusing instead on biodiversity and creativity – or natural capital and individual capital, in the terminology sometimes adopted to describe these economically. In practice, ecological economics focuses primarily on the key issues of uneconomic growth and quality of life. Ecological economists are inclined to acknowledge that much of what is important in human well-being is not analyzable from a strictly economic standpoint and suggests an interdisciplinary approach combining social and natural sciences as a means to address this. When considering surplus energy, ecological economists state this could be used for activities that do not directly contribute to economic productivity but instead enhance societal and environmental well-being. This concept of dépense, as developed by Georges Bataille, offers a novel perspective on the management of surplus energy within economies. This concept encourages a shift from growth-centric models to approaches that prioritise sustainable and meaningful expenditures of excess resources.
Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood through the second law of thermodynamics, but also in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. As a result, thermoeconomics is often discussed in the field of ecological economics, which itself is related to the fields of sustainability and sustainable development.
Exergy analysis is performed in the field of industrial ecology to use energy more efficiently. The term exergy was coined by Zoran Rant in 1956, but the concept was developed by J. Willard Gibbs. In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics.
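A concrete instance of the exergy concept is the standard thermodynamic result that heat Q available at temperature T_hot, in surroundings at temperature T0, can yield at most the Carnot work Q * (1 - T0 / T_hot). The sketch below evaluates this limit; the heat quantities and temperatures are illustrative assumptions.

```python
# Exergy of a heat flow: the maximum work obtainable from heat Q at T_hot in
# surroundings at T0 is the Carnot limit W_max = Q * (1 - T0 / T_hot).
def heat_exergy(Q: float, T_hot: float, T0: float = 298.15) -> float:
    return Q * (1.0 - T0 / T_hot)

# 1 MJ of waste heat at 400 K vs. 1200 K (illustrative values):
print(heat_exergy(1e6, 400.0))   # ~254,600 J of work potential
print(heat_exergy(1e6, 1200.0))  # ~751,500 J -- higher-grade heat carries more exergy
```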
Energy accounting and balance
An energy balance can be used to track energy through a system. It is a very useful tool for determining resource use and environmental impacts, using the first and second laws of thermodynamics to determine how much energy is needed at each point in a system, in what form that energy is supplied, and what environmental costs it carries. The energy accounting system keeps track of energy in, energy out, and non-useful energy versus work done, as well as transformations within the system.
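A minimal sketch of such an energy accounting system, tracking energy in, useful work out, and non-useful losses (all entries hypothetical):

```python
# First-law bookkeeping: every unit of energy entering a system must leave as
# useful output or as losses; a nonzero residual signals an accounting error.
ledger = {
    "energy_in":  {"fuel": 500.0, "electricity": 120.0},
    "useful_out": {"process heat": 310.0, "mechanical work": 150.0},
    "losses":     {"flue gas": 110.0, "radiation": 50.0},
}

total_in = sum(ledger["energy_in"].values())
total_out = sum(ledger["useful_out"].values()) + sum(ledger["losses"].values())
print(f"in={total_in}  out={total_out}  residual={total_in - total_out}")
print(f"first-law efficiency = {sum(ledger['useful_out'].values()) / total_in:.1%}")
```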
Scientists have written and speculated on different aspects of energy accounting (Stabile, Donald R., "Veblen and the Political Economy of the Engineer: the radical thinker and engineering leaders came to technocratic ideas at the same time," American Journal of Economics and Sociology (45:1) 1986, 43-44).
Ecosystem services and their valuation
Ecological economists agree that ecosystems produce enormous flows of goods and services to human beings, playing a key role in producing well-being. At the same time, there is intense debate about how and when to place values on these benefits.
A study was carried out by Costanza and colleagues to determine the 'value' of the services provided by the environment. This was determined by averaging values obtained from a range of studies conducted in very specific contexts and then transferring these without regard to that context. Dollar figures were averaged to a per-hectare number for different types of ecosystem, e.g. wetlands and oceans. A total was then produced which came out at 33 trillion US dollars (1997 values), more than twice the total GDP of the world at the time of the study. This study was criticized by pre-ecological and even some environmental economists – for being inconsistent with assumptions of financial capital valuation – and by ecological economists – for being inconsistent with an ecological economics focus on biological and physical indicators.
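The aggregation procedure described above, averaging per-hectare values by ecosystem type and scaling by global area, can be sketched as follows. The study values and areas below are placeholders, not the figures from the Costanza et al. study.

```python
# Benefit transfer as described in the text: average per-hectare values from
# individual studies, multiply by global extent, and sum across ecosystem types
# -- the step critics objected to, since values are moved out of the specific
# contexts in which they were estimated.
studies = {  # USD per hectare per year, hypothetical study estimates
    "estuaries": [21000.0, 24000.0],
    "wetlands": [14000.0, 16000.0, 15000.0],
    "open_ocean": [250.0, 350.0],
}
area_ha = {"estuaries": 180e6, "wetlands": 165e6, "open_ocean": 33e9}  # assumed extents

total = 0.0
for eco, values in studies.items():
    mean_value = sum(values) / len(values)
    total += mean_value * area_ha[eco]
print(f"global flow value ~ ${total / 1e12:.1f} trillion per year")
```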
The whole idea of treating ecosystems as goods and services to be valued in monetary terms remains controversial. A common objection is that life is precious or priceless, but this demonstrably degrades to it being worthless within cost-benefit analysis and other standard economic methods. Reducing human bodies to financial values is a necessary part of mainstream economics, and not always in the direct terms of insurance or wages. One example of this in practice is the value of a statistical life, a dollar value assigned to one life that is used to evaluate the costs of small changes in risk to life – such as exposure to one pollutant. Economics, in principle, assumes that conflict is reduced by agreeing on voluntary contractual relations and prices instead of simply fighting or coercing or tricking others into providing goods or services. In doing so, a provider agrees to surrender time and take bodily risks and other (reputation, financial) risks. Ecosystems are no different from other bodies economically except insofar as they are far less replaceable than typical labour or commodities.
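As a minimal sketch of how a value of a statistical life is derived from willingness to pay for small risk reductions (the numbers are purely illustrative):

```python
# Value of a statistical life: if each person will pay `wtp` dollars to reduce
# their annual mortality risk by `delta_risk`, the implied VSL is wtp / delta_risk.
def value_of_statistical_life(wtp: float, delta_risk: float) -> float:
    return wtp / delta_risk

# If 10,000 people each pay $900 to cut a 1-in-10,000 risk from one pollutant,
# one statistical life is saved in expectation, valued here at $9 million.
print(value_of_statistical_life(wtp=900.0, delta_risk=1e-4))  # 9000000.0
```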
Despite these issues, many ecologists and conservation biologists are pursuing ecosystem valuation. Biodiversity measures in particular appear to be the most promising way to reconcile financial and ecological values, and there are many active efforts in this regard. The growing field of biodiversity finance began to emerge in 2008 in response to many specific proposals, such as the Ecuadoran Yasuni proposal (Multinational Monitor, 9/2007; accessed December 23, 2012) or similar ones in the Congo. US news outlets treated the stories as a "threat" to "drill a park", reflecting a previously dominant view that NGOs and governments had the primary responsibility to protect ecosystems. However, Peter Barnes and other commentators have recently argued that a guardianship/trustee/commons model is far more effective and takes the decisions out of the political realm.
Commodification of other ecological relations, as in carbon credits and direct payments to farmers to preserve ecosystem services, likewise enables private parties to play more direct roles in protecting biodiversity, but is also controversial in ecological economics. The United Nations Food and Agriculture Organization achieved near-universal agreement in 2008 that such payments directly valuing ecosystem preservation and encouraging permaculture were the only practical way out of a food crisis. The holdouts were all English-speaking countries that export GMOs and promote "free trade" agreements that facilitate their own control of the world transport network: the US, UK, Canada, and Australia.
Not 'externalities', but cost shifting
Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually canceling "externalities" is not warranted. Joan Martinez Alier, for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet. The assumption behind future discounting, that future goods will be cheaper than present goods, has been criticized by David Pearce and by the recent Stern Report (although the Stern report itself does employ discounting and has been criticized for this and other reasons by ecological economists such as Clive Spash).
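The disputed role of the discount rate can be seen in a short sketch: the present value PV = FV / (1 + r)^t of an environmental loss a century away shrinks drastically as r rises. The rates and the damage figure below are illustrative and are not taken from the Stern report.

```python
# Present value of an environmental damage of $1 trillion occurring 100 years
# from now, under different discount rates -- the crux of the discounting debate.
def present_value(future_value: float, rate: float, years: int) -> float:
    return future_value / (1.0 + rate) ** years

damage, horizon = 1e12, 100
for r in (0.001, 0.014, 0.05):  # near-zero, low, and conventional market rates
    print(f"r={r:.3f}: PV = ${present_value(damage, r, horizon) / 1e9:,.1f} billion")
```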
Concerning these externalities, some like the eco-businessman Paul Hawken argue an orthodox economic line that the only reason why goods produced unsustainably are usually cheaper than goods produced sustainably is due to a hidden subsidy, paid by the non-monetized human environment, community or future generations. These arguments are developed further by Hawken, Amory and Hunter Lovins to promote their vision of an environmental capitalist utopia in Natural Capitalism: Creating the Next Industrial Revolution.
In contrast, ecological economists, like Joan Martinez-Alier, appeal to a different line of reasoning. Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work by Karl William Kapp explains why the concept of "externality" is a misnomer. In fact, the modern business enterprise operates on the basis of shifting costs onto others as a normal practice to make profits. Charles Eisenstein has argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment, or to future generations, is inherently destructive. As social ecological economist Clive Spash has noted, externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system. Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all-pervasive nature of these supposed 'externalities'.
Ecological-economic modeling
Mathematical modeling is a powerful tool that is used in ecological economic analysis. Various approaches and techniques include evolutionary, input-output, and neo-Austrian modeling, entropy and thermodynamic models, multi-criteria and agent-based modeling, the environmental Kuznets curve, and stock-flow consistent model frameworks (Faucheux, S., Pearce, D., and Proops, J. (eds.) (1995), Models of Sustainable Development, Edward Elgar). System dynamics and GIS are techniques applied, among others, to spatial dynamic landscape simulation modeling. The matrix accounting methods of Christian Felber provide a more sophisticated method for identifying "the common good".
Monetary theory and policy
Ecological economics draws upon its work on resource allocation and strong sustainability to address monetary policy. Drawing upon a transdisciplinary literature, ecological economics roots its policy work in monetary theory and its goals of sustainable scale, just distribution, and efficient allocation. Ecological economics' work on monetary theory and policy can be traced to Frederick Soddy's work on money. The field considers questions such as the growth imperative of interest-bearing debt, the nature of money, and alternative policy proposals such as alternative currencies and public banking.
Criticism
Assigning monetary value to natural resources such as biodiversity, and to the emergent ecosystem services, is often viewed as a key process in influencing economic practices, policy, and decision-making (Dasgupta, P., "Nature's role in sustaining economic development", Philosophical Transactions of the Royal Society B, 2010, 365(1537): 5–11). While this idea is becoming more and more accepted among ecologists and conservationists, some argue that it is inherently false.
McCauley argues that ecological economics and the resulting ecosystem service based conservation can be harmful. He describes four main problems with this approach:
Firstly, it seems to be assumed that all ecosystem services are financially beneficial. This is undermined by a basic characteristic of ecosystems: they do not act specifically in favour of any single species. While certain services might be very useful to us, such as coastal protection from hurricanes by mangroves, others might cause financial or personal harm, such as wolves hunting cattle. The complexity of ecosystems makes it challenging to weigh up the value of a given species. Wolves play a critical role in regulating prey populations; the absence of such an apex predator in the Scottish Highlands has caused the overpopulation of deer, preventing afforestation, which increases the risk of flooding and damage to property.
Secondly, allocating monetary value to nature would make its conservation reliant on markets that fluctuate. This can lead to devaluation of services that were previously considered financially beneficial. Such is the case of the bees in a forest near former coffee plantations in Finca Santa Fe, Costa Rica. The pollination services were valued at over US$60,000 a year, but soon after the study, coffee prices dropped and the fields were replanted with pineapple. Pineapple does not require bees to be pollinated, so the value of their service dropped to zero.
Thirdly, conservation programmes for the sake of financial benefit underestimate human ingenuity in inventing and replacing ecosystem services by artificial means. McCauley argues that such proposals are bound to have a short lifespan, as the history of technology is one of humanity developing artificial alternatives to nature's services, whose cost tends to decrease with time. This too would lead to the devaluation of ecosystem services.
Lastly, it should not be assumed that conserving ecosystems is always financially beneficial as opposed to alteration. In the case of the introduction of the Nile perch to Lake Victoria, the ecological consequence was decimation of native fauna. However, this same event is praised by the local communities as they gain significant financial benefits from trading the fish.
McCauley argues that, for these reasons, trying to convince decision-makers to conserve nature for monetary reasons is not the path to be followed, and instead appealing to morality is the ultimate way to campaign for the protection of nature.
See also
Agroecology
Circular economy
Critique of political economy
Deep ecology
Earth Economics (policy think tank)
Eco-economic decoupling
Eco-socialism
Ecofeminism
Ecological economists (category)
Ecological model of competition
Ecological values of mangrove
Energy quality
Harrington paradox
Green accounting
Gund Institute for Ecological Economics
Index of Sustainable Economic Welfare
International Society for Ecological Economics
Natural capital accounting
Natural resource economics
Outline of green politics
Social metabolism
Spaceship Earth
Steady-state economy
References
Further reading
Common, M. and Stagl, S. (2005). Ecological Economics: An Introduction. New York: Cambridge University Press.
Costanza, R., Cumberland, J. H., Daly, H., Goodland, R., Norgaard, R. B. (1997). An Introduction to Ecological Economics. St. Lucie Press and International Society for Ecological Economics, (e-book at the Encyclopedia of Earth)
Daly, H. (1980). Economics, Ecology, Ethics: Essays Toward a Steady-State Economy. W.H. Freeman and Company.
Daly, H. and Townsend, K. (eds.) 1993. Valuing The Earth: Economics, Ecology, Ethics. Cambridge, Mass.; London, England: MIT Press.
Daly, H. (1994). "Steady-state Economics". In: Ecology - Key Concepts in Critical Theory, edited by C. Merchant. Humanities Press.
Daly, H., and J. B. Cobb (1994). For the Common Good: Redirecting the Economy Toward Community, the Environment, and a Sustainable Future. Beacon Press.
Daly, H. (1997). Beyond Growth: The Economics of Sustainable Development. Beacon Press.
Daly, H. (2015). "Economics for a Full World." Great Transition Initiative, https://www.greattransition.org/publication/economics-for-a-full-world.
Daly, H., and J. Farley (2010). Ecological Economics: Principles and Applications. Island Press.
Fragio, A. (2022). Historical Epistemology of Ecological Economics. Springer.
Georgescu-Roegen, N. (1999). The Entropy Law and the Economic Process. iUniverse Press.
Greer, J. M. (2011). The Wealth of Nature: Economics as if Survival Mattered. New Society Publishers.
Hesmyr, Atle Kultorp (2020). Civilization: Its Economic Basis, Historical Lessons and Future Prospects. Nisus Publications.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment. New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp.
Jackson, Tim (2009). Prosperity without Growth - Economics for a finite Planet. London: Routledge/Earthscan.
Kevlar, M. (2014). Eco-Economics on the horizon, Economics and human nature from a behavioural perspective.
Krishnan R., Harris J. M., and N. R. Goodwin (1995). A Survey of Ecological Economics. Island Press. .
Martinez-Alier, J. (1990). Ecological Economics: Energy, Environment and Society. Oxford, England: Basil Blackwell.
Martinez-Alier, J., Ropke, I. eds. (2008). Recent Developments in Ecological Economics, 2 vols., E. Elgar, Cheltenham, UK.
Soddy, F. A. (1926). Wealth, Virtual Wealth and Debt. London, England: George Allen & Unwin.
Stern, D. I. (1997). "Limits to substitution and irreversibility in production and consumption: A neoclassical interpretation of ecological economics". Ecological Economics 21(3): 197–215.
Tacconi, L. (2000). Biodiversity and Ecological Economics: Participation, Values, and Resource Management. London, UK: Earthscan Publications.
Vatn, A. (2005). Institutions and the Environment. Cheltenham: Edward Elgar.
Vianna Franco, M. P., and A. Missemer (2022). A History of Ecological Economic Thought. London & New York: Routledge.
Vinje, Victor Condorcet (2015). Economics as if Soil & Health Matters. Nisus Publications.
Walker, J. (2020). More Heat than Life: The Tangled Roots of Ecology, Energy, and Economics. Springer.
Industrial ecology
Natural resources
Environmental social science
Environmental economics
Schools of economic thought
Political ecology
Terraforming | Terraforming or terraformation ("Earth-shaping") is the hypothetical process of deliberately modifying the atmosphere, temperature, surface topography or ecology of a planet, moon, or other body to be similar to the environment of Earth to make it habitable for humans to live on.
The concept of terraforming developed from both science fiction and actual science. Carl Sagan, an astronomer, proposed the planetary engineering of Venus in 1961, which is considered one of the first accounts of the concept. The term was coined by Jack Williamson in a science-fiction short story ("Collision Orbit") published in 1942 in Astounding Science Fiction.
Even if the environment of a planet could be altered deliberately, the feasibility of creating an unconstrained planetary environment that mimics Earth on another planet has yet to be verified. While Venus, Earth, Mars, and even the Moon have been studied in relation to the subject, Mars is usually considered to be the most likely candidate for terraforming. Much study has been done concerning the possibility of heating the planet and altering its atmosphere, and NASA has even hosted debates on the subject. Several potential methods for the terraforming of Mars may be within humanity's technological capabilities, but according to Martin Beech, the economic attitude of preferring short-term profits over long-term investments will not support a terraforming project.
The long timescales and practicality of terraforming are also the subject of debate. As the subject has gained traction, research has expanded to other possibilities including biological terraforming, para-terraforming, and modifying humans to better suit the environments of planets and moons. Despite this, questions still remain in areas relating to the ethics, logistics, economics, politics, and methodology of altering the environment of an extraterrestrial world, presenting issues to the implementation of the concept.
History of scholarly study
The astronomer Carl Sagan proposed the planetary engineering of Venus in an article published in the journal Science in 1961. Sagan imagined seeding the atmosphere of Venus with algae, which would convert water, nitrogen and carbon dioxide into organic compounds. As this process removed carbon dioxide from the atmosphere, the greenhouse effect would be reduced until surface temperatures dropped to "comfortable" levels. The resulting plant matter, Sagan proposed, would be pyrolyzed by the high surface temperatures of Venus, and thus be sequestered in the form of "graphite or some involatile form of carbon" on the planet's surface. However, later discoveries about the conditions on Venus made this particular approach impossible. One problem is that the clouds of Venus are composed of a highly concentrated sulfuric acid solution. Even if atmospheric algae could thrive in the hostile environment of Venus's upper atmosphere, an even more insurmountable problem is that its atmosphere is simply far too thick: converting all of that carbon dioxide would leave an "atmosphere of nearly pure molecular oxygen" at high pressure. This volatile combination could not be sustained through time: any carbon that had been reduced by photosynthesis would be quickly oxidized in this atmosphere through combustion, "short-circuiting" the terraforming process.
Sagan also visualized making Mars habitable for human life in an article published in the journal Icarus, "Planetary Engineering on Mars" (1973). Three years later, NASA addressed the issue of planetary engineering officially in a study, but used the term "planetary ecosynthesis" instead. The study concluded that it was possible for Mars to support life and be made into a habitable planet. The first conference session on terraforming, then referred to as "Planetary Modeling", was organized that same year.
In March 1979, NASA engineer and author James Oberg organized the First Terraforming Colloquium, a special session at the Lunar and Planetary Science Conference in Houston. Oberg popularized the terraforming concepts discussed at the colloquium to the general public in his book New Earths (1981). Not until 1982 was the word terraforming used in the title of a published journal article. Planetologist Christopher McKay wrote "Terraforming Mars", a paper for the Journal of the British Interplanetary Society. The paper discussed the prospects of a self-regulating Martian biosphere, and the word "terraforming" has since become the preferred term.
In 1984, James Lovelock and Michael Allaby published The Greening of Mars. Lovelock's book was one of the first to describe a novel method of warming Mars, where chlorofluorocarbons (CFCs) are added to the atmosphere to produce a strong greenhouse effect.
Motivated by Lovelock's book, biophysicist Robert Haynes worked behind the scenes to promote terraforming, and contributed the neologism ecopoiesis, forming the word from the Greek οἶκος (oikos, "house") and ποίησις (poiesis, "production"). Ecopoiesis refers to the origin of an ecosystem. In the context of space exploration, Haynes describes ecopoiesis as the "fabrication of a sustainable ecosystem on a currently lifeless, sterile planet". Fogg defines ecopoiesis as a type of planetary engineering that constitutes one of the first stages of terraformation. This primary stage of ecosystem creation is usually restricted to the initial seeding of microbial life. A 2019 opinion piece by Lopez, Peixoto and Rosado reintroduced microbiology as a necessary component of any possible colonization strategy, based on the principles of microbial symbiosis and their beneficial ecosystem services. As conditions approach those of Earth, plant life could be brought in, and this would accelerate the production of oxygen, theoretically making the planet eventually able to support animal life.
Aspects and definitions
In 1985, Martyn Fogg started publishing several articles on terraforming. He also served as editor for a full issue on terraforming for the Journal of the British Interplanetary Society in 1992. In his book Terraforming: Engineering Planetary Environments (1995), Fogg proposed the following definitions for different aspects related to terraforming:
Planetary engineering: the application of technology for the purpose of influencing the global properties of a planet.
Geoengineering: planetary engineering applied specifically to Earth. It includes only those macro engineering concepts that deal with the alteration of some global parameter, such as the greenhouse effect, atmospheric composition, insolation or impact flux.
Terraforming: a process of planetary engineering, specifically directed at enhancing the capacity of an extraterrestrial planetary environment to support life as we know it. The ultimate achievement in terraforming would be to create an open planetary ecosystem emulating all the functions of the biosphere of Earth, one that would be fully habitable for human beings.
Fogg also devised definitions for candidate planets of varying degrees of human compatibility:
Habitable Planet (HP): A world with an environment sufficiently similar to Earth's as to allow comfortable and free human habitation.
Biocompatible Planet (BP): A planet possessing the necessary physical parameters for life to flourish on its surface. If initially lifeless, then such a world could host a biosphere of considerable complexity without the need for terraforming.
Easily Terraformable Planet (ETP): A planet that might be rendered biocompatible, or possibly habitable, and maintained so by modest planetary engineering techniques and with the limited resources of a starship or robot precursor mission.
Fogg suggests that Mars was a biologically compatible planet in its youth, but is not now in any of these three categories, because it could only be terraformed with considerable difficulty.
Habitability requirements
Planetary habitability, broadly defined as the capacity of an astronomical body to sustain life, requires that various geophysical, geochemical, and astrophysical criteria be met before the surface of such a body is considered habitable. Modifying a planetary surface such that it is able to sustain life, particularly for humans, is generally the end goal of the hypothetical process of terraforming. Of particular interest in the context of terraforming is the set of factors that have sustained complex, multicellular animals, in addition to simpler organisms, on Earth. Research and theory in this regard is a component of planetary science and the emerging discipline of astrobiology.
Classifications of the criteria of habitability can vary, but it is generally agreed that the presence of water, non-extreme temperatures, and an energy source put broad constraints on habitability. Other requirements for habitability have been defined as the presence of raw materials, a solvent, and clement conditions, or elemental requirements (such as carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur) and reasonable physiochemical conditions. When applied to organisms present on Earth, including humans, these constraints narrow substantially.
In its astrobiology roadmap, NASA has defined the principal habitability criteria as "extended regions of liquid water, conditions favorable for the assembly of complex organic molecules, and energy sources to sustain metabolism."
Temperature
The general temperature range for all life on Earth is −20 °C to 122 °C, set primarily by the ability of water (possibly saline, or under high pressure at the ocean bottom) to be available in liquid form. This may constitute a bounding range for the development of life on other planets, in the context of terraforming. For Earth, the temperature is set by the equilibrium of incident solar radiation absorbed and outgoing infrared radiation, including the effect of greenhouse gases in modifying the planetary equilibrium temperature; terraforming concepts may therefore include modifying temperature by methods such as solar reflectors, which increase or decrease the amount of solar illumination.
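This radiative balance can be written explicitly. For a rapidly rotating body with Bond albedo A receiving a stellar flux S, equating absorbed and emitted power gives the zero-atmosphere equilibrium temperature (a standard approximation; greenhouse warming raises the actual surface temperature above this value):

\[
\frac{(1-A)\,S}{4} = \sigma T_{\mathrm{eq}}^{4}
\qquad\Longrightarrow\qquad
T_{\mathrm{eq}} = \left( \frac{(1-A)\,S}{4\sigma} \right)^{1/4},
\]

where \(\sigma\) is the Stefan–Boltzmann constant. The terraforming levers mentioned in this article act directly on these variables: mirrors and shades change S, while albedo modification changes A.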
Water
All known life requires water; thus the capacity of a planetary body to sustain water is a critical aspect of habitability. The "Habitable Zone" of a solar system is generally defined as the region in which stable surface liquid water may be present on a planetary body. The boundaries of the Habitable Zone were originally defined by water loss through photolysis and hydrogen escape, setting a limit on how close a planet may be to its host star, and by the prevalence of CO2 clouds that would increase albedo, setting an outer boundary on stable liquid water. These constraints apply in particular to Earth-like planets, and not as easily to moons like Europa and Enceladus with ice-covered oceans, where the energy source that keeps the water liquid is tidal heating rather than solar energy.
Energy
On the most fundamental level, the only absolute requirement of life may be thermodynamic disequilibrium, or the presence of Gibbs Free Energy. It has been argued that habitability can be conceived of as a balance between life's demand for energy and the capacity for the environment to provide such energy. For humans, energy comes in the form of sugars, fats, and proteins provided by consuming plants and animals, necessitating in turn that a habitable planet for humans can sustain such organisms.
Much of Earth's biomass (~60%) relies on photosynthesis for its energy source, while a further ~40% is chemotrophic. For the development of life on other planetary bodies, chemical energy may have been critical, while for sustaining life on another planetary body in our solar system, sufficiently high solar energy may also be necessary for phototrophic organisms.
Elements
On Earth, life generally requires six elements in high abundance: carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur. These elements are considered "essential" for all known life and plentiful within biological systems. Additional elements crucial to life include the cations Mg²⁺, Ca²⁺, K⁺ and Na⁺ and the anion Cl⁻. Many of these elements may undergo biologically facilitated oxidation or reduction to produce usable metabolic energy.
Preliminary stages
Terraforming a planet would involve making it fit the habitability requirements listed in the previous section. For example, a planet may be too cold for liquid water to exist on its surface. Its temperature could be raised by adding greenhouse gases to the atmosphere, using orbiting mirrors to reflect more sunlight onto the planet, or lowering the albedo of the planet. Conversely, a planet too hot for liquid water could be cooled down by removing greenhouse gases (if these are present), placing a sunshade in the L1 point to reduce sunlight reaching the planet, or increasing the albedo.
Atmospheric pressure is another issue: various celestial bodies including Mars, Mercury and most moons have lower pressure than Earth. At pressures below the triple point of water (611.7 Pa), water cannot be liquid at any temperature. Human survival requires a still-higher pressure of at least 6.3 kPa, the Armstrong limit; below this pressure, exposed body fluids boil at body temperature. Furthermore, a thick atmosphere protects the surface from cosmic rays. A thin atmosphere could be thickened using gases produced locally (e.g. the Moon could be given an atmosphere of oxygen by reducing lunar rock) or gases could be imported from elsewhere.
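The pressure thresholds in this paragraph can be lined up in a small sketch. The threshold values come from the text; the classification labels and example pressures are illustrative:

import sys

# Pressure thresholds relevant to terraforming, in pascals.
TRIPLE_POINT = 611.7   # below this, water cannot be liquid at any temperature
ARMSTRONG = 6300       # below this, exposed body fluids boil at body temperature

def pressure_regime(p_pa):
    if p_pa < TRIPLE_POINT:
        return "no stable liquid water possible"
    if p_pa < ARMSTRONG:
        return "liquid water possible, unprotected humans cannot survive"
    return "above the Armstrong limit"

for p in (600, 6_000, 101_325):   # e.g. roughly Mars-like, thin, Earth sea level
    print(p, "Pa:", pressure_regime(p), file=sys.stdout)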
Once conditions become more suitable for life of the introduced species, the importation of microbial life could begin. As conditions approach that of Earth, plant life could also be brought in. This would accelerate the production of oxygen, which theoretically would make the planet eventually able to support animal life.
Prospective targets
Mars
In many respects, Mars is the most Earth-like planet in the Solar System. It is thought that Mars once had a more Earth-like environment early in its history, with a thicker atmosphere and abundant water that was lost over the course of hundreds of millions of years.
The exact mechanism of this loss is still unclear, though three mechanisms, in particular, seem likely: First, whenever surface water is present, carbon dioxide reacts with rocks to form carbonates, thus drawing atmosphere off and binding it to the planetary surface. On Earth, this process is counteracted when plate tectonics works to cause volcanic eruptions that vent carbon dioxide back to the atmosphere. On Mars, the lack of such tectonic activity worked to prevent the recycling of gases locked up in sediments.
Second, the lack of a magnetosphere around Mars may have allowed the solar wind to gradually erode the atmosphere. Convection within the core of Mars, which is made mostly of iron, originally generated a magnetic field. However, the dynamo ceased to function long ago, and the magnetic field of Mars has largely disappeared, probably due to "loss of core heat, solidification of most of the core, and/or changes in the mantle convection regime." Results from the NASA MAVEN mission show that the atmosphere is removed primarily by coronal mass ejection events, in which outbursts of high-velocity protons from the Sun impact the atmosphere. Mars does still retain a limited magnetosphere that covers approximately 40% of its surface. Rather than uniformly covering and protecting the atmosphere from the solar wind, however, the magnetic field takes the form of a collection of smaller, umbrella-shaped fields, mainly clustered around the planet's southern hemisphere.
Finally, between approximately 4.1 and 3.8 billion years ago, asteroid impacts during the Late Heavy Bombardment caused significant changes to the surface environment of objects in the Solar System. The low gravity of Mars suggests that these impacts could have ejected much of the Martian atmosphere into deep space.
Terraforming Mars would entail two major interlaced changes: building the atmosphere and heating it. A thicker atmosphere of greenhouse gases such as carbon dioxide would trap incoming solar radiation. Because the raised temperature would add greenhouse gases to the atmosphere, the two processes would augment each other. Carbon dioxide alone would not suffice to sustain a temperature above the freezing point of water, so a mixture of specialized greenhouse molecules might be manufactured.
Venus
Terraforming Venus requires two major changes: removing most of the planet's dense carbon dioxide atmosphere, and reducing the planet's surface temperature. These goals are closely interrelated because Venus's extreme temperature may result from the greenhouse effect caused by its dense atmosphere.
Venus's atmosphere currently contains little oxygen, so an additional step would be to inject breathable O2 into the atmosphere. An early proposal for such a process comes from Carl Sagan, who suggested injecting floating, photosynthetic bacteria into the Venusian atmosphere to reduce CO2 to organic form and increase the concentration of O2 in the atmosphere. This concept, however, was based on a flawed 1960s understanding of Venus's atmosphere as having a much lower pressure; in reality, the Venusian atmospheric pressure (93 bar) is far higher than early estimates. Sagan's idea is therefore untenable, as he later conceded.
An additional step noted by Martin Beech is the injection of water and/or hydrogen into the planetary atmosphere; this step would follow the sequestration of CO2 and the reduction of the mass of the atmosphere. In order to combine hydrogen with O2 produced by other means, an estimated 4×10¹⁹ kg of hydrogen would be necessary; this might need to be mined from another source, such as Uranus or Neptune.
Moon
Although the gravity on Earth's Moon is too low to hold an atmosphere for geological spans of time, if given one, it would retain it for spans of time that are long compared to human lifespans. Landis and others have thus proposed that it could be feasible to terraform the Moon, although not all agree with that proposal. Landis estimates that a 1 psi atmosphere of pure oxygen on the Moon would require on the order of two hundred trillion tons of oxygen, and suggests it could be produced by reducing the oxygen from an amount of lunar rock equivalent to a cube about fifty kilometers on an edge. Alternatively, he suggests that the water content of "fifty to a hundred comets" the size of Halley's Comet would do the job, "assuming that the water doesn't splash away when the comets hit the moon." Likewise, Benford calculates that terraforming the Moon would require "about 100 comets the size of Halley's."
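Landis's figures can be roughly sanity-checked from first principles: the mass of an atmosphere producing surface pressure P on a body of surface area A and surface gravity g is M = PA/g. The Python sketch below does this check; the oxygen mass fraction of lunar rock and the rock density used here are illustrative round numbers, not values taken from Landis.

import math

P = 6894.76    # 1 psi in pascals
g = 1.62       # lunar surface gravity, m/s^2
R = 1.7374e6   # lunar radius, m

# Mass of gas needed for surface pressure P: the weight of the gas
# column over the whole surface, M = P * A / g.
area = 4 * math.pi * R**2
m_oxygen = P * area / g
print(f"O2 required: {m_oxygen:.2e} kg")   # ~1.6e17 kg, i.e. ~160 trillion tonnes

# Rock cube that could supply it, assuming lunar rock is ~45% oxygen
# by mass with a density of ~3000 kg/m^3 (both assumed values):
m_rock = m_oxygen / 0.45
edge_km = (m_rock / 3000) ** (1 / 3) / 1000
print(f"equivalent rock cube: ~{edge_km:.0f} km on an edge")   # ~49 km

Both results land close to the figures quoted in the text (on the order of two hundred trillion tons of oxygen, and a cube roughly fifty kilometers on an edge).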
Mercury
Mercury would be difficult to terraform. Beech states "There seems little prospect of terraforming Mercury such that any animals or plants might exist there," and suggests that its primary use in a terraforming project would be as a mining source for minerals. Nevertheless, terraforming has been considered. Mercury's magnetic field is only 1.1% as strong as Earth's, and, being closer to the Sun, any atmosphere would be stripped rapidly unless it could be protected from the solar wind. It is conjectured that Mercury's magnetic field should be much stronger, up to 30% of Earth's, if it were not being suppressed by certain solar wind feedback effects. If Mercury could be shielded from the solar wind, for example by placing an artificial magnetic shield at the Mercury–Sun L1 point (similar to the proposal for Mars), then its magnetic field could possibly grow in intensity to the point of becoming self-sustaining, provided the field was not made to "stall" by another solar event.
Despite being much smaller than Mars, Mercury has an escape velocity only slightly less than that of Mars due to its higher density and could, if a magnetosphere prevents atmospheric stripping, hold a nitrogen/oxygen atmosphere for millions of years.
To provide one atmosphere of pressure, roughly 1.1×10¹⁸ kilograms of gas would be required, or somewhat less if a lower pressure is acceptable. Water could be delivered from the outer Solar System. Once delivered, the water could be split into its constituent oxygen and hydrogen, possibly using a photocatalytic dust, with the hydrogen rapidly being lost to space. At an oxygen pressure of 0.2–0.3 bar the atmosphere would be breathable, and nitrogen could be added as required to allow for plant growth in the presence of nitrates.
Temperature management would be required, due to the equilibrium average temperature of ~159 °C. However, millions of square kilometers at the poles have an average temperature of 0–50 °C (32–122 °F), i.e., an area the size of Mexico at each pole with habitable temperatures. The habitable area could be made even larger by increasing the planetary albedo from 0.12 to ~0.6. Roy proposes that the temperature could be further managed by using solar sails to reflect sunlight and reduce the solar flux at Mercury to near the terrestrial value; he calculates that 16 to 17 million sails, each with an area of one square kilometer, would be needed.
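These figures are consistent with the equilibrium-temperature relation given earlier in the article. A minimal Python check, assuming a solar constant of 1361 W/m² scaled to Mercury's semi-major axis of 0.387 AU (orbital eccentricity is ignored here):

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361 / 0.387**2         # mean solar flux at Mercury, ~9100 W/m^2

def t_eq(albedo, flux=S):
    """Zero-atmosphere equilibrium temperature, in kelvin."""
    return ((1 - albedo) * flux / (4 * SIGMA)) ** 0.25

for A in (0.12, 0.6):
    print(f"albedo {A}: {t_eq(A) - 273.15:.0f} C")
# albedo 0.12 -> ~160 C, close to the ~159 C quoted above
# albedo 0.6  -> ~83 C, showing how raising the albedo cools the planet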
Earth
It has been recently proposed that due to the effects of climate change, an interventionist program might be designed to return Earth to pre-industrial climate parameters. In order to achieve this, multiple approaches have been proposed, such as the management of solar radiation, the sequestration of carbon dioxide, and the design and release of climate altering genetically engineered organisms. These are typically referred to as geoengineering or climate engineering, rather than terraforming.
Other bodies in the Solar System
Other possible candidates for terraforming (possibly only partial or paraterraforming) include large moons of Jupiter or Saturn (Europa, Ganymede, Callisto, Enceladus, Titan), and the dwarf planet Ceres.
The moons are covered in ice, so heating them would make some of this ice sublimate into an atmosphere of water vapour, ammonia and other gases. For Jupiter's moons, the intense radiation around Jupiter would cause radiolysis of water vapour, splitting it into hydrogen and oxygen. The former would be rapidly lost to space, leaving behind the oxygen (this already occurs on the moons to a minor extent, giving them thin atmospheres of oxygen). For Saturn's moons, the water vapour could be split by using orbital mirrors to focus sunlight, causing photolysis. The ammonia could be converted to nitrogen by introducing bacteria such as Nitrosomonas, Pseudomonas and Clostridium, resulting in an Earth-like nitrogen-oxygen atmosphere. This atmosphere would protect the surface from Jupiter's radiation, though it might also be possible to deplete that radiation using orbiting tethers or radio waves.
Challenges to terraforming the moons include their high amounts of ice and their low gravity. If all of the ice were fully melted, it would result in deep moon-spanning oceans, meaning any settlements would have to be floating (unless some of the ice was allowed to remain, to serve as land). Low gravity would cause atmospheric escape over time and may cause problems for human health. However, atmospheric escape would take place over spans of time that are long compared to human lifespans, as with the Moon.
One proposal for terraforming Ceres would involve heating it (using orbital mirrors, detonating thermonuclear devices or colliding small asteroids with Ceres), creating an atmosphere and deep ocean. However, this appears to be based on a misconception that Ceres' surface is icy in a similar way to the gas giant moons. In reality, Ceres' surface is "a layer of mixed ice, silicates and light strong phases best matched by hydrated salts and clathrates". It is unclear what the result of heating this up would be.
Other possibilities
Biological terraforming
Many proposals for planetary engineering involve the use of genetically engineered bacteria.
As synthetic biology matures over the coming decades, it may become possible to build designer organisms from scratch that directly manufacture desired products efficiently. Lisa Nip, a Ph.D. candidate at the MIT Media Lab's Molecular Machines group, said that through synthetic biology, scientists could genetically engineer humans, plants and bacteria to create Earth-like conditions on another planet.
Gary King, microbiologist at Louisiana State University studying the most extreme organisms on Earth, notes that "synthetic biology has given us a remarkable toolkit that can be used to manufacture new kinds of organisms specially suited for the systems we want to plan for" and outlines the prospects for terraforming, saying "we'll want to investigate our chosen microbes, find the genes that code for the survival and terraforming properties that we want (like radiation and drought resistance), and then use that knowledge to genetically engineer specifically Martian-designed microbes". He sees the project's biggest bottleneck in the ability to genetically tweak and tailor the right microbes, estimating that this hurdle could take "a decade or more" to be solved. He also notes that it would be best to develop "not a single kind of microbe but a suite of several that work together".
DARPA is researching the use of photosynthesizing plants, bacteria, and algae grown directly on the Mars surface that could warm up and thicken its atmosphere. In 2015 the agency and some of its research partners created a software tool called DTA GView – a "Google Maps of genomes" – in which the genomes of several organisms can be pulled up to immediately show a list of known genes and where they are located in the genome. According to Alicia Jackson, deputy director of DARPA's Biological Technologies Office, they have developed a "technological toolkit to transform not just hostile places here on Earth, but to go into space not just to visit, but to stay".
Paraterraforming
Also known as the "world house" concept, para-terraforming involves the construction of a habitable enclosure on a planet that encompasses most of the planet's usable area. The enclosure would consist of a transparent roof held one or more kilometers above the surface, pressurized with a breathable atmosphere, and anchored with tension towers and cables at regular intervals. The world house concept is similar to the concept of a domed habitat, but one which covers all (or most) of the planet.
Potential targets for paraterraforming include Mercury, the Moon, Ceres and the gas giant moons.
Adapting humans
It has also been suggested that, instead of or in addition to terraforming a hostile environment, humans might adapt to these places by the use of genetic engineering, biotechnology and cybernetic enhancements. This is known as pantropy.
Issues
Ethical issues
There is a philosophical debate within biology and ecology as to whether terraforming other worlds is an ethical endeavor. From the point of view of a cosmocentric ethic, this involves balancing the need for the preservation of human life against the intrinsic value of existing planetary ecologies. Lucianne Walkowicz has even called terraforming a "planetary-scale strip mining operation".
On the pro-terraforming side of the argument, there are those like Robert Zubrin, Martyn J. Fogg, Richard L. S. Taylor, and the late Carl Sagan who believe that it is humanity's moral obligation to make other worlds suitable for human life, as a continuation of the history of life transforming the environments around it on Earth. They also point out that Earth would eventually be destroyed if nature takes its course, so that humanity faces a very long-term choice between terraforming other worlds or allowing all terrestrial life to become extinct. Terraforming totally barren planets, it is asserted, is not morally wrong, as it does not affect any other life.
The opposing argument posits that terraforming would be an unethical interference in nature, and that given humanity's past treatment of Earth, other planets may be better off without human interference. Still others strike a middle ground, such as Christopher McKay, who argues that terraforming is ethically sound only once we have completely assured that an alien planet does not harbor life of its own; but that if it does, we should not try to reshape it to our own use, but we should engineer its environment to artificially nurture the alien life and help it thrive and co-evolve, or even co-exist with humans. Even this would be seen as a type of terraforming to the strictest of ecocentrists, who would say that all life has the right, in its home biosphere, to evolve without outside interference.
Economic issues
The initial cost of such projects as planetary terraforming would be massive, and the infrastructure of such an enterprise would have to be built from scratch. The required technology has not yet been developed, let alone shown to be financially feasible. John Hickman has pointed out that almost none of the current schemes for terraforming incorporate economic strategies, and most of their models and expectations seem highly optimistic.
In popular culture
Terraforming is a common concept in science fiction, ranging from television, movies and novels to video games.
A related concept from science fiction is xenoforming – a process in which aliens change the Earth or other planets to suit their own needs – an idea suggested as early as H. G. Wells's classic The War of the Worlds (1898).
See also
Notes
References
Dalrymple, G. Brent (2004). Ancient Earth, Ancient Skies: The Age of Earth and its Cosmic Surroundings. Stanford University Press.
Faure, Gunter & Mensing, Teresa M. (2007). Introduction to Planetary Science: The Geological Perspective. Springer.
Fogg, Martyn J. (2000). "The Ethical Dimensions of Space Settlement". Space Policy, 16, 205–211. Also presented (1999) at the 50th International Astronautical Congress, Amsterdam (IAA-99-IAA.7.1.07).
Forget, François; Costard, François & Lognonné, Philippe (2007). Planet Mars: Story of Another World. Springer.
Kargel, Jeffrey Stuart (2004). Mars: A Warmer, Wetter Planet. Springer.
McKay, Christopher P. & Haynes, Robert H. (1997). "Implanting Life on Mars as a Long Term Goal for Mars Exploration". In The Case for Mars IV: Considerations for Sending Humans, ed. Thomas R. Meyer (San Diego, California: American Astronautical Society/Univelt), pp. 209–215.
Read, Peter L. & Lewis, Stephen R. (2004). The Martian Climate Revisited: Atmosphere and Environment of a Desert Planet. Springer.
Sagan, Carl & Druyan, Ann (1997). Pale Blue Dot: A Vision of the Human Future in Space. Ballantine Books.
Schubert, Gerald; Turcotte, Donald L. & Olson, Peter (2001). Mantle Convection in the Earth and Planets. Cambridge University Press.
Taylor, Richard L. S. (1992). "Paraterraforming – The world house concept". Journal of the British Interplanetary Society, 45(8), 341–352.
Thompson, J. M. T. (2001). Visions of the Future: Astronomy and Earth Science. Cambridge University Press.
External links
New Mars forum
Terraformers Society of Canada
Visualizing the steps of Solar System terraforming
Research Paper: Technological Requirements for Terraforming Mars
Terraformers Australia
Terraformers UK
The Terraformation of Worlds
Terraformation de Mars
Fogg, Martyn J. The Terraforming Information Pages
BBC article on Charles Darwin's and Joseph Hooker's artificial ecosystem on Ascension Island that may be of interest to terraforming projects
Robotic Lunar Ecopoiesis Test Bed Principal Investigator: Paul Todd (2004)
World energy supply and consumption

World energy supply and consumption refers to the global supply of energy resources and their consumption. The system of global energy supply consists of the development, refinement, and trade of energy. Energy supplies may exist in various forms, such as raw resources or more processed and refined forms of energy. Raw energy resources include, for example, coal, unprocessed oil and gas, and uranium. Refined forms of energy include, for example, refined oil that becomes fuel, and electricity. Energy resources may be used in various different ways, depending on the specific resource (e.g. coal) and the intended end use (industrial, residential, etc.). Energy production and consumption play a significant role in the global economy; energy is needed in industry and global transportation. The total energy supply chain, from production to final consumption, involves many activities that cause a loss of useful energy.
As of 2022, about 80% of energy consumption is still supplied by fossil fuels. The Gulf States and Russia are major energy exporters. Their customers include, for example, the European Union and China, which do not produce enough energy domestically to satisfy their demand. Total energy consumption tends to increase by about 1–2% per year. More recently, renewable energy has been growing rapidly, averaging about 20% growth per year in the 2010s.
Two key problems with energy production and consumption are greenhouse gas emissions and environmental pollution. Of the roughly 50 billion tonnes of total greenhouse gases emitted worldwide each year, 36 billion tonnes of carbon dioxide resulted from energy use (almost all of it fossil fuels) in 2021. Many scenarios have been envisioned to reduce greenhouse gas emissions, usually under the name of net zero emissions.
There is a clear connection between energy consumption per capita and GDP per capita.
A significant lack of energy supplies is called an energy crisis.
Primary energy production
Primary energy refers to the first form of energy encountered, i.e. raw resources collected directly from energy production, before any conversion or transformation of the energy occurs.
Energy production is usually classified as:
Fossil, using coal, crude oil, and natural gas;
Nuclear, using uranium;
Renewable, using biomass, geothermal, hydropower, solar, wind, tidal, wave, among others.
Primary energy assessment by the IEA follows certain accounting rules to ease the measurement of different kinds of energy; these rules are controversial. The water and air flows that drive hydro and wind turbines, and the sunlight that powers solar panels, are not counted as PE; for these sources, PE is set at the electric energy produced. Fossil and nuclear energy, by contrast, are set at the reaction heat, which is about three times the electric energy. This measurement difference can lead to underestimating the economic contribution of renewable energy.
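A toy numeric illustration of this accounting asymmetry, using the approximate 3:1 heat-to-electricity ratio mentioned above (the function and the efficiency figure are illustrative, not IEA definitions):

# Primary energy credited for 1 TWh of generated electricity under
# IEA-style accounting: thermal sources count at reaction heat
# (electricity / conversion efficiency), while wind, solar and hydro
# count at the electricity itself.
def primary_energy_twh(electricity_twh, thermal_efficiency=None):
    if thermal_efficiency is None:   # hydro, wind, solar
        return electricity_twh
    return electricity_twh / thermal_efficiency

print(primary_energy_twh(1.0))                           # solar  -> 1.0 TWh PE
print(primary_energy_twh(1.0, thermal_efficiency=0.33))  # nuclear -> ~3.0 TWh PE
# The same electricity output is booked as roughly three times more
# primary energy when it comes from a thermal source.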
Enerdata displays data for "Total energy / production: Coal, Oil, Gas, Biomass, Heat and Electricity" and for "Renewables / % in electricity production: Renewables, non-renewables".
The table lists worldwide PE and the countries producing most (76%) of that in 2021, using Enerdata. The amounts are rounded and given in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh (41.9 petajoules), where 1 TWh = 10⁹ kWh) and % of Total. Renewable is Biomass plus Heat plus renewable percentage of Electricity production (hydro, wind, solar). Nuclear is nonrenewable percentage of Electricity production. The above-mentioned underestimation of hydro, wind and solar energy, compared to nuclear and fossil energy, applies also to Enerdata.
The 2021 world total energy production of 14,800 Mtoe corresponds to a little over 172 PWh per year, or an average rate of about 19.6 TW.
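A quick arithmetic check of this conversion, using the 1 Mtoe = 11.63 TWh factor given above:

production_mtoe = 14_800
twh_per_year = production_mtoe * 11.63   # 1 Mtoe = 11.63 TWh
pwh_per_year = twh_per_year / 1000       # ~172 PWh/yr
average_tw = twh_per_year / 8760         # divide by hours in a year
print(f"{pwh_per_year:.0f} PWh/yr, {average_tw:.1f} TW average")
# -> 172 PWh/yr, 19.6 TW, matching the figures in the text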
Energy conversion
Energy resources must be processed to make them suitable for final consumption. For example, raw mined coal or raw natural gas produced from a well may contain impurities that make it unsuitable to be burned in a power plant.
Primary energy is converted in many ways to energy carriers, also known as secondary energy:
Coal mainly goes to thermal power stations. Coke is derived by destructive distillation of bituminous coal.
Crude oil goes mainly to oil refineries
Natural gas goes to natural-gas processing plants to remove contaminants such as water, carbon dioxide and hydrogen sulfide, and to adjust the heating value. It is used as fuel gas, including in thermal power stations.
Nuclear reaction heat is used in thermal power stations.
Biomass is used directly or converted to biofuel.
Electricity generators are driven by steam or gas turbines in a thermal plant, or water turbines in a hydropower station, or wind turbines, usually in a wind farm. The invention of the solar cell in 1954 started electricity generation by solar panels, connected to a power inverter. Mass production of panels around the year 2000 made this economical.
Energy trade
Much primary and converted energy is traded among countries. The table lists countries with a large difference between exports and imports in 2021, expressed in Mtoe. A negative value indicates that much energy import is needed for the economy. Russian gas exports fell sharply in 2022, as pipeline capacity to Asia plus LNG export capacity is much smaller than the volume of gas no longer sent to Europe.
Transport of energy carriers is done by tanker ship, tank truck, LNG carrier, rail freight transport, pipeline and by electric power transmission.
Total energy supply
Total energy supply (TES) is the sum of production and imports, minus exports and storage changes. For the whole world, TES nearly equals primary energy (PE) because imports and exports cancel out, but for individual countries TES and PE differ in quantity, and also in quality, as secondary energy is involved, e.g., the import of an oil refinery product. TES is all the energy required to supply energy to end users.
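Written as a balance (a paraphrase of the definition above, with increases in storage counted as negative):

\[
\text{TES} \;=\; \text{Production} \;+\; \text{Imports} \;-\; \text{Exports} \;-\; \Delta\text{Storage}.
\]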
The tables list TES and PE in 2021 for some countries where these differ substantially, together with the history of TES. Most growth of TES since 1990 occurred in Asia. The amounts are rounded and given in Mtoe. Enerdata labels TES as "Total energy consumption".
25% of worldwide primary production is used for conversion and transport, and 6% for non-energy products like lubricants, asphalt and petrochemicals. In 2019 TES was 606 EJ and final consumption was 418 EJ, 69% of TES. Most of the energy lost in conversion occurs in thermal electricity plants and in the energy industry's own use.
Discussion about energy loss
There are different qualities of energy. Heat, especially at a relatively low temperature, is low-quality energy, whereas electricity is high-quality energy. It takes around 3 kWh of heat to produce 1 kWh of electricity. But by the same token, a kilowatt-hour of this high-quality electricity can be used to pump several kilowatt-hours of heat into a building using a heat pump. Electricity can be used in many ways in which heat cannot. So the loss of energy incurred in thermal electricity plants is not comparable to a loss due to, say, resistance in power lines, because of quality differences.
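A back-of-the-envelope comparison makes this quality argument concrete. The 33% plant efficiency, 90% furnace efficiency and heat-pump coefficient of performance (COP) of 3.5 below are illustrative round numbers, not figures from the text:

# Heat delivered to a building per kWh of chemical energy in fuel,
# via two routes.
fuel_heat_kwh = 1.0

direct_burning = fuel_heat_kwh * 0.90        # on-site furnace, ~90% efficient
via_heat_pump = fuel_heat_kwh * 0.33 * 3.5   # plant at 33%, then heat pump COP 3.5

print(f"direct burning: {direct_burning:.2f} kWh of heat")   # 0.90
print(f"via heat pump : {via_heat_pump:.2f} kWh of heat")    # ~1.16
# Routing through "lossy" electricity can deliver more heat than burning
# the fuel directly, which is why heat losses and electricity losses are
# not directly comparable.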
In fact, the loss in thermal plants is due to poor conversion of chemical energy of fuel to electricity by combustion. Chemical energy of fuel is not inherently low-quality; for example, conversion to electricity in fuel cells can theoretically approach 100%. So energy loss in thermal plants is real loss.
Final consumption
Total final consumption (TFC) is the worldwide consumption of energy by end users (whereas primary energy consumption (Eurostat) or total energy supply (IEA) is total energy demand, and thus also includes what the energy sector uses itself plus transformation and distribution losses). This energy consists of fuel (78%) and electricity (22%). The tables list amounts expressed in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh) and how much of this is renewable energy. Non-energy products are not considered here. The data are of 2018. The world's renewable share of TFC was 18% in 2018: 7% traditional biomass, 3.6% hydropower and 7.4% other renewables.
In the period 2005–2017, worldwide final consumption of coal increased by 23%, that of oil and gas by 18%, and that of electricity by 41%.
Fuel comes in three types: fossil fuel, i.e. natural gas, fuel derived from petroleum (LPG, gasoline, kerosene, gas/diesel, fuel oil) or from coal (anthracite, bituminous coal, coke, blast furnace gas); renewable fuel (biofuel and fuel derived from waste); and fuel used for district heating.
The amounts of fuel in the tables are based on lower heating value.
The first table lists final consumption in the countries/regions which use most (85%), and per person, as of 2018. In developing countries fuel consumption per person is low, and a larger share of it is renewable. Canada, Venezuela and Brazil generate most of their electricity with hydropower.
The next table shows countries consuming most (85%) in Europe.
Energy for energy
Some fuel and electricity is used to construct, maintain and demolish/recycle installations that produce fuel and electricity, such as oil platforms, uranium isotope separators and wind turbines. For these producers to be economical the ratio of energy returned on energy invested (EROEI) or energy return on investment (EROI) should be large enough.
If the final energy delivered for consumption is E and the EROI equals R, then the net energy available is E − E/R, so the available fraction is 1 − 1/R, or 100 − 100/R percent. For R > 10 more than 90% is available, but for R = 2 only 50%, and for R = 1 none. This steep decline is known as the net energy cliff.
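A short sketch of the net energy cliff:

# Net fraction of delivered energy as a function of EROI R: 1 - 1/R.
for R in (1, 2, 3, 5, 10, 20, 50):
    print(f"EROI {R:>2}: {100 * (1 - 1 / R):5.1f}% net energy")
# Output falls off gently at high EROI but collapses below R ~ 5,
# hence the name "net energy cliff".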
Availability of data
Many countries publish statistics on the energy supply and consumption of their own country, of other countries of interest, or of all countries combined in one chart. One of the largest organizations in this field, the International Energy Agency (IEA), sells its comprehensive yearly energy data, which leaves this data paywalled and difficult for internet users to access. The organization Enerdata, on the other hand, publishes a free yearbook, making the data more accessible. Another trustworthy organization that provides accurate energy data, mainly for the USA, is the U.S. Energy Information Administration.
Trends and outlook
Due to the COVID-19 pandemic, there was a significant decline in energy usage worldwide in 2020, but total energy demand worldwide had recovered by 2021, and has hit a record high in 2022.
In 2022, consumers worldwide spent nearly USD 10 trillion on energy, averaging more than USD 1,200 per person. This reflects a 20% increase over the previous five-year average, highlighting the significant economic impact and the increasing financial burden of energy consumption on a global scale.
IEA scenarios
In World Energy Outlook 2023 the IEA notes that "We are on track to see all fossil fuels peak before 2030". The IEA presents three scenarios:
The Stated Policies Scenario (STEPS) provides an outlook based on the latest policy settings. The share of fossil fuel in global energy supply – stuck for decades around 80% – starts to edge downwards and reaches 73% by 2030. This undercuts the rationale for any increase in fossil fuel investment. Renewables are set to contribute 80% of new power capacity to 2030, with solar PV alone accounting for more than half. The STEPS sees a peak in energy-related CO2 emissions in the mid-2020s but emissions remain high enough to push up global average temperatures to around 2.4 °C in 2100. Total energy demand continues to increase through to 2050. Total energy investment remains at about US$3 trillion per year.
The Announced Pledges Scenario (APS) assumes all national energy and climate targets made by governments are met in full and on time. The APS is associated with a temperature rise of 1.7 °C in 2100 (with a 50% probability). Total energy investment rises to about US$4 trillion per year after 2030.
The Net Zero Emissions by 2050 (NZE) Scenario limits global warming to 1.5 °C. The share of fossil fuel reaches 62% in 2030. Methane emissions from fossil fuel supply are cut by 75% by 2030. Total energy investment rises to almost US$5 trillion per year after 2030. Clean energy investment needs to rise everywhere, but the steepest increases are needed in emerging market and developing economies other than China, requiring enhanced international support. The share of electricity in final consumption exceeds 50% by 2050 in NZE. The share of nuclear power in electricity generation remains broadly stable over time in all scenarios, at about 9%.
The IEA's "Electricity 2024" report details a 2.2% growth in global electricity demand for 2023, forecasting an annual increase of 3.4% through 2026, with notable contributions from emerging economies like China and India, despite a slump in advanced economies due to economic and inflationary pressures. The report underscores the significant impact of data centers, artificial intelligence and cryptocurrency, projecting a potential doubling of electricity consumption to 1,000 TWh by 2026, which is on par with Japan's current usage. Notably, 85% of the additional demand is expected to originate from China and India, with India's demand alone predicted to grow over 6% annually until 2026, driven by economic expansion and increasing air conditioning use.
Southeast Asia's electricity demand is also forecasted to climb by 5% annually through 2026. In the United States, a decrease was seen in 2023, but a moderate rise is anticipated in the coming years, largely fueled by data centers. The report also anticipates that a surge in electricity generation from low-emissions sources will meet the global demand growth over the next three years, with renewable energy sources predicted to surpass coal by early 2025.
Alternative scenarios
The goal set in the Paris Agreement to limit climate change will be difficult to achieve. Various scenarios for achieving the Paris Climate Agreement Goals have been developed, using IEA data but proposing transition to nearly 100% renewables by mid-century, along with steps such as reforestation. Nuclear power and carbon capture are excluded in these scenarios. The researchers say the costs will be far less than the $5 trillion per year governments currently spend subsidizing the fossil fuel industries responsible for climate change.
In the +2.0 °C (global warming) scenario, total primary energy demand in 2040 can be 450 EJ (10,755 Mtoe), or 400 EJ (9,560 Mtoe) in the +1.5 °C scenario, well below the current production. Renewable sources can increase their share to 300 EJ in the +2.0 °C scenario, or 330 EJ in the +1.5 °C scenario, in 2040. In 2050 renewables can cover nearly all energy demand. Non-energy consumption will still include fossil fuels.
Global electricity generation from renewable energy sources will reach 88% by 2040 and 100% by 2050 in the alternative scenarios. "New" renewables—mainly wind, solar and geothermal energy—will contribute 83% of the total electricity generated. The average annual investment required between 2015 and 2050, including costs for additional power plants to produce hydrogen and synthetic fuels and for plant replacement, will be around $1.4 trillion.
Shifts from domestic aviation to rail and from road to rail are needed. Passenger car use must decrease in the OECD countries (but increase in developing world regions) after 2020. The passenger car use decline will be partly compensated by strong increase in public transport rail and bus systems.
CO2 emissions can fall from 32 Gt in 2015 to 7 Gt (+2.0 °C scenario) or 2.7 Gt (+1.5 °C scenario) in 2040, and to zero in 2050.
See also
Lists
List of countries by energy intensity
List of countries by electricity consumption
List of countries by electricity production
List of countries by energy consumption per capita
List of countries by greenhouse gas emissions
List of countries by energy consumption and production
Notes
References
External links
Enerdata - World Energy & Climate Statistics
International Energy Outlook, by the U.S. Energy Information Administration
World Energy Outlook from the IEA
Capability approach

The capability approach (also referred to as the capabilities approach) is a normative approach to human welfare that concentrates on the actual capability of persons to achieve lives they value, rather than solely on their right or freedom to do so. It was conceived in the 1980s as an alternative approach to welfare economics.
In this approach, Amartya Sen and Martha Nussbaum combine a range of ideas that were previously excluded from (or inadequately formulated in) traditional approaches to welfare economics. The core focus of the capability approach is improving access to the tools people use to live a fulfilling life.
Assessing capability
Sen initially argued for five components to assess capability:
The importance of real freedoms in the assessment of a person's advantage
Individual differences in the ability to transform resources into valuable activities
The multi-variate nature of activities giving rise to wellbeing
A balance of materialistic and nonmaterialistic factors in evaluating human welfare
Concern for the distribution of opportunities within society
Subsequently, in collaboration with political philosopher Martha Nussbaum, development economist Sudhir Anand and economic theorist James Foster, Sen has helped propel the capabilities approach to appear as a policy paradigm in debates concerning human development; his research inspired the creation of the UN's Human Development Index (a popular measure of human development that captures capabilities in health, education, and income). Additionally, the approach has been operationalized to have a high income country focus by Paul Anand and colleagues. Sen also founded the Human Development and Capability Association in 2004 in order to further promote discussion, education, and research on the human development and capability approach. Since then, the approach has been much discussed by political theorists, philosophers, and a range of social scientists, including those with a particular interest in human health.
The approach emphasizes functional capabilities ("substantive freedoms", such as the ability to live to old age, engage in economic transactions, or participate in political activities); these are construed in terms of the substantive freedoms people have reason to value, instead of utility (happiness, desire-fulfillment or choice) or access to resources (income, commodities, assets). An approach to wellbeing using utility can be found in utilitarianism, while access to resources is advocated by the Rawlsian approach.
Poverty is understood as capability-deprivation. It is noteworthy that proponents emphasize not only how humans function, but also their access to capabilities "to achieve outcomes that they value and have reason to value". Anyone can be deprived of capabilities in many ways, e.g. by ignorance, government oppression, lack of financial resources, or false consciousness.
This approach to human well-being emphasizes the importance of freedom of choice, individual heterogeneity and the multi-dimensional nature of welfare. In significant respects, the approach is consistent with the handling of choice within conventional microeconomic consumer theory, although its conceptual foundations enable it to acknowledge the existence of claims, like rights, which normatively dominate utility-based claims.
Key terms
Functionings
In the most basic sense, functionings consist of "beings and doings". As a result, living may be seen as a set of interrelated functionings. Essentially, functionings are the states and activities constitutive of a person's being. Examples of functionings can vary from elementary things, such as being healthy, having a good job, and being safe, to more complex states, such as being happy, having self-respect, and being calm. Moreover, Amartya Sen contends that functionings are crucial to an adequate understanding of the capability approach; capability is conceptualized as a reflection of the freedom to achieve valuable functionings.
In other words, functionings are the subjects of the capabilities referred to in the approach: what we are capable of, want to be capable of, or should be capable of being and/or do. Therefore, a person's chosen combination of functionings, what they are and do, is part of their overall capability set — the functionings they were able to do. Yet, functionings can also be conceptualized in a way that signifies an individual's capabilities. Eating, starving, and fasting would all be considered functionings, but the functioning of fasting differs significantly from that of starving because fasting, unlike starving, involves a choice and is understood as choosing to fast despite the presence of other options. Consequently, an understanding of what constitutes functionings is inherently tied together with an understanding of capabilities, as defined by this approach.
Capabilities
Capabilities are the alternative combinations of functionings that are feasible for a person to achieve. Formulations of capability have two parts: functionings and opportunity freedom — the substantive freedom to pursue different functioning combinations. Ultimately, capabilities denote a person's opportunity and ability to generate valuable outcomes, taking into account relevant personal characteristics and external factors. The important part of this definition is the "freedom to achieve", because if freedom had only instrumental value (valuable as a means to achieve an end) and no intrinsic value (valuable in and of itself) to a person's well-being, then the value of the capability set as a whole would simply be defined by the value of a person's actual combination of functionings. Such a definition would not acknowledge the entirety of what a person is capable of doing and their resulting current state due to the nature of the options available to them. Consequently, the capability set outlined by this approach is not merely concerned with achievements; rather, freedom of choice, in and of itself, is of direct importance to a person's quality of life.
For example, the difference between fasting and starving, in terms of a person's well-being, is whether the person is choosing not to eat. In this example, the functioning is starving, but the capability to obtain an adequate amount of food is the key element in evaluating well-being between individuals in the two states. In sum, having a lifestyle is not the same as choosing it; well-being depends on how that lifestyle came to be. More formally, while the combination of a person's functionings represents their actual achievements, their capability set represents their opportunity freedom — their freedom to choose between alternative combinations of functionings.
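The fasting/starving distinction can be expressed in a toy formalization. This is an illustrative sketch only, with invented names and values, not a model taken from Sen or the capability literature:

# A capability set is the set of functioning combinations a person
# could achieve; achieved functionings are one chosen element of it.
starving_person = {
    "capability_set": [("not eating",)],                     # no alternative
    "achieved": ("not eating",),
}
fasting_person = {
    "capability_set": [("not eating",), ("adequately fed",)],
    "achieved": ("not eating",),                             # freely chosen
}

def has_opportunity_freedom(person):
    """Same achieved functioning, different opportunity sets."""
    return len(person["capability_set"]) > 1

print(has_opportunity_freedom(starving_person))  # False -> deprivation
print(has_opportunity_freedom(fasting_person))   # True  -> a chosen fast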
In addition to being the result of capabilities, some functionings are also a prerequisite for capabilities, i.e., there is a dual role of some functionings as both ends and instruments. Examples of functionings that are a direct requirement for capabilities are good nourishment, mental and physical health, and education.
Nussbaum further distinguishes between internal capabilities that are personal abilities, and combined capabilities that are "defined as internal capabilities together with the social/political/economic conditions in which functioning can actually be chosen". She points out that the notion of (combined) capability "combines internal preparedness with external opportunity in a complicated way, so that measurement is likely to be no easy task."
An extension of the capabilities approach was published in 2013 in Freedom, Responsibility and Economics of the Person. This book explores the interconnected concepts of person, responsibility and freedom in economics, moral philosophy and politics. It tries to reconcile the rationality and morality of individuals. It presents a methodological reflection (phenomenology versus Kantian thought) with the aim to re-humanise the person, through actions, and through the values and norms that lead to corresponding rights and obligations that must be ordered. The book extends the capabilities approach in a critical form. In particular, it considers freedom in relation to responsibility, that is, the capacity of people to apply moral constraints to themselves. By contrast, Sen's capability approach considers freedom as a purely functional rationality of choice.
Agency
Amartya Sen defines an agent as someone who acts and brings about change, whose achievement can be evaluated in terms of his or her own values and goals. This differs from a common use of the term "agent" sometimes used in economics and game theory to mean a person acting on someone else's behalf. Agency depends on the ability to personally choose the functionings one values, a choice that may not correlate with personal well-being. For example, when a person chooses to engage in fasting, they are exercising their ability to pursue a goal they value, though such a choice may not positively affect physical well-being. Sen explains that a person as an agent need not be guided by a pursuit of well-being; agency achievement considers a person's success in terms of their pursuit of the whole of their goals.
For the purposes of the capability approach, agency primarily refers to a person's role as a member of society, with the ability to participate in economic, social, and political actions. Therefore, agency is crucial in assessing one's capabilities and any economic, social, or political barriers to one's achieving substantive freedoms. Concern for agency stresses that participation, public debate, democratic practice, and empowerment, should be fostered alongside well-being.
Alkire and Deneulin pointed out that agency goes together with the expansion of valuable freedoms. That is, in order to be agents of their lives, people need the freedom to be educated, speak in public without fear, express themselves, associate, etc.; conversely, people can establish such an environment by being agents. In summary, the agency aspect is important in assessing what a person can do in line with his or her conception of the good.
Nussbaum's central capabilities
Nussbaum (2000) frames these basic principles in terms of 10 capabilities, i.e. real opportunities based on personal and social circumstance. She claims that a political order can be considered decent only if it secures at least a threshold level of these 10 capabilities for all inhabitants. Nussbaum's capabilities approach is centered around the notion of individual human dignity. Nussbaum emphasizes that this approach is necessary since even individuals within a family unit can have vastly different needs. Given Nussbaum's contention that the goal of the capabilities approach is to produce capabilities for each and every person, the capabilities below belong to individual persons, rather than to groups. The capabilities approach has been very influential in development policy, where it has shaped the evolution of the human development index (HDI); it has been much discussed in philosophy, and is increasingly influential in a range of social sciences.
More recently, the approach has been criticized for being grounded in the liberal notion of freedom.
The core capabilities Nussbaum argues should be supported by all democracies are:
Life. Being able to live to the end of a human life of normal length; not dying prematurely, or before one's life is so reduced as to be not worth living.
Bodily Health. Being able to have good health, including reproductive health; to be adequately nourished; to have adequate shelter.
Bodily integrity. Being able to move freely from place to place; to be secure against violent assault, including sexual assault and domestic violence; having opportunities for sexual satisfaction and for choice in matters of reproduction.
Senses, Imagination, and Thought. Being able to use the senses, to imagine, think, and reason—and to do these things in a "truly human" way, a way informed and cultivated by an adequate education, including, but by no means limited to, literacy and basic mathematical and scientific training. Being able to use imagination and thought in connection with experiencing and producing works and events of one's own choice, religious, literary, musical, and so forth. Being able to use one's mind in ways protected by guarantees of freedom of expression with respect to both political and artistic speech, and freedom of religious exercise. Being able to have pleasurable experiences and to avoid non-beneficial pain.
Emotions. Being able to have attachments to things and people outside ourselves; to love those who love and care for us, to grieve at their absence; in general, to love, to grieve, to experience longing, gratitude, and justified anger. Not having one's emotional development blighted by fear and anxiety. (Supporting this capability means supporting forms of human association that can be shown to be crucial in their development.)
Practical Reason. Being able to form a conception of the good and to engage in critical reflection about the planning of one's life. (This entails protection for the liberty of conscience and religious observance.)
Affiliation.
Being able to live with and toward others, to recognize and show concern for other humans, to engage in various forms of social interaction; to be able to imagine the situation of another. (Protecting this capability means protecting institutions that constitute and nourish such forms of affiliation, and also protecting the freedom of assembly and political speech.)
Having the social bases of self-respect and non-humiliation; being able to be treated as a dignified being whose worth is equal to that of others. This entails provisions of non-discrimination on the basis of race, sex, sexual orientation, ethnicity, caste, religion, national origin and species.
Other Species. Being able to live with concern for and in relation to animals, plants, and the world of nature.
Play. Being able to laugh, to play, to enjoy recreational activities.
Control over one's Environment.
Political. Being able to participate effectively in political choices that govern one's life; having the right of political participation, protections of free speech and association.
Material. Being able to hold property (both land and movable goods), and having property rights on an equal basis with others; having the right to seek employment on an equal basis with others; having the freedom from unwarranted search and seizure. In work, being able to work as a human, exercising practical reason and entering into meaningful relationships of mutual recognition with other workers.
Although Nussbaum did not claim her list to be definitive and unchanging, she strongly advocated for outlining a list of central human capabilities. On the other hand, Sen refuses to supply a specific list of capabilities. Sen argues that an exact list and weights would be too difficult to define. For one, it requires specifying the context of use of capabilities, which could vary. Also, Sen argues that part of the richness of the capabilities approach is its insistence on the need for open valuational scrutiny for making social judgments. He is disinclined to devalue in any way the domain of reasoning in the public sphere. Instead, Sen argues that the task of weighing various capabilities should be left to the ethical and political considerations of each society, based on public reasoning. Along with concerns raised about Nussbaum's list, Alkire and Black also argue that Nussbaum's methodology "runs counter to an essential thrust of the capabilities approach which has been the attempt to redirect development theory away from a reductive focus on a minimally decent life towards a more holistic account of human well-being for all people."
That said, applications to development are discussed in Sen (1999), Nussbaum (2000), and Clark (2002, 2005), and are now numerous to the point where the capabilities approach is widely accepted as a paradigm in development. The programme of work operationalising the capability approach by Anand and colleagues draws heavily on Nussbaum's list as a relatively comprehensive, high-level account of the space in which human well-being or life quality is experienced. This work argues that the subitems on Nussbaum's list are too distinct to be monitored by a single question each, and that a dashboard of some 40-50 indicators is required to inform the development of empirical work.
Measurement of capabilities
The measurement of capabilities was previously thought to be a particular barrier to the implementation and use of the approach. However, two particular lines of work, in research and policy, have sought to show that meaningful indicators of what individuals (and in some cases governments) are able to do can be developed and used to generate a range of insights.
In 1990, the UN Human Development Report published the first such exercise, which focused on health, education and income; these were equally weighted to generate the Human Development Index. At the same time, and subsequently, researchers recognizing that these three areas covered only certain elements of life quality have sought to develop more comprehensive measures. A major project in this area has been the 'capabilities measurement project', in which Anand has led teams of philosophers, economists and social scientists to generate data that give a full and direct implementation of the approach, drawing particularly on the key relations and concepts developed in Sen (1985) but also on work to do with the content of the approach. The earliest work in this project developed a set of around 50 capability indicators which were used to develop a picture of quality of life and deprivation in the UK. Subsequently, Anand and colleagues have developed datasets for the US, UK and Italy in which all the elements of Sen's framework are reflected in the data, permitting all three key equations, for functionings, experience and capabilities, to be estimated.
In a series of papers, they have shown that both their primary data and some secondary datasets can be used to shed light on the production and distribution of life quality for working age adults, those in retirement, very young children, those vulnerable to domestic violence, migrants, excluded traveler communities and the disabled. They use these applications to argue that the capability framework is a particularly good fit for understanding quality of life across the life course and that it provides a relatively universal grammar for understanding the elements of human well-being.
Monetary vs. non-monetary measures of well-being
Monetary and non-monetary measures of well-being are ideal when used to complement each other. Understanding the various aspects of the economic development process not only helps address issues of inequality and lags in human development, but also helps to pinpoint where countries lag, which, once addressed, can further promote well-being and advancement. As the Organisation for Economic Co-operation and Development (OECD) (2006) notes: Well-being has several dimensions, of which monetary factors are only one. They are nevertheless an important one, since richer economies are better placed to create and maintain other well-being-enhancing conditions, such as a clean environment, the likelihood that the average person will have a right to 10 years or more of education, and the chance to lead a comparatively long and healthy life. Well-being will also be increased by institutions that enable citizens to feel that they control their own lives, and that investment of their time and resources will be rewarded. In turn, this will lead to higher incomes in a virtuous circle. Simon Kuznets, the developer of GNP, cautioned against using the measure as an indicator of overall welfare, which speaks to the unintended use of output-based measures as indicators of human welfare.
Critique of output-based measures
The use of GDP and GNP as approximations of well-being and development has been widely critiqued, because these measures are often misused as indicators of well-being and human development when in fact they only describe the economic capacity of a country, or an average income level when expressed on a per person basis. In particular, feminist economics and environmental economics offer a number of critiques. Critics in these fields typically discuss gender inequalities, insufficient representation of the environmental costs of production, and general issues of misusing an output-based measure for unintended purposes. In sum, the conclusion of the capabilities approach is that people do not value monetary income alone, and that development is linked to various indicators of life satisfaction, which are therefore important in measuring well-being. Development policies strive to create an environment for people to live long, healthy and creative lives.
Feminist critiques
Nussbaum highlights some of the problematic assumptions and conclusions of output-based approaches to development. First, she notes that GNP and GDP do not consider special requirements to help the most vulnerable, such as women. Specifically, Nussbaum mentions that output-based approaches ignore the distribution of needs for the varying circumstances of people, for example a pregnant woman needs more resources than a non-pregnant woman or a single man.
Also, output-based measures ignore unpaid work, which includes child rearing and the societal advantages that result from a mother's work. Marilyn Waring, a political economist and activist for women's rights, elaborates on the example of a mother engaged in child care, domestic care and the production of a few goods for the informal market, all of which are usually done simultaneously. These activities provide economic benefits, but are not valued in national accounting systems; this suggests that the definition of unemployment used in output-based measures is inappropriate. (See the article on Feminist economics, section "Well-being".)
Environmental critiques
Another critique by Waring is that output-based measures ignore the negative effects of economic growth, so commodities that lower social welfare, such as nuclear weapons, and activities that harm it, such as oil extraction leading to spills, are counted as positive output. The "anti-bads", or the defensive expenditures incurred to fight "bads", are not counted as a deduction in accounting systems (p. 11). Furthermore, natural resources are treated as limitless, and negative outputs, such as pollution and associated health risks, are not deducted from the measures.
Technical and misinterpretation critiques
When GNP and GDP were developed, their intended use was not for measuring human well-being; the intended use was as an indicator of economic growth, and that does not necessarily translate into human well-being. Kuznets has often made this point, in his words, "distinctions must be kept in mind between quantity and quality of growth, between costs and returns and between the short and long run. Goals for more growth should specify more growth of what and for what" (p. 9).
Nussbaum also points out that GNP and GDP omit income distribution and the opportunity or ability to turn resources into activities (this critique stems directly from the capabilities approach). Kuznets terms this a problem of "obtaining an unduplicated total of all output" (p. 15); this suggests that people are only seen as consumers and not as potential producers, hence any products purchased by an individual are not seen as "being consumed in the productive process of turning out other goods" (p. 15).
These accounting measures also fail to capture all forms of work, focusing only on "engagement in work for pay or profit" (p. 133) and leaving out contributions to a society and economy, like volunteer work and subsistence farming. Kuznets provides the example of the process by which farmers devote time and energy to bringing virgin land into cultivation. Furthermore, GNP and GDP only account for monetary exchanges, and place no value on some important intangibles such as leisure time.
Shift to alternative measures
The capabilities approach has been highly influential in human development theories and in valuational methods of capturing capabilities; the theory has led to the creation of the HDI, IHDI and GII and to their use by international organizations such as the United Nations. In companies, capabilities are included in Key Development Indicators (KDIs) as measures of development, including employee development. In 1990, the Human Development Report (HDR), commissioned by the UNDP, set out to create a distribution-sensitive development measure.
This measure was created to rival the more traditional metrics of GDP and GNP, which had previously been used to measure level of development in a given country, but which did not contain provisions for terms of distribution. The resulting measure was entitled the Human Development Index, created by Mahbub ul Haq in collaboration with Sen and others. The purpose was to create an indicator of human development, especially one that would provide a general assessment and critique of global human development to shed light on persistent inequality, poverty and other capability deprivations despite high levels of GDP growth.
Currently the HDI continues to be used in the Human Development Report in addition to many other measures (based on theoretical perspectives of Capabilities) that have been developed and used by the United Nations. Among these indices are the Gender-related Development Index (GDI), the Gender Empowerment Measure (GEM), introduced in 1995, and the more recent Gender Inequality Index (GII) and the Inequality-adjusted Human Development Index (IHDI), both adopted in 2010.
Capabilities-based indices
The following are a few of the major indices that were created based on the theoretical grounds of the capabilities approach.
Human development index
The Human Development Index takes into consideration a number of development and well-being factors that are not taken into account in the calculation of GDP and GNP. The Human Development Index is calculated using the indicators of life expectancy, adult literacy, school enrollment, and logarithmic transformations of per-capita income. Moreover, it is noted that the HDI "is a weighted average of income adjusted for distributions and purchasing power, life expectancy, literacy and health" (p. 16).
The HDI is calculated for individual countries with a value between 0 and 1 and is "interpreted...as the ultimate development that has been attained by that nation" (p. 17). Currently, the 2011 Human Development Report also includes the Inequality-adjusted Human Development Index, which considers exactly the same dimensions as the HDI; however, the IHDI adjusts all three dimensions (a long and healthy life, knowledge, and a decent standard of living) for inequality in the distribution of each dimension across the population.
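As an arithmetic illustration, the pre-2010 formula matching the indicators listed above (life expectancy, adult literacy, school enrollment, and log income) can be sketched in a few lines of code. The goalpost constants below are the ones the UNDP published for that period, but the function name and example values are purely illustrative, and the post-2010 HDI uses a geometric rather than an arithmetic mean.

```python
import math

def hdi_pre_2010(life_expectancy, adult_literacy, gross_enrollment, gdp_per_capita):
    """Simplified pre-2010 HDI: an equally weighted mean of three dimension indices."""
    # Life expectancy index: goalposts of 25 and 85 years.
    life_index = (life_expectancy - 25) / (85 - 25)
    # Education index: 2/3 adult literacy + 1/3 gross enrollment (both as fractions).
    education_index = (2 / 3) * adult_literacy + (1 / 3) * gross_enrollment
    # GDP index: log income between goalposts of $100 and $40,000 (PPP),
    # so that extra income counts for less at higher income levels.
    gdp_index = (math.log(gdp_per_capita) - math.log(100)) / (math.log(40000) - math.log(100))
    # Equal weighting of the three dimensions yields a value between 0 and 1.
    return (life_index + education_index + gdp_index) / 3

# Hypothetical country: life expectancy 70, 90% literacy, 75% enrollment, $10,000 income.
print(round(hdi_pre_2010(70, 0.90, 0.75, 10_000), 3))  # roughly 0.79
```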
Gender-related development index
The Gender-related Development Index is defined as a "distribution-sensitive measure that accounts for the human development impact of existing gender gaps in the three components of the HDI" (p. 243). In this way, the GDI accounts for shortcomings in the HDI in terms of gender, because it re-evaluates a country's score in the three areas of the HDI based on perceived gender gaps, and penalizes the score of the country if, indeed, large gender disparities in those areas exist. This index is used in unison with the HDI and therefore also captures the elements of capabilities that the HDI holds. In addition, it considers women's capabilities, which have been a focus in much of Sen's and Nussbaum's work (to list a few: Nussbaum, 2004a; Nussbaum, 2004b; Sen, 2001; Sen, 1990).
Gender empowerment measure
The Gender Empowerment Measure (GEM) is considerably more specialized than the GDI. The GEM focuses particularly on the relative empowerment of women in a given country. The empowerment of women is measured by evaluating women's employment in high-ranking economic positions, seats in parliament, and share of household income. Notably, this measurement captures more of Nussbaum's 10 Central Capabilities, such as Senses, Imagination and Thought; Affiliation; and Control Over One's Environment.
Gender inequality index
In the 2013 Human Development Report, the Gender Inequality Index, introduced in 2010, continues to adjust the GDI and the GEM. This composite measure uses three dimensions: reproductive health, empowerment, and labor force participation. When constructing the index the following criteria were key: conceptual relevance to definitions of human development and theory; non-ambiguity, so that the index is easily interpreted; reliability of data that is standardized and collected/processed by a trustworthy organization; no redundancy found in other indicators; and, lastly, power of discrimination, where distribution is well distinguished among countries and there is no "bunching" among top and bottom countries (p. 10). This index also captures some of Nussbaum's 10 Central Capabilities (Senses, Imagination and Thought; Affiliation; and Control Over One's Environment).
Other measures
In 1997, the UNDP introduced the Human Poverty Index (HPI), which is aimed at measuring poverty in both industrialized and developing countries. The HPI is a "nonincome-based" measure of poverty (p. 100) which focuses on "human outcomes in terms of choices and opportunities that a person faces" (p. 99). In support of this index, Sakiko Fukuda-Parr—a development economist and past Director of The Human Development Report Office—differentiates between income poverty and human poverty. Human poverty can be interpreted as deprivation of the ability to lead a long, healthy and creative life with a decent standard of living.
Economic evaluation in health care
The capability approach is being developed and increasingly applied in health economics, for use in cost-effectiveness analysis. It is seen as an alternative to existing preference-based measures of health-related quality of life (for example the EQ-5D) that focus on functioning, and can be applied within the framework of quality-adjusted life years (QALYs). A number of measures have been created for use in particular contexts such as older people, public health and mental health, as well as more generic capability-based outcome measures. Caution remains necessary where measures do not explicitly rule out people's adaptation to their circumstances, for example to physical health problems.
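For context, the QALY framework mentioned above weights time spent in each health state by a utility value for that state (1.0 for full health, 0.0 for dead), and sums over states. The sketch below is a minimal illustration with hypothetical weights; real weights would come from a preference-based measure such as the EQ-5D.

```python
def qalys(states):
    """Total QALYs: sum of (duration in years x utility weight) over health states."""
    return sum(years * weight for years, weight in states)

# Hypothetical profile: 2 years in full health, then 3 years in a state valued at 0.7.
print(qalys([(2, 1.0), (3, 0.7)]))  # 4.1
```

Capability-based outcome measures differ in what they value (what a person is able to do, rather than preferences over health states), but they can be used within the same evaluative framework.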
Alternative measures of well-being
As noted above, to a great extent, Nussbaum's Central Human Capabilities address issues of equality, political freedom, creativity and the right to the self, as do the various indices that are based on capabilities. It is evident that these measures are very subjective, but this fact is in the essence of defining quality of life according to Nussbaum and Sen. Nussbaum refers to Sen in saying that, although measures of well-being may be problematic in comparative, quantifiable models due to their subjective matter, the protection of and commitment to human development are matters too important to be left on the sidelines of economic progress. Well-being and quality of life are too important to be left without intentional focus towards political change.
Measures such as the HDI, GDI, GEM, GII, IHDI and the like are crucial in targeting issues of well-being and indicators of quality of life. Anand et al. (2009) can be summarized as demonstrating that it is possible to measure capabilities within the conventions applied to standard household survey design, contrary to earlier doubts about the ability to operationalise the capabilities approach.
Contrast with other approaches
Utility-based or subjective approaches
Much of conventional welfare economics today is grounded in a utilitarian approach according to the classical Benthamite form of utilitarianism, in which the most desirable action is the one that best increases people's psychological happiness or satisfaction. The "utility" of a person stands for some measure of his or her pleasure or happiness. Merits associated with this approach to measuring well-being are that it recognizes the importance of taking account of the results of social arrangements when judging them, and of paying attention to the well-being of the people involved in those arrangements. Amartya Sen, however, argues this view has three main deficiencies: distributional indifference; neglect of rights, freedoms, and other non-utility concerns; and adaptation and mental conditioning.
Distributional indifference refers to the utilitarian's indifference between different distributions of utility, so long as the sum total is the same (note that the utilitarian is indifferent to the distribution of happiness, not of income or wealth; assuming diminishing marginal utility, the utilitarian approach would generally prefer, all else being equal, more materially equal societies). Sen argues that we may "want to pay attention not just to "aggregate" magnitudes, but also to extents of inequalities in happiness". Sen also argues that while the utilitarian approach attaches no intrinsic value to claims of rights and freedoms, some people value these things independently of their contribution to utility.
Lastly, Amartya Sen makes the argument that the utilitarian view of individual well-being can be easily swayed by mental conditioning and by people's happiness adapting to oppressive situations. The utility calculus can essentially be unfair to those who have come to terms with their deprivation as a means for survival, adjusting their desires and expectations. The capability approach, on the other hand, does not fall victim to these same criticisms because it acknowledges inequalities by focusing on equalizing people's capabilities rather than their happiness. It stresses the intrinsic importance of rights and freedoms when evaluating well-being, and it avoids overlooking deprivation by focusing on capabilities and opportunities, not states of mind.
Resource-based approaches
Another common approach in conventional economics, in economic policy and judging development, has traditionally been to focus on income and resources. These sorts of approaches to development focus on increasing resources, such as assets, property rights, or basic needs. However, measuring resources is fundamentally different from measuring functionings, as in the case in which people lack the capability to use their resources in the ways they see fit. Arguably, the main difficulty in a resource- or income-based approach to well-being lies in personal heterogeneities, namely the diversity of human beings.
Different amounts of income are needed for different individuals to enjoy similar capabilities; for example, an individual with severe disabilities may require dramatically more income than an able-bodied person to ensure the fulfillment of basic capabilities. All sorts of differences, such as differences in age, gender, talents, etc., can give two people extremely divergent opportunities for quality of life, even when equipped with exactly the same commodities. Additionally, other contingent circumstances which affect what an individual can make of a given set of resources include environmental diversities (in the geographic sense), variations in social climate, differences in relational perspectives, and distribution within the family.
The capability approach, however, seeks to consider all such circumstances when evaluating people's actual capabilities. Furthermore, there are things people value other than increased resources. In some cases, maximizing resources may even be objectionable. As was recognized in the 1990 Human Development Report, the basic objective of development is to create an enabling environment for people to live long, healthy, and creative lives. This end is often lost in the immediate concern with the accumulation of commodities and financial wealth that are only a means to expansion of capabilities. Overall, though resources and income have a profound effect on what can or cannot be done, the capability approach recognizes that they are not the only things to be considered when judging well-being, switching the focus from a means to a good life to the freedom to achieve actual improvements in lives, which one has reason to value.
The capability approach to education
The capability approach has also impacted educational discourse. Rather than seeing the success of an education system in the measurable achievements of students, such as scores in examinations or assessments, educational success through a capabilities perspective can be seen through the capabilities that such an education enables. Through an education programme a student is able to acquire knowledge, skills, values and understanding; this enables a young person to think in new ways, to ‘be’, to develop agency in society, and to make decisions. These outcomes are not easily ‘measurable’ in the way examination results are, but can be seen as an important outcome of an educational programme.
A number of writers have explored what these education ‘capabilities’ might be. Terzi's list focuses on the minimum entitlement of education for students with disabilities; these include Literacy, Numeracy, Sociality and Participation, among others. Walker, working in higher education, offers Practical Reason, Emotional Resilience, Knowledge and Imagination. Hinchcliffe offers a set of capabilities for students of humanities subjects, including critical examination and judgement, narrative imagination, and recognition of and concern for others (citizenship in a globalised world).
Further exploration of the capability approach to education has sought to explore the role that subject disciplines play in the generation of subject specific capabilities, drawing on the ideas of Powerful Knowledge from Michael Young and the Sociology of Education. Geography as a school subject has explored these as ‘GeoCapabilities’.
See also
Demographic economics
Economic development
Ethics of care
Human Development and Capability Association
International Association for Feminist Economics
International development
Journal of Human Development and Capabilities
Important publications in development economics
Oxford Poverty and Human Development Initiative
Sustainable development
UN Human Development Index
Welfare economics
Women's education and development
References
External links
Human Development and Capability Association
Journal of Human Development
The Measurement of Human Capabilities
Oxford Poverty & Human Development Initiative (OPHI)
Development studies
Sociological theories
Welfare economics
Development economics
Green economy

A green economy is an economy that aims at reducing environmental risks and ecological scarcities, and that aims for sustainable development without degrading the environment. It is closely related to ecological economics, but has a more politically applied focus. The 2011 UNEP Green Economy Report argues "that to be green, an economy must not only be efficient, but also fair. Fairness implies recognizing global and country level equity dimensions, particularly in assuring a Just Transition to an economy that is low-carbon, resource efficient, and socially inclusive."
A feature distinguishing it from prior economic regimes is the direct valuation of natural capital and ecological services as having economic value (see The Economics of Ecosystems and Biodiversity and Bank of Natural Capital) and a full cost accounting regime in which costs externalized onto society via ecosystems are reliably traced back to, and accounted for as liabilities of, the entity that does the harm or neglects an asset.
Green sticker and ecolabel practices have emerged as consumer-facing indicators of friendliness to the environment and sustainable development. Many industries are starting to adopt these standards as a way to promote their greening practices in a globalizing economy. Also known as sustainability standards, these are special rules that guarantee that the products bought do not harm the environment or the people who make them. The number of these standards has grown recently, and they can now help build a new, greener economy. They focus on economic sectors like forestry, farming, mining or fishing, among others; concentrate on environmental factors like protecting water sources and biodiversity, or reducing greenhouse gas emissions; support social protections and workers’ rights; and home in on specific parts of production processes.
Green economists and economics
Green economics is loosely defined as any theory of economics by which an economy is considered to be a component of the ecosystem in which it resides (after Lynn Margulis). A holistic approach to the subject is typical, such that economic ideas are commingled with any number of other subjects, depending on the particular theorist. Proponents of feminism, postmodernism, the environmental movement, the peace movement, Green politics, green anarchism and the anti-globalization movement have used the term to describe very different ideas, all external to mainstream economics.
According to Büscher, the increasing liberalisation of politics since the 1990s has meant that biodiversity must 'legitimise itself' in economic terms. Many non-governmental organisations, governments, banks, companies and so forth have started to claim the right to define and defend biodiversity, in a distinctly neoliberal manner that subjects the concept's social, political, and ecological dimensions to their value as determined by capitalist markets.
Some economists view green economics as a branch or subfield of more established schools. For instance, it can be regarded as classical economics where the traditional factor of land is generalized to natural capital, having some attributes in common with labor and physical capital (since natural capital assets like rivers directly substitute for human-made ones such as canals). Or, it can be viewed as Marxist economics with nature represented as a form of Lumpenproletariat, an exploited base of non-human workers providing surplus value to the human economy; or as a branch of neoclassical economics in which the price of life for developing vs. developed nations is held steady at a ratio reflecting a balance of power, and that of non-human life is very low.
An increasing commitment by the UNEP (and national governments such as the UK) to the ideas of natural capital and full cost accounting under the banner 'green economy' could blur distinctions between the schools and redefine them all as variations of "green economics". As of 2010, the Bretton Woods institutions responsible for global monetary policy (notably the World Bank and the International Monetary Fund, via its "Green Fund" initiative) have stated a clear intention to move towards biodiversity valuation and a more official and universal biodiversity finance.
The UNEP 2011 Green Economy Report states that, "based on existing studies, the annual financing demand to green the global economy was estimated to be in the range US$1.05 to US$2.59 trillion. To place this demand in perspective, it is about one-tenth of total global investment per year, as measured by global Gross Capital Formation."
At COP26, the European Investment Bank announced a set of just transition common principles agreed upon with multilateral development banks, which also align with the Paris Agreement. The principles refer to focusing financing on the transition to net zero carbon economies, while keeping socioeconomic effects in mind, along with policy engagement and plans for inclusion and gender equality, all aiming to deliver long-term economic transformation.
The African Development Bank, Asian Development Bank, Islamic Development Bank, Council of Europe Development Bank, Asian Infrastructure Investment Bank, European Bank for Reconstruction and Development, New Development Bank, and Inter-American Development Bank are among the multilateral development banks that have vowed to uphold the principles of climate change mitigation and a Just Transition. The World Bank Group also contributed.
Definition
Karl Burkart defined a green economy as based on six main sectors:
Renewable energy
Green buildings
Sustainable transport
Water management
Waste management
Land management
The International Chamber of Commerce (ICC), representing global business, defines the green economy as "an economy in which economic growth and environmental responsibility work together in a mutually reinforcing fashion while supporting progress on social development".
In 2012, the ICC published the Green Economy Roadmap, containing contributions from international experts consulted bi-yearly. The Roadmap represents a comprehensive and multidisciplinary effort to clarify and frame the concept of "green economy". It highlights the role of business in bringing solutions to global challenges. It sets out the following 10 conditions which relate to business/intra-industry and collaborative action for a transition towards a green economy:
Open and competitive markets
Metrics, accounting, and reporting
Finance and investment
Awareness
Life cycle approach
Resource efficiency and decoupling
Employment
Education and skills
Governance and partnership
Integrated policy and decision-making
Finance and investing
Green growth
Approximately 57% of businesses responding to a survey are investing in energy efficiency, 64% in reducing and recycling trash, and 32% in new, less polluting industries and technologies. Roughly 40% of businesses made investments in energy efficiency in 2021.
Ecological measurements
Measuring economic output and progress is done through the use of economic index indicators. Green indices emerged from the need to measure human ecological impact, the efficiency of sectors like transport, energy, buildings and tourism, and the investment flows targeted to areas like renewable energy and cleantech innovation.
2016 - 2022 Green Score City Index is an ongoing study measuring the impact human activity has on nature.
2010 - 2018 Global Green Economy Index™ (GGEI), published by consultancy Dual Citizen LLC, is in its 6th edition. It measures the green economic performance of 130 countries, and perceptions of it, along four main dimensions: leadership & climate change, efficiency sectors, markets & investment, and the environment.
2009 - 2013 Circles of Sustainability project scored 5 cities in 5 separate countries.
2009 - 2012 Green City Index, a global study commissioned by Siemens.
Ecological footprint measurements are a way to gauge anthropogenic impact and are another standard used by municipal governments.
Green energy issues
Green economies require a transition to green energy generation based on renewable energy to replace fossil fuels, as well as energy conservation and efficient energy use. Renewables, like solar energy and wind energy, may eliminate the use of fossil fuels for electricity by 2035 and replace fossil fuel usage altogether by 2050.
The market failure to respond to environmental protection and climate protection needs can be attributed to high external costs and high initial costs for research, development, and marketing of green energy sources and green products. The green economy may need government subsidies as market incentives to motivate firms to invest in and produce green products and services. The German Renewable Energy Act, legislation in many other member states of the European Union, and the American Recovery and Reinvestment Act of 2009 all provide such market incentives. However, other experts argue that green strategies can be highly profitable for corporations that understand the business case for sustainability and can market green products and services beyond the traditional green consumer.
In the United States, it seemed as though the nuclear industry was coming to an end by the mid-1990s; until 2013, no new nuclear power facilities had been built since 1977. One reason was the economic reliance on fossil fuel-based energy sources; additionally, there was public fear of nuclear energy due to the Three Mile Island accident and the Chernobyl disaster. Under the Bush administration, the 2005 Energy Bill granted the nuclear industry around 10 million dollars to encourage research and development efforts. With the increasing threat of climate change, nuclear energy has been highlighted as an option for decarbonizing the atmosphere and reversing climate change. Nuclear power forces environmentalists and citizens around the world to weigh the pros and cons of using nuclear power as a renewable energy source. The controversial nature of nuclear power has the potential to split the green economy movement into two branches: anti-nuclear and pro-nuclear.
According to a European climate survey, 63% of EU residents, 59% of Britons, 50% of Americans and 60% of Chinese respondents are in favor of switching to renewable energy. As of 2021, 18% of Americans are in favor of natural gas as a source of energy. For Britons and EU citizens, nuclear energy is a more popular energy alternative.
After the COVID-19 pandemic, Eastern European and Central Asian businesses lag behind their Southern European counterparts in the average quality of their green management practices, notably in setting specific energy consumption and emissions objectives.
External variables, such as consumer pressure and energy taxes, are more relevant than firm-level features, such as size and age, in influencing the quality of green management practices. Firms with fewer financial limitations and stronger green management practices are more likely to invest in a wider variety of green initiatives. Energy efficiency investments are good for both the bottom line and the environment.
Among businesses that took part in a survey in 2022, roughly 30% expect the shift to greener energy and the adoption of more climate regulations to affect them positively, mostly through new business prospects, while roughly 30% expect a negative impact. A little over 40% of the same businesses do not anticipate that the transition to greener alternatives will alter their operations.
Criticism
A number of organisations and individuals have criticised aspects of the 'Green Economy', particularly the mainstream conceptions of it based on using price mechanisms to protect nature, arguing that this will extend corporate control into new areas, from forestry to water. Venezuelan professor Edgardo Lander says that the UNEP's report, Towards a Green Economy, while well-intentioned, "ignores the fact that the capacity of existing political systems to establish regulations and restrictions to the free operation of the markets – even when a large majority of the population call for them – is seriously limited by the political and financial power of the corporations."
Ulrich Hoffmann, in a paper for UNCTAD, also says that the focus on Green Economy and "green growth" in particular, "based on an evolutionary (and often reductionist) approach will not be sufficient to cope with the complexities of climate change" and "may rather give much false hope and excuses to do nothing really fundamental that can bring about a U-turn of global greenhouse gas emissions". Clive Spash, an ecological economist, has criticised the use of economic growth to address environmental losses, and argued that the Green Economy, as advocated by the UN, is not a new approach at all and is actually a diversion from the real drivers of environmental crisis. He has also criticised the UN's project on the economics of ecosystems and biodiversity (TEEB), and the basis for valuing ecosystems services in monetary terms.
See also
References
External links
Green Growth Knowledge Platform
Green Economy Coalition
UNEP – Green Economy
Schools of economic thought
Industrial ecology
Natural resources
Resource economics
Economy by field
Deep ecology

Deep ecology is an environmental philosophy that promotes the inherent worth of all living beings regardless of their instrumental utility to human needs, and argues that modern human societies should be restructured in accordance with such ideas.
Deep ecologists argue that the natural world is a complex of relationships in which the existence of organisms is dependent on the existence of others within ecosystems. They argue that non-vital human interference with or destruction of the natural world poses a threat not only to humans, but to all organisms that make up the natural order.
Deep ecology's core principle is the belief that the living environment as a whole should be respected and regarded as having certain basic moral and legal rights to live and flourish, independent of its instrumental benefits for human use. Deep ecology is often framed in terms of the idea of a much broader sociality: it recognizes diverse communities of life on Earth that are composed not only through biotic factors but also, where applicable, through ethical relations, that is, the valuing of other beings as more than just resources. It is described as "deep" because it is regarded as looking more deeply into the reality of humanity's relationship with the natural world, arriving at philosophically more profound conclusions than those of mainstream environmentalism. The movement does not subscribe to anthropocentric environmentalism (which is concerned with conservation of the environment only for exploitation by and for human purposes), since deep ecology is grounded in a different set of philosophical assumptions. Deep ecology takes a holistic view of the world humans live in and seeks to apply to life the understanding that the separate parts of the ecosystem (including humans) function as a whole. The philosophy addresses core principles of different environmental and green movements and advocates a system of environmental ethics advocating wilderness preservation, non-coercive policies encouraging human population decline, and simple living.
Origins and history
In his original 1973 deep ecology paper, Arne Næss stated that he was inspired by ecologists who were studying the ecosystems throughout the world. Næss also made clear that he felt the real motivation to 'free nature' was spiritual and intuitive. 'Your motivation comes from your total view or your philosophical, religious opinions,' he said, 'so that you feel, when you are working in favour of free nature, you are working for something within your self, that ... demands changes. So you are motivated from what I call "deeper premises".'
In a 2014 essay, environmentalist George Sessions identified three people active in the 1960s whom he considered foundational to the movement: author and conservationist Rachel Carson, environmentalist David Brower, and biologist Paul R. Ehrlich. Sessions considers the publication of Carson's 1962 seminal book Silent Spring as the beginning of the contemporary deep ecology movement. Næss also considered Carson the originator of the movement, stating "Eureka, I have found it" upon encountering her writings.
Another development of the 1960s that has been proposed as foundational to the movement is the set of images of the Earth floating in space taken by the Apollo astronauts.
Principles
Deep ecology proposes an embracing of ecological ideas and environmental ethics (that is, proposals about how humans should relate to nature). It is also a social movement based on a holistic vision of the world. Deep ecologists hold that the survival of any part is dependent upon the well-being of the whole, and criticise the narrative of human supremacy, which they say has not been a feature of most cultures throughout human evolution. Deep ecology presents an eco-centric (Earth-centred) view, rather than the anthropocentric (human-centred) view, developed in its most recent form by philosophers of the Enlightenment, such as Newton, Bacon, and Descartes. Proponents of deep ecology oppose the narrative that man is separate from nature, is in charge of nature, or is the steward of nature, or that nature exists as a resource to be freely exploited. They cite the fact that indigenous peoples under-exploited their environment and retained a sustainable society for thousands of years, as evidence that human societies are not necessarily destructive by nature. They believe that the current materialist paradigm must be replaced; as Næss pointed out, this involves more than merely getting rid of capitalism and the concept of economic growth, or 'progress', that is critically endangering the biosphere. 'We need changes in society such that reason and emotion support each other,' he said, '... not only a change in a technological and economic system, but a change that touches all the fundamental aspects of industrial societies. This is what I mean by a change of "system".'
Deep ecologists believe that the damage to natural systems sustained since the industrial revolution now threatens social collapse and possible extinction of humans, and are striving to bring about the kind of ideological, economic and technological changes Næss mentioned. Deep ecology claims that ecosystems can absorb damage only within certain parameters, and contends that civilization endangers the biodiversity of the Earth. Deep ecologists have suggested that the human population must be substantially reduced, but advocate a gradual decrease in population rather than any apocalyptic solution. In a 1982 interview, Arne Næss commented that a global population of 100 million (0.1 billion) would be desirable. However, others have argued that a population of 1–2 billion would be compatible with the deep ecological worldview. Deep ecology eschews traditional left wing-right wing politics, but is viewed as radical ('Deep Green') in its opposition to capitalism and its advocacy of an ecological paradigm. Unlike conservation, deep ecology does not advocate the controlled preservation of the landbase, but rather 'non-interference' with natural diversity except for vital needs. In citing 'humans' as being responsible for excessive environmental destruction, deep ecologists actually refer to 'humans within civilization, especially industrial civilization', accepting the fact that the vast majority of humans who have ever lived did not live in environmentally destructive societies – the excessive damage to the biosphere has been sustained mostly over the past hundred years.
In 1985, Bill Devall and George Sessions summed up their understanding of the concept of deep ecology with the following eight points:
The well-being of human and nonhuman life on earth is of intrinsic value irrespective of its value to humans.
The diversity of life-forms is part of this value.
Humans have no right to reduce this diversity except to satisfy vital human needs.
The flourishing of human and nonhuman life is compatible with a substantial decrease in human population.
Humans have interfered with nature to a critical level already, and interference is worsening.
Policies must be changed, affecting current economic, technological and ideological structures.
This ideological change should focus on an appreciation of the quality of life rather than adhering to an increasingly high standard of living.
All those who agree with the above tenets have an obligation to implement them.
Development
The phrase "Deep Ecology" first appeared in a 1973 article by the Norwegian philosopher Arne Næss. Næss referred to "biospherical egalitarianism-in principle", which he explained was "an intuitively clear and obvious value axiom. Its restriction to humans is ... anthropocentrism with detrimental effects upon the life quality of humans themselves... The attempt to ignore our dependence and to establish a master-slave role has contributed to the alienation of man from himself." Næss added that from a deep ecology point of view "the right of all forms [of life] to live is a universal right which cannot be quantified. No single species of living being has more of this particular right to live and unfold than any other species".
Sources
Deep ecology is an eco-philosophy derived from intuitive ethical principles. It does not claim to be a science, although it is based generally on the new physics, which, in the early 20th century, undermined the reductionist approach and the notion of objectivity, demonstrating that humans are an integral part of nature; this is a concept long held by primal peoples. Devall and Sessions, however, note that the work of many ecologists has encouraged the adoption of an "ecological consciousness", quoting environmentalist Aldo Leopold's view that such a consciousness "changes the role of Homo sapiens from conqueror of the land community to plain member and citizen of it." Though some detractors assert that deep ecology is based on the discredited idea of the "balance of nature", deep ecologists have made no such claim. They do not dispute the theory that human cultures can have a benevolent effect on the landbase, only the idea of the control of nature, or human supremacy, which is the central pillar of the industrial paradigm. The tenets of deep ecology state that humans have no right to interfere with natural diversity except for vital needs: the distinction between "vital" and "other" needs cannot be drawn precisely. Deep ecologists reject any mechanical or computer model of nature, and see the Earth as a living organism, which should be treated and understood accordingly.
Aspects
Environmental education
In 2010, Richard Kahn promoted the movement of ecopedagogy, proposing the use of radical environmental activism as an educational principle to teach students to support "earth democracy", which promotes the rights of animals, plants, fungi, algae and bacteria. The biologist Dr. Stephan Harding has developed the concept of "holistic science", based on principles of ecology and deep ecology. In contrast with materialist, reductionist science, holistic science studies natural systems as a living whole.
Spirituality
Deep ecologist and physicist Fritjof Capra has said that '[Deep] ecology and spirituality are fundamentally connected because deep ecological awareness is, ultimately, spiritual awareness.'
Arne Næss commented that he was inspired by the work of Spinoza and Gandhi, both of whom based their values on grounds of religious feeling and experience. Though he regarded deep ecology as a spiritual philosophy, he explained that he was not a 'believer' in the sense of following any particular articles of religious dogma. '... it is quite correct to say that I have sometimes been called religious or spiritual,' he said, 'because I believe that living creatures have an intrinsic worth of their own, and also that there are fundamental intuitions about what is unjust.'
Næss criticised the Judeo-Christian tradition, stating the Bible's "arrogance of stewardship consists in the idea of superiority which underlies the thought that we exist to watch over nature like a highly respected middleman between the Creator and Creation". Næss further criticizes the reformation's view of creation as property to be put into maximum productive use.
However, Næss added that while he felt the word 'God' was 'too loaded with preconceived ideas', he accepted Spinoza's idea of God as 'immanent' - 'a single creative force'... 'constantly creating the world by being the creative force in Nature'. He did not, he said, 'exclude the possibility that Christian theological principles are true in a certain sense ...'.
Joanna Macy, in "the Work that Reconnects", integrates Buddhist philosophy with a deep ecological viewpoint.
Criticisms
Eurocentric bias
Guha and Martínez Alier critique four defining characteristics of deep ecology. First, because deep ecologists believe that environmental movements must shift from an anthropocentric to an ecocentric approach, they fail to recognize the two most fundamental ecological crises facing the world: overconsumption in the global north and increasing militarization. Second, deep ecology's emphasis on wilderness provides impetus for the imperialist yearning of the West. Third, deep ecology appropriates Eastern traditions, characterizes Eastern spiritual beliefs as monolithic, and denies agency to Eastern peoples. And fourth, because deep ecology equates environmental protection with wilderness preservation, its radical elements are confined within the American wilderness preservationist movement.
While deep ecologists accept that overconsumption and militarization are major issues, they point out that the impulse to save wilderness is intuitive and has no connection with imperialism. This claim by Guha and Martínez Alier, in particular, closely resembles statements made, for instance, by Brazilian president Jair Bolsonaro declaring Brazil's right to cut down the Amazon Rainforest. 'The Amazon belongs to Brazil and European countries can mind their own business because they have already destroyed their own environment.' The inference is clearly that, since European countries have already destroyed their environment, Brazil also has the right to do so: deep ecological values should not apply to them, as they have not yet had their 'turn' at maximum economic growth.
With regard to 'appropriating spiritual beliefs', Arne Næss pointed out that the essence of deep ecology is the belief that 'all living creatures have their own intrinsic value, a value irrespective of the use they might have for mankind.' Næss stated that supporters of the deep ecology movement came from various different religious and spiritual traditions, and were united in this one belief, albeit basing it on various different values.
Knowledge of nonhuman interests
Animal rights activists state that for an entity to require intrinsic rights, it must have interests. Deep ecologists are criticised for insisting they can somehow understand the thoughts and interests of non-humans such as plants or protists, and for claiming that this proves non-human lifeforms have intelligence. For example, a single-celled bacterium might move towards a certain chemical stimulus; while such movement might be rationally explained, a deep ecologist might claim that this explanation is invalid because, on his deeper understanding of the situation, the intention formulated by the bacterium was informed by its deep desire to succeed in life. One criticism of this belief is that the interests a deep ecologist attributes to non-human organisms, such as survival, reproduction, growth, and prosperity, are really human interests. Deep ecologists refute this criticism by pointing out, first, that 'survival', 'reproduction', 'growth', and 'prosperity' (flourishing) are accepted attributes of all living organisms: 'to succeed in life', depending on how one defines 'success', could certainly be construed as the aim of all life. They also point to the plethora of recent work on mimesis. Thomas Nagel, in "What Is It Like to Be a Bat?" (first published 1974), suggests, "[B]lind people are able to detect objects near them by a form of sonar, using vocal clicks or taps of a cane. Perhaps if one knew what that was like, one could by extension imagine roughly what it was like to possess the much more refined sonar of a bat." Others such as David Abram have said that consciousness is not specific to humans, but a property of the totality of the universe of which humans are a manifestation.
Deep versus shallow ecology
When Arne Næss coined the term deep ecology, he compared it favourably with shallow ecology which he criticized for its utilitarian and anthropocentric attitude to nature and for its materialist and consumer-oriented outlook, describing its "central objective" as "the health and affluence of people in the developed countries." William D. Grey believes that developing a non-anthropocentric set of values is "a hopeless quest". He seeks an improved "shallow" view. Deep ecologists point out, however, that "shallow ecology" (resource management conservation) is counter-productive, since it serves mainly to support capitalism, the means through which industrial civilization destroys the biosphere. The eco-centric view thus only becomes 'hopeless' within the structures and ideology of civilization. Outside it, however, a non-anthropocentric world view has characterised most 'primal' cultures since time immemorial, and, in fact, obtained in many indigenous groups until the industrial revolution and after. Some cultures still hold this view today. As such, the eco-centric narrative is not alien to humans, and may be seen as the normative ethos in human evolution. Grey's view represents the reformist discourse that deep ecology has rejected from the beginning.
Misanthropy
Social ecologist Murray Bookchin interpreted deep ecology as being misanthropic, due in part to the characterization of humanity by David Foreman, of the environmental advocacy group Earth First!, as a "pathological infestation on the Earth". Bookchin mentions that some, like Foreman, defend misanthropic measures such as organising the rapid genocide of most of humanity. In response, deep ecologists have argued that Foreman's statement clashes with the core narrative of deep ecology, the first tenet of which stresses the intrinsic value of both nonhuman and human life. Arne Næss suggested a slow decrease in human population over an extended period, not genocide.
Bookchin's second major criticism is that deep ecology fails to link environmental crises with authoritarianism and hierarchy. He suggests that deep ecologists fail to recognise the potential for humans to solve environmental issues. In response, deep ecologists have argued that industrial civilization, with its class hierarchy, is the sole source of the ecological crisis. The eco-centric worldview precludes any acceptance of social class or authority based on social status. Deep ecologists believe that since ecological problems are created by industrial civilization the only solution is the deconstruction of the culture itself.
Scientism
Daniel Botkin concludes that although deep ecology challenges the assumptions of western philosophy, and should be taken seriously, it derives from a misunderstanding of scientific information and conclusions based on this misunderstanding, which are in turn used as justification for its ideology. It begins with an ideology and is political and social in focus. Botkin has also criticized Næss's assertion that all species are morally equal and his disparaging description of pioneering species. Deep ecologists counter this criticism by asserting that a concern with political and social values is primary, since the destruction of natural diversity stems directly from the social structure of civilization, and cannot be halted by reforms within the system. They also cite the work of environmentalists and activists such as Rachel Carson, Aldo Leopold, John Livingston, and others as being influential, and are occasionally critical of the way the science of ecology has been misused.
Utopianism
Eco-critic Jonathan Bate has called deep ecologists 'utopians', pointing out that 'utopia' actually means 'nowhere' and quoting Rousseau's claim that "the state of nature no longer exists and perhaps never did and probably never will." Bate asks how a planet crowded with cities could ever return to such a 'state of nature'.
Bate's criticism rests partly on the idea that industrial civilization and the technics it depends on are themselves 'natural' because they are made by humans. Deep ecologists have indicated that the concept of technics being 'natural' and therefore 'morally neutral' is a delusion of industrial civilization: there can be nothing 'neutral' about nuclear weapons, for instance, whose sole purpose is large-scale destruction. Historian Lewis Mumford divided technology into 'democratic' and 'authoritarian' technics ('technics' includes both technical and cultural aspects of technology). While 'democratic' technics, available to small communities, may be neutral, 'authoritarian' technics, available only to large-scale, hierarchical, authoritarian societies, are not. Such technics are unsustainable, and need to be abandoned, as supported by point #6 of the deep ecology platform.
With reference to the degree to which landscapes are natural, Peter Wohlleben draws a temporal line (roughly equivalent to the development of Mumford's 'authoritarian' technics) at the agricultural revolution, about 8000 BC, when "selective farming practices began to change species." This is also the time when the landscape began to be intentionally transformed into an ecosystem completely devoted to meeting human needs.
Concerning Hobbes's pronouncement that life in the state of nature is 'solitary, poor, nasty, brutish, and short', deep ecologists and others have commented that it is false and was made simply to legitimize the idea of a putative 'social contract' by which some humans are subordinate to others. There is no evidence that members of primal societies, employing 'democratic technics', lived shorter lives than those in civilization (at least before the 20th century); their lives were the opposite of solitary, since they lived in close-knit communities; and while 'poverty' is a social relation non-existent in sharing cultures, 'ignorant' and 'brutish' both equate to the term 'savage' used by colonials of primal peoples, referring to the absence of authoritarian technics in their cultures. Justice, political liberty and altruism are characteristic of egalitarian primal societies rather than civilization, which is defined by class hierarchies and is therefore by definition unjust, immoral, and lacking in altruism.
Links with other philosophies
Peter Singer critiques anthropocentrism and advocates for animals to be given rights. However, Singer has disagreed with deep ecology's belief in the intrinsic value of nature separate from questions of suffering. Zimmerman groups deep ecology with feminism and civil rights movements. Nelson contrasts it with ecofeminism. The links with animal rights are perhaps the strongest, as "proponents of such ideas argue that 'all life has intrinsic value'".
David Foreman, the co-founder of the radical direct-action movement Earth First!, has said he is an advocate for deep ecology. At one point Arne Næss also engaged in direct action when he chained himself to rocks in front of Mardalsfossen, a waterfall in a Norwegian fjord, in a successful protest against the building of a dam.
Some have linked the movement to green anarchism as evidenced in a compilation of essays titled Deep Ecology & Anarchism.
Further, the movement is related to cosmopolitan localism that has been proposed as a structural framework to organize production by prioritising socio-ecological well-being over corporate profits, over-production and excess consumption.
The object-oriented ontologist Timothy Morton has explored similar ideas in the books Ecology without Nature: Rethinking Environmental Aesthetics (2009) and Dark Ecology: For a Logic of Future Coexistence (2016).
See also
Biocentrism (ethics)
Biophilia hypothesis
Biotic ethics
Coupled human-environment system
Earth liberation
Ecocentrism
Ecofascism
Ecological civilization
Ecosocialism
Ecosophy
Gaianism
Intrinsic value (animal ethics)
Negative population growth
OpenAirPhilosophy
Voluntary human extinction movement
Hierarchy theory
Scale (analytical tool)
References
Additional sources
Bender, F. L. 2003. The Culture of Extinction: Toward a Philosophy of Deep Ecology Amherst, New York: Humanity Books.
Katz, E., A. Light, et al. 2000. Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology Cambridge, Mass.: MIT Press.
LaChapelle, D. 1992. Sacred Land, Sacred Sex: Rapture of the Deep Durango: Kivakí Press.
Passmore, J. 1974. Man's Responsibility for Nature London: Duckworth.
Drengson, Alan. "The Deep Ecology Movement." The Green Majority, CIUT 89.5 FM, University of Toronto, 6 June 2008.
Further reading
Glasser, Harold (ed.) 2005. The Selected Works of Arne Næss, Volumes 1–10. Springer.
Keulartz, Jozef 1998. Struggle for Nature: A Critique of Radical Ecology. London: Routledge.
Linkola, Pentti 2011. Can Life Prevail? UK: Arktos Media, 2nd revised ed.
Fellenz, Marc R. "9. Ecophilosophy: Deep Ecology and Ecofeminism." The Moral Menagerie: Philosophy and Animal Rights. Champaign: University of Illinois Press, 2007. p. 158.
Tobias, Michael (ed.) 1988 (1984). Deep Ecology. Avant Books.
Environmental ethics
Anti-capitalism
Political ecology
Arne Næss
Indigenous science

Indigenous science is the application and intersection of Indigenous knowledge and science. In ecology, this is sometimes termed traditional ecological knowledge. Indigenous science involves the knowledge systems and practices of Indigenous peoples, which are rooted in their cultural traditions and relationships to their indigenous context. It follows many of the same methods as Western science, including (but not limited to) observation, prediction, interpretation, and questioning. The knowledge and information held by Indigenous people were often devalued by white European and American scientists and explorers. However, there has been a growing recognition of the benefits of incorporating Indigenous perspectives and knowledge, particularly in fields such as ecology and environmental management.
Traditional and scientific
Indigenous knowledge and experiences are often passed down orally from generation to generation. Indigenous knowledge has an empirical basis and has traditionally been used to predict and understand the world. Such knowledge has informed studies of human management of natural processes.
In ecology
Indigenous science is related to the term "traditional ecological knowledge" (TEK), which is a specific category of Indigenous science.
The study of ecology focuses on the relationships and patterns between organisms in their environment. TEK is place-based, so the information and understanding are context-dependent. One example of such work is ethnobiology which employs Indigenous knowledge and botany to identify and classify species. TEK has been used to provide perspectives on matters such as how a declining fish population affects nature, the food web, and coastal ecosystems.
Indigenous science has helped to address ecological challenges including the restoration of salmon, management of seabird harvests, outbreaks of hantavirus, and addressing wildfires.
Place based sciences
Indigenous science may offer a different perspective from what is traditionally thought of as "science". In particular, Indigenous science is tied to territory, cultural practices, and experiences/teachings in explicit ways that are often absent in normal scientific discourse.
Collaboration between Indigenous communities and research scientists has been described as a kind of "indigenizing" of the scientific method with Indigenous-led projects and community work enacted as a starting point for the collaborations.
Climatology studies have made use of traditional knowledge (Qaujimajatuqangit) among the Inuit when studying long-term changes in sea ice.
As well as in ecology, Indigenous knowledge has been used in biological areas including animal behaviour, evolution, physiology, life history, morphology, wildlife conservation, wildlife health, and taxonomy.
Indigenous technologies
Technology can be defined as "the application of scientific knowledge for practical purposes, especially in industry". Examples of Indigenous technologies that were developed for specific uses based on location and culture include clam gardens, fish weirs, and culturally modified trees (CMTs). Indigenous technologies span a wide range of subjects, such as agriculture and mariculture, fishing, forest management and resource exploitation, and atmospheric and land-based management techniques. Chaco Canyon is an example of land-based Indigenous technology that shows keen insight into scientific and mathematical principles.
Technology by area
The American Southeast
Agriculture in the southeast was based on a mixed-crop, shifting cultivation system growing corn, beans, and squash together in the same mounds; an inter-cropping system known as the three sisters. In this horticultural technique, each plant offers something to the others, thus improving the crop yield. Corn is a high-caloric food; the beans provide nitrogen via the nitrogen-fixing bacteria that live on their roots; and the squash provides ground cover, suppressing weeds and keeping the soil moist. Other crops incorporated in the inter-cropping system included sunflowers or grains like barley or maygrass.
Notable scholars
Nancy C. Maryboy
Karlie Noon
Lydia Jennings
Ian Saem Majnep
Robin Wall Kimmerer
References
Oral tradition
History of science
Traditional knowledge
Indigenous culture
Steps to an Ecology of Mind

Steps to an Ecology of Mind is a collection of Gregory Bateson's short works over his long and varied career. Subject matter includes essays on anthropology, cybernetics, psychiatry, and epistemology. It was originally published by Ballantine Books in 1972 (republished 2000 with foreword by Mary Catherine Bateson).
Part I: Metalogues
The book begins with a series of metalogues, which take the form of conversations with his daughter Mary Catherine Bateson. The metalogues are mostly thought exercises with titles such as "What is an Instinct" and "How Much Do You Know." In the metalogues, the playful dialectic structure itself is closely related to the subject matter of the piece.
DEFINITION: A metalogue is a conversation about some problematic subject. This conversation should be such that not only do the participants discuss the problem but the structure of the conversation as a whole is also relevant to the same subject. Only some of the conversations here presented achieve this double format.
Notably, the history of evolutionary theory is inevitably a metalogue between man and nature, in which the creation and interaction of ideas must necessarily exemplify evolutionary process.
Why Do Things Get in a Muddle? (1948, previously unpublished)
Why Do Frenchmen? (1951, Impulse; 1953, ETC: A Review of General Semantics, Vol. X)
About Games and Being Serious (1953, ETC: A Review of General Semantics, Vol. X)
How Much Do You Know? (1953, ETC: A Review of General Semantics, Vol. X)
Why Do Things Have Outlines? (1953, ETC: A Review of General Semantics, Vol. XI)
Why a Swan? (1954, Impulse)
What Is an Instinct? (1969, Sebeok, Approaches to Animal Communication)
Part II: Form and Pattern in Anthropology
Part II is a collection of anthropological writings, many of which were written while he was married to Margaret Mead.
Culture Contact and Schismogenesis (1935, Man, Article 199, Vol. XXXV)
Experiments in Thinking About Observed Ethnological Material (1940, Seventh Conference on Methods in Philosophy and the Sciences; 1941, Philosophy of Science, Vol. 8, No. 1)
Morale and National Character (1942, Civilian Morale, Watson)
Bali: The Value System of a Steady State (1949, Social Structure: Studies Presented to A.R. Radcliffe-Brown, Fortes)
Style, Grace, and Information in Primitive Art (1967, A Study of Primitive Art, Forge)
Part III: Form and Pathology in Relationship
Part III is devoted to the theme of "Form and Pathology in Relationships." His essay on alcoholism examines the alcoholic state of mind, and the methodology of Alcoholics Anonymous within the framework of the then-nascent field of cybernetics.
Social Planning and the Concept of Deutero-Learning (a comment on Margaret Mead's article "The Comparative Study of Culture and the Purposive Cultivation of Democratic Values," 1942, Science, Philosophy and Religion, Second Symposium)
A Theory of Play and Fantasy (1954, A.P.A. Regional Research Conference in Mexico City, March 11; 1955, A.P.A. Psychiatric Research Reports)
Epidemiology of a Schizophrenia (edited version of a talk, "How the Deviant Sees His Society," from 1955, at a conference on "The Epidemiology of Mental Health," Brighton, Utah)
Toward a Theory of Schizophrenia (1956, Behavioral Science, Vol. I, No. 4)
The Group Dynamics of Schizophrenia (1960)
Minimal Requirements for a Theory of Schizophrenia (1959)
Double Bind, 1969 (1969)
The Logical Categories of Learning and Communication (1968)
The Cybernetics of "Self": A Theory of Alcoholism (1971)
Part IV: Biology and Evolution
On Empty-Headedness Among Biologists and State Boards of Education (in BioScience, Vol. 20, 1970)
The Role of Somatic Change in Evolution (in the journal Evolution, Vol. 17, 1963)
Problems in Cetacean and Other Mammalian Communication (appeared as Chapter 25, pp. 569–799, in Whales, Dolphins and Porpoises, edited by Kenneth S. Norris, University of California Press, 1966)
A Re-examination of "Bateson's Rule" (accepted for publication in the Journal of Genetics)
Part V: Epistemology and Ecology
Cybernetic Explanation (from the American Behavioral Scientist, Vol. 10, No. 8, April 1967, pp. 29–32)
Redundancy and Coding (appeared as Chapter 22 in Animal Communication: Techniques of Study and Results of Research, edited by Thomas A. Sebeok, 1968, Indiana University Press)
Conscious Purpose Versus Nature (this lecture was given in August, 1968, to the London Conference on the Dialectics of Liberation, appearing in a book of the same name, Penguin Books)
Effects of Conscious Purpose on Human Adaptation (prepared as Bateson's position paper for the Wenner-Gren Foundation Conference on "Effects of Conscious Purpose on Human Adaptation"; Bateson chaired the conference, held in Burg Wartenstein, Austria, July 17–24, 1968)
Form, Substance, and Difference (the Nineteenth Annual Korzybski Memorial Lecture, January 9, 1970, under the auspices of the Institute of General Semantics; appeared in the General Semantics Bulletin, No. 37, 1970)
Part VI: Crisis in the Ecology of Mind
From Versailles to Cybernetics (previously unpublished; this lecture was given 21 April 1966 to the "Two Worlds Symposium" at Sacramento State College, now CSU Sacramento)
Pathologies of Epistemology (given at the Second Conference on Mental Health in Asia and the Pacific, 1969, at the East–West Center, Hawaii, appearing in the report of that conference)
The Roots of Ecological Crisis (testimony on behalf of the University of Hawaii Committee on Ecology and Man, presented in March 1970)
Ecology and Flexibility in Urban Civilization (written for a conference convened by Bateson in October 1970 on "Restructuring the Ecology of a Great City" and subsequently edited)
See also
Double bind
Information ecology
Philosophy of mind
Social sustainability
Systems philosophy
Systems theory
Notes and references
1972 books
Anthropology books
Cognitive science literature
Systems theory books
University of Chicago Press books
Phase-out of fossil fuel vehicles

A phase-out of fossil fuel vehicles is a proposed ban or discouragement (for example via taxes) of the sale of new fossil-fuel powered vehicles or the use of existing ones, together with encouragement of other forms of transportation. Vehicles that are powered by fossil fuels, such as gasoline (petrol), diesel, kerosene, and fuel oil, are set to be phased out by a number of countries. It is one of the three most important parts of the general fossil fuel phase-out process, the others being the phase-out of fossil fuel power plants for electricity generation and the decarbonisation of industry.
Many countries and cities around the world have stated they will ban the sale of passenger vehicles (primarily cars and buses) powered by fossil fuels such as petrol, liquefied petroleum gas, and diesel at some time in the future. Synonyms for the bans include phrases like "banning gas cars", "banning petrol cars", "the petrol and diesel car ban", or simply "the diesel ban". Another method of phase-out is the use of zero-emission zones in cities.
Background
Reasons for banning the further sale of fossil fuel vehicles include: reducing health risks from pollution particulates, notably diesel PM10s, and other emissions, notably nitrogen oxides; meeting national greenhouse gas, such as CO2, targets under international agreements such as the Kyoto Protocol and the Paris Agreement; or energy independence. The intent to ban vehicles powered by fossil fuels is attractive to governments as it offers a simpler compliance target, compared with a carbon tax or phase-out of fossil fuels.
The automotive industry is working to introduce electric vehicles to adapt to bans with varying success, and it is seen by some in the industry as a possible source of money in a declining market. A 2020 study from the Eindhoven University of Technology showed that the manufacturing emissions of batteries for new electric cars are much smaller (around 75 kg CO2/kWh) than was assumed in the 2017 IVL study, and that the lifespan of lithium batteries is also much longer than previously thought (at least 12 years with a mileage of 15,000 km annually): they are cleaner than internal combustion cars powered by diesel or petrol.
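That comparison is, at bottom, simple arithmetic: a one-off manufacturing penalty for the battery traded against lower per-kilometre driving emissions. The minimal sketch below makes it concrete; only the 75 kg CO2/kWh figure and the 15,000 km annual mileage come from the study cited above, while the pack size, grid carbon intensity, EV consumption and petrol-car figures are assumptions invented for the example.

```python
# Illustrative lifecycle CO2 break-even for an EV vs a petrol car.
# Only the 75 kg CO2/kWh battery figure and the 15,000 km/year mileage
# come from the text above; every other number is an assumption.

BATTERY_KWH = 60               # assumed battery pack size (kWh)
BATTERY_CO2_PER_KWH = 75.0     # kg CO2 per kWh of battery manufactured
GRID_CO2_PER_KWH = 0.25        # kg CO2 per kWh of electricity (assumed grid mix)
EV_KWH_PER_KM = 0.18           # assumed EV energy consumption
PETROL_CO2_PER_KM = 0.17       # kg CO2 per km for the petrol car (assumed)

def co2_ev(km: float) -> float:
    """One-off battery manufacturing emissions plus driving emissions."""
    return BATTERY_KWH * BATTERY_CO2_PER_KWH + km * EV_KWH_PER_KM * GRID_CO2_PER_KWH

def co2_petrol(km: float) -> float:
    """Driving emissions only (manufacturing differences ignored here)."""
    return km * PETROL_CO2_PER_KM

# Break-even mileage: manufacturing penalty divided by per-km saving.
per_km_saving = PETROL_CO2_PER_KM - EV_KWH_PER_KM * GRID_CO2_PER_KWH
break_even_km = BATTERY_KWH * BATTERY_CO2_PER_KWH / per_km_saving
assert abs(co2_ev(break_even_km) - co2_petrol(break_even_km)) < 1e-6

print(f"Break-even after ~{break_even_km:,.0f} km")            # ~36,000 km
print(f"Years at 15,000 km/yr: {break_even_km / 15_000:.1f}")  # ~2.4 years
```

Under these assumptions the EV comes out ahead after roughly 36,000 km, or about two and a half years at the cited mileage, comfortably inside the 12-year battery lifespan; a more carbon-intensive grid pushes the break-even point further out.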
There is some opposition to simply moving from fossil-fuel-powered cars to electric cars, as they would still require a large proportion of urban land. On the other hand, there are many types of (electric) vehicles that take up little space, such as (cargo) bicycles and electric motorcycles and scooters. Making cycling and walking over short distances, especially in urban areas, more attractive and feasible with measures such as removing roads and parking spaces and improving cycling infrastructure and footpaths (including pavements), provides a partial alternative to replacing all fossil-fuelled vehicles with electric vehicles. Although there are as yet very few completely carfree cities (such as Venice), several are banning all cars in parts of the city, such as city centers.
Methods
The banning of fossil-fuelled vehicles of a defined scope requires authorities to enact legislation that restricts them in a certain way. Proposed methods include:
A prohibition on further sales or registration of new vehicles powered with specific fuels from a certain date in a certain area. At the date of implementation, existing vehicles would remain legal to drive on public highways.
A prohibition on the importation of new vehicles powered with specific fuels from a certain date into a certain area. This is planned in countries such as Denmark and Israel; however, some countries, such as Israel, have no legislation on the subject.
A prohibition on any use of certain vehicles powered with specific fuels from a certain date within a certain area. Restrictions such as these are already in place in many European cities, usually in the context of their low-emission zones (LEZs).
Fuel cell (electric) vehicles (FCVs or FCEVs) also allow running on some non-fossil fuels (e.g., hydrogen, ethanol, methanol).
Cities generally use the introduction of low-emission zones (LEZs) or zero-emission zones (ZEZs), sometimes with an accompanying air quality certificate sticker such as Crit'air (France), to restrict the use of fossil-fuelled cars in some or all of its territory. These zones are growing in number, size, and strictness. Some city bans in countries such as Italy, Germany, and Switzerland are only temporarily activated during particular times of the day, during winter, or when there is a smog alert (for example, in Italy in January 2020); these do not directly contribute to the phase-out of fossil fuel vehicles, but they make owning and using such vehicles less attractive as their utility is restricted and the cost of driving them increases.
Some countries have given consumers various incentives such as subsidies or tax breaks to stimulate the purchase of electric vehicles, while fossil-fuelled vehicles are taxed increasingly heavily.
Helped by government incentives, Norway became the first country to have the majority of new vehicles sold in 2021 be electric. In January 2022, 88 per cent of new vehicles sold in the country were electric, and based upon current trends, they would most likely hit the goal of no new fossil fuel cars being sold by 2025.
Places with planned fossil-fuel vehicle restrictions
International
At the 2021 United Nations Climate Change Conference held in Glasgow multiple governments and companies signed a non-legally-binding declaration to accelerate the transition to 100% zero emission cars and vans (the Glasgow Declaration). They wanted all new cars and vans to not emit any greenhouse gas at the tailpipe by 2035 in leading markets and by 2040 globally. The United States and China (the biggest car markets) did not sign and neither did Germany (the biggest car market in the EU). Also absent from the list of signatories were major car manufacturers Volkswagen, Toyota, Renault-Nissan and Hyundai-Kia.
European Union
In 2018, Denmark proposed an EU-wide prohibition on petrol and diesel cars, but that turned out to be contrary to EU regulations. In October 2019, Denmark made a proposal for phasing out fossil fuel vehicles on the member state level by 2030 which was supported by 10 other EU member states.
In July 2021, France opposed a ban on combustion-powered cars and in particular on hybrid vehicles.
In July 2021, the European Commission proposed a 100% reduction of emissions for new sales of cars and vans as of 2035. On 8 June 2022, the European Parliament voted in favour of the proposal of the European Commission, but agreement with the European Union member states was necessary before a final law could be passed. On 22 June 2022, German Finance Minister Christian Lindner stated that his government would refuse to agree to the ban. But on 29 June 2022, after 16 hours of negotiations, all climate ministers of the 27 EU member states agreed to the Commission's proposal (part of the 'Fit for 55' package) to effectively ban the sale of new internal combustion vehicles by 2035 (through '[introducing] a 100% emissions reduction target by 2035 for new cars and vans'). Germany backed the 2035 target, asking the Commission whether hybrid vehicles or CO2-neutral fuels could also comply with the proposal; Frans Timmermans responded that the Commission kept an "open mind", but at the time 'hybrids did not deliver sufficient emissions cuts and alternative fuels were prohibitively expensive.' The law for "zero CO2 emissions for new cars and vans in 2035" was approved by the European Parliament on 14 February 2023.
Italy's industry minister called on the EU to reassess its 2035 ban on petrol and diesel cars, suggesting an earlier review for clarity. The Italian government pushed for greater flexibility in achieving decarbonization goals and a more gradual transition from combustion engines.
Countries
Countries with proposed bans or implementing 100% sales of zero-emissions vehicles include China (including Hong Kong and Macau), Japan, Singapore, the UK, South Korea, Iceland, Denmark, Sweden, Norway, Slovenia, Germany, Italy, France, Belgium, the Netherlands, Portugal, Canada, the 12 U.S. states that adhered to California's Zero-Emission Vehicle (ZEV) Program, Sri Lanka, Cabo Verde, and Costa Rica.
Some politicians in some countries have made broad announcements but have implemented no legislation, and therefore there is no phase-out and no binding law. Ireland, for example, made announcements but ultimately did not ban diesel or petrol vehicles.
The International Energy Agency predicted in 2021 that 70% of India's new car sales will be fossil powered in 2030, despite earlier government announcements that were discarded in 2018. In November 2021, the Indian government was amongst 30 national governments and six major automakers who pledged to phase out the sale of all new petrol and diesel vehicles by 2040 worldwide, and by 2035 in "leading markets".
Cities and territories
Some cities or territories have planned or taken measures to partially or entirely phase out fossil fuel vehicles earlier than their national governments. In some cases, this is achieved through local or regional government initiatives, in other cases through legal challenges brought on by citizens or civil organisations enforcing partial phase-outs based on the right to clean air.
Some cities listed have signed the Fossil Fuel Free Streets Declaration, committing to banning emitting vehicles by 2030, but this does not necessarily have the force of law in those jurisdictions. The bans typically apply to a select number of streets in the urban centre of the city where most people live, not to its entire territory. Some cities take a gradual approach to prohibit the most polluting categories of vehicles first, then the next-most polluting, all the way up to a complete ban on all fossil-fuel vehicles; some cities have not yet set a deadline for a complete ban, and/or are waiting for the national government to set such a date.
In California, emissions requirements for automakers to be permitted to sell any vehicles in the state were expected to force 15% of new vehicles offered for sale between 2018 and 2025 to be zero emission. Much cleaner emissions and increased efficiency in petrol engines mean this will be met with just 8% of ZEV vehicles. The "Ditching Dirt Diesel" law SB 44 sponsored by Nancy Skinner and adopted on 20 September 2019 requires the California Air Resources Board (CARB) to "create a comprehensive strategy for deploying medium- and heavy-duty vehicles" to make California meet federal ambient air quality standards, and 'establish goals and spur technology advancements for reducing GHG emissions from the medium- and heavy-duty vehicle sectors by 2030 and 2050'. It stops short of directly requiring a phase-out of all diesel vehicles by 2050 (as the original bill did), but it would be the most obvious means of achieving the reduction goals. In August 2022, California Governor Gavin Newsom signed off on a new EV mandate. The plan's targets are 35% ZEV market share by 2026, 68% by 2030, and 100% by 2035. This plan is accompanied by supporting funding for infrastructure and ZEV rebates totaling $10 billion. Newsom has stated his commitment to keep California at the forefront of zero-emission transportation.
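As a rough illustration of what the ramp between those milestones implies for intermediate model years, the sketch below linearly interpolates between the three published targets. The straight-line interpolation is an assumption made for illustration; the actual regulation specifies its own year-by-year percentages.

```python
# Straight-line interpolation between California's published ZEV milestones
# (35% by 2026, 68% by 2030, 100% by 2035, as cited above). The interpolation
# itself is an assumption; the regulation sets its own annual percentages.
MILESTONES = [(2026, 0.35), (2030, 0.68), (2035, 1.00)]

def required_share(year: int) -> float:
    """Approximate required ZEV share of new sales for a given model year."""
    if year <= MILESTONES[0][0]:
        return MILESTONES[0][1]
    for (y0, s0), (y1, s1) in zip(MILESTONES, MILESTONES[1:]):
        if year <= y1:
            return s0 + (s1 - s0) * (year - y0) / (y1 - y0)
    return 1.0  # 2035 onward: all new sales must be zero-emission

for y in (2027, 2030, 2032, 2035):
    print(y, f"{required_share(y):.0%}")   # 2027 43%, 2030 68%, 2032 81%, 2035 100%
```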
In the European Union, Council Directive 96/62/EC on ambient air quality assessment and management and Directive 2008/50/EC on ambient air quality form the legal basis for EU citizens' right to clean air. On 25 July 2008, in the case Dieter Janecek v Freistaat Bayern, the European Court of Justice ruled that under Directive 96/62/EC citizens have the right to require national authorities to implement a short-term action plan that aims to maintain or achieve compliance with air quality limit values. The ruling of the German Federal Administrative Court in Leipzig on 5 September 2013 significantly strengthened the right of environmental associations and consumer protection organisations to sue local authorities to enforce compliance with air quality limits throughout an entire city. The Administrative Court of Wiesbaden declared on 30 June 2015 that financial or economic aspects were not a valid excuse to refrain from taking measures to ensure that the limit values were observed, the Administrative Court of Düsseldorf ruled on 13 September 2016 that driving bans on certain diesel vehicles were legally possible to comply with the limit values as quickly as possible, and on 26 July 2017, the Administrative Court of Stuttgart ordered the state of Baden-Württemberg to consider a year-round ban on diesel-powered vehicles. By mid-February 2018, citizens in the EU member states the Czech Republic, France, Germany, Hungary, Italy, Romania, Slovakia, Spain, and the United Kingdom were suing their governments for violating the limit of 40 micrograms per cubic meter of breathable air as stipulated in the Ambient Air Quality Directive.
A landmark ruling by the German Federal Administrative Court in Leipzig on 27 February 2018 declared that the cities of Stuttgart and Düsseldorf were allowed to legally prohibit older, more polluting diesel vehicles from driving in zones worst affected by pollution, rejecting appeals made by German states against the bans imposed by the two cities' local courts. The case was strongly influenced by the ongoing Volkswagen emissions scandal (also known as Dieselgate), which in 2015 revealed that many Volkswagen diesel engines were deceptively tested and marketed as much cleaner than they were. The decision was predicted to set a precedent for other places in the country and in Europe. Indeed, the ruling triggered a wave of dozens of local diesel restrictions, brought about by Environmental Action Germany (DUH) suing city authorities and winning legal challenges across Germany. While some groups and parties such as the AfD again tried to overturn them, others such as the Greens advocated for a national phaseout of diesel cars by 2030. On 13 December 2018, the European Court of Justice overturned a 2016 European Commission relaxation of car emission limits to 168 mg/km, which the Court declared illegal. This allowed the cities of Brussels, Madrid, and Paris, who had filed the complaint, to proceed with their plans to also reject Euro 6 diesel vehicles from their urban centres, based on the original 80 mg/km limit set by EU law.
Manufacturer fossil-fuel phase-out plans
In 2017, Volvo announced plans to phase out internal combustion-only vehicle production by 2019, after which all new cars manufactured by Volvo will either be fully electric or electric hybrids. In 2020, the Volvo Group with other truck makers including DAF Trucks, Daimler AG, Ford, Iveco, MAN SE, and Scania AB pledged to end diesel truck sales by 2040.
In 2018, Volkswagen Group's strategy chief said "the year 2026 will be the last product start on a combustion engine platform" for its core brand, Volkswagen.
In 2021, General Motors announced plans to go fully electric by 2035. In the same year, the CEO of Jaguar Land Rover, Thierry Bolloré also claimed it would "achieve zero tailpipe emissions by 2036" and that its Jaguar brand would be electric-only by 2025. By March, Volvo Cars announced that by 2030 it "intends to only sell fully electric cars and phase out any car in its global portfolio with an internal combustion engine, including hybrids". In April 2021, Honda announced that it will stop selling gas-powered vehicles by 2040. In July 2021, Mercedes-Benz announced that its new vehicle platforms will be EV-only by 2025. In Oct 2021, Rolls-Royce announced that it will be fully electric by 2030. In November 2021, at 2021 United Nations Climate Change Conference, car manufacturers including BYD Auto, Ford Motor Company, General Motors, Jaguar, Land Rover, Mercedes-Benz and Volvo have committed to "work towards all sales of new cars and vans being zero emission globally by 2040, and by no later than 2035 in leading markets".
In 2022, Maserati announced its plans to offer full-electric variants of all its models by 2025 and its intention to halt production of combustion engine vehicles by 2030.
In 2023, Nissan announced the commitment to end combustion engine vehicle sales in Europe by 2030.
Electric vehicle market shares by country
The sale of 5% electric vehicles is commonly regarded as a "tipping point" at which sales are likely to continually increase on a standard "S curve" pattern. At the end of 2023, 31 countries (including most EU countries, China and the US) had reached well over 5% of the market as electric, 15 countries were over 20%, and two were over 50%. However, Japan, India, Brazil, Mexico and Indonesia, which are among the 15 largest car markets, notably had not reached 5%.
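The "S curve" language can be made concrete with a logistic model, the usual functional form for adoption curves of this kind. The sketch below is illustrative only: the growth rate and midpoint year are invented parameters, not values fitted to any market, and real saturation shares may sit below 100%.

```python
import math

def logistic_share(t: float, t0: float, k: float, saturation: float = 1.0) -> float:
    """EV share of new sales under a logistic (S-curve) adoption model.

    t0 is the year at which half the saturation share is reached and k is
    the growth rate; both are invented here, not fitted to real data.
    """
    return saturation / (1.0 + math.exp(-k * (t - t0)))

K, T0 = 0.5, 2030.0   # assumed: ~0.5/yr growth rate, 50% share in 2030

for year in range(2022, 2041, 3):
    print(f"{year}: {logistic_share(year, T0, K):6.1%}")

# Solving 0.05 = 1 / (1 + exp(-k (t - t0))) for t gives the 5% year:
t_5pct = T0 + math.log(0.05 / 0.95) / K
print(f"5% 'tipping point' year under these assumptions: {t_5pct:.0f}")  # ~2024
```

On a logistic curve, growth is slow below roughly the 5% level and steepest around the midpoint, which is why crossing 5% is treated as a tipping point.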
Railways
Germany: While railway electrification is often pursued for reasons unrelated to the emissions caused by fossil fuels, there has been an increased push in the 21st century in countries such as Germany to replace diesel locomotives with alternatives such as battery electric multiple units, hydrogen fuel trains like the Alstom Coradia iLint or overhead wire electrification.
Switzerland: pursued electrification because importing coal for steam locomotives had proven difficult during the World Wars but Switzerland has plenty of domestic hydropower resources to power electric trains.
Israel: Israel Railways, which had no electrified mainline rail services prior to 2018, when the Tel Aviv–Jerusalem railway became the first line to see electric train operation, plans to electrify most or all of its network and to phase out diesel locomotives and diesel multiple units. The project was further accelerated in 2020, as the temporary shutdown of rail traffic due to the COVID-19 pandemic in Israel allowed faster construction, and ERTMS Level 2 was being rolled out. However, in 2019 Israel Railways ordered diesel-powered rolling stock to replace the ageing IC3 trains, with media reports citing delays in the electrification program as the main reason.
United States: In the San Francisco Bay Area, the Caltrain Electrification program approved in 2016 is nearing completion. Caltrain is the commuter rail line generally connecting San Francisco to San Jose through San Mateo County. Although Caltrain previously ran no electric locomotives, its infrastructure has been successfully upgraded for electric operation. Funding was awarded in 2018, and train assembly and testing were completed in 2022. In a multi-stage phase-out plan, the new electric train cars will supplement and eventually replace diesel-powered locomotives by 2024.
Netherlands: Most railway lines in the Netherlands were equipped with overhead wires just before or just after World War II, allowing electric trains to start running. Many regional railway lines did not receive such overhead wires, so diesel trains still run there today. As of April 2024, three regional railway lines are being electrified; a further 400 kilometres of rail is still transporting passengers with diesel locomotives.
Shipping
Emissions will be banned from Norway's World Heritage Sites Geirangerfjord and Nærøyfjord from 2026.
Besides boats driven by batteries or indeed trolley boats, there have been several attempts to adapt nuclear marine propulsion, which has been a part of the military naval forces of many countries for decades in the form of nuclear submarines, nuclear aircraft carriers and nuclear icebreakers, to civilian uses. While prototypes like the German Otto Hahn, the American NS Savannah and the Japanese RV Mirai were built, the only non-icebreaker nuclear-powered ship to remain in civilian service is the Russian Sevmorput, built in the late 1980s by the Soviet Union. The Soviet Union and its successor state Russia also maintain a fleet of nuclear icebreakers to keep the Northern Sea Route open.
Sail ships and oars rely on renewable resources rather than fossil fuels (wind and human muscle-power respectively) but have disadvantages in terms of speed and labour-costs and have thus been phased out of virtually all commercial uses. There are some attempts to use wind-powered ships for commercial purposes, but as of 2022 they have remained marginal.
Aviation
Norway, and possibly some other Scandinavian countries, are aiming for all domestic flights to be emission-free by 2040. A major obstacle to decarbonising air travel is the low energy density of current and foreseeable battery technology. Thus alternatives to electric planes, such as so-called sustainable aviation fuels or e-fuels (fuels derived from electrochemical conversion of substances like water and carbon dioxide into hydrocarbons), are also proposed as a future replacement of current jet fuels. In 2021 the first production-scale plant for e-fuels to be used in aviation opened in northern Germany. Production capacity is planned to reach 8 barrels a day by 2022. Lufthansa will be among the chief users of the synthetic fuel produced in the new facility. Germany's plan to transform aviation to net zero carbon emissions relies heavily on e-fuels.
Besides the need to rapidly scale up currently minuscule production capacity, the main obstacles to wider deployment of sustainable aviation fuels and e-Fuels are their much higher cost in the absence of meaningful carbon pricing in aviation. Furthermore, with current CORSIA regulations for sustainable aviation fuels allowing up to 90% of emissions compared to conventional fuels, even those options are currently far from carbon neutral.
There were attempts at building nuclear-powered aircraft during the Cold War, which unlike nuclear marine propulsion never got very far and were always only proposed for military uses. As of 2022 no country or private enterprise is seriously pursuing nuclear propulsion for passenger aircraft.
However, short-haul, low-demand routes can easily be flown using electric aircraft, and manufacturers such as Heart Aerospace are planning to introduce them with United Airlines in 2026.
Unintended side-effects
Second-hand vehicle dumping
There is already an export market from the European Union involving millions of used cars sent to Eastern Europe and the Caucasus, Central Asia and Africa. According to UNECE, the global on-road vehicle fleet is set to double by 2050 (from 1.2 billion to 2.5 billion, see introduction), with most future car purchases taking place in developing countries. Some experts predict that the number of vehicles in developing countries will increase by 4 or 5-fold by 2050 (compared to current car use levels), and that the majority of these will be second-hand. There are currently no global or even regional agreements that rationalise and govern the flow of second-hand vehicles. Others say that new electric 2-wheelers may sell widely in developing countries as they are affordable.
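The doubling projection above implies only a modest compound growth rate, which is easy to verify; the 2020 baseline year in the sketch is an assumption, since the paragraph does not state one.

```python
# Compound annual growth rate implied by UNECE's fleet projection
# (1.2 billion to 2.5 billion vehicles by 2050). The 2020 baseline
# year is an assumption, since the text does not state one.
start_fleet, end_fleet = 1.2e9, 2.5e9
years = 2050 - 2020
cagr = (end_fleet / start_fleet) ** (1 / years) - 1
print(f"Implied fleet growth: {cagr:.2%} per year")  # roughly 2.5%/yr
```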
Internal combustion engine cars that may no longer comply with local environmental standards are exported to developing countries, where legislation on vehicle emissions is often less strict. In addition, in some developing countries, such as Uganda, the average age of an imported car is already 16.5 years, and it will likely be driven for another 20 years. In such cases, the fuel efficiency of these vehicles worsens as they age. In addition, national vehicle inspection requirements vary widely depending on the country.
Potential solutions
Export prohibitions: Some propose that the European Union could implement a rule that does not allow the most polluting cars to leave the EU. The European Union itself is of the opinion that it "should stop exporting its waste outside of the EU" and it will therefore "revisit the rules on waste shipments and illegal exports".
Import prohibitions: This includes used vehicle bans, used vehicle import age limits, taxation and inspection tests as a precondition to vehicle registration.
Convert fossil fuel vehicles to electric: this is expensive, so it tends to be done only for classic cars.
Mandatory recycling: The European Commission is considering plans to introduce rules on mandatory recycled content in specific product groups for packaging, vehicles, construction materials and batteries, for instance. The EU announced a new Circular Economy Action Plan in March 2020, and it mentioned that the Commission will also propose to revise the rules on end-of-life vehicles with a view to promoting more circular business models.
Scrappage programs: Governments can offer a premium to owners to have their fossil fuel vehicles voluntarily scrapped and to buy a cleaner vehicle with that money (if they so choose). For example, the city of Ghent offers a scrapping premium of €1,000 for diesel vehicles and €750 for petrol vehicles; as of December 2019, the city had allocated €1.2 million for this purpose to the scrapping fund (a quick cost check of these figures follows below).
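The Ghent figures in the last item translate directly into an upper bound on the number of vehicles the fund can cover; the even diesel/petrol split in the sketch below is an assumption for illustration.

```python
# Upper bound on vehicles scrapped under Ghent's EUR 1.2 million fund,
# using the premiums cited above. The even diesel/petrol split is an
# assumption for illustration.
FUND_EUR = 1_200_000
DIESEL_PREMIUM, PETROL_PREMIUM = 1_000, 750

all_diesel = FUND_EUR // DIESEL_PREMIUM                     # 1,200 vehicles
all_petrol = FUND_EUR // PETROL_PREMIUM                     # 1,600 vehicles
even_split = FUND_EUR / ((DIESEL_PREMIUM + PETROL_PREMIUM) / 2)
print(all_diesel, all_petrol, round(even_split))            # 1200 1600 1371
```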
Mobility transition
In Germany, activists have coined the term Verkehrswende (mobility transition, analogous to "Energiewende", energy transition) for a project of not only changing the motive power of cars (from fossil fuels to renewable power sources) but the entire mobility system to one of walkability, complete streets, public transit, electrified railways and bicycle infrastructure.
There is similar research being done in the United States around the term mobility justice. Geographer Jason Henderson of San Francisco State University argues that supporting electric vehicles while neglecting compact city design and public transportation will lead to car-oriented city design. This comes with numerous sustainability issues that disproportionately affect disadvantaged communities, such as environmental gentrification, less low-income housing, and unequal access to the benefits of electric vehicle adoption. In addition, the production of electric vehicles can come at the price of laborers in other countries, and the environmental costs there are seldom taken into account when calculating the environmental benefits of electric vehicles. According to mobility justice critiques, relying primarily on electric vehicles for the phase-out of fossil fuels comes at an opportunity cost of investing in other types of sustainable transportation such as bike lanes, safe walking spaces, electric trains, and electric buses.
See also
Fuel substitution: central lever to be deployed in decarbonising transport
Alternative fuel vehicle: many of which use an internal combustion engine
Directive 2008/50/EC, a 2010 EU directive limiting NO2 emissions, which is the subject of many legal challenges across Europe
Electric vehicle conversion: removing the engine of an internal combustion-powered vehicle and replacing it with an electric motor, creating reduced manufacturing emissions (as most car parts are reused) and costs compared to manufacturing/buying a new one
Electrofuel: a type of synthetic fuel made from electricity (e.g., made using wind, water or solar power), many of which can be burnt in internal combustion engines
Environmental impact of aviation
Flexible-fuel vehicle and dual-fuel vehicle: have an internal combustion engine and can run on multiple fuels, sometimes even combining renewable/bio fuels and fossil fuels
Fossil fuel lobby
Fuel cell vehicle: vehicles that generate electricity using oxygen from the air and compressed hydrogen
Hydrogen internal combustion engine vehicle: burns hydrogen in an internal combustion engine
Leapfrogging
Smart mobility
Short-haul flight ban
Coal phase-out
Fossil fuel phase-out
Phase-out of gas boilers
Plastic bans
Notes
References
Energy-related lists
Health-related lists
Low-carbon economy
Technological change
Technological phase-outs
2020s in transport
Electric vehicles
Fossil fuels
Fossil fuel phase-out
Agricultural biotechnology

Agricultural biotechnology, also known as agritech, is an area of agricultural science involving the use of scientific tools and techniques, including genetic engineering, molecular markers, molecular diagnostics, vaccines, and tissue culture, to modify living organisms: plants, animals, and microorganisms. Crop biotechnology is one aspect of agricultural biotechnology that has been greatly developed in recent times. Desired traits are transferred from one species of crop to an entirely different species. These transgenic crops possess desirable characteristics in terms of flavor, color of flowers, growth rate, size of harvested products, and resistance to diseases and pests.
History
Farmers have manipulated plants and animals through selective breeding for tens of thousands of years in order to create desired traits. In the 20th century, a surge in technology resulted in an increase in agricultural biotechnology through the selection of traits like increased yield, pest resistance, drought resistance, and herbicide resistance. The first food product produced through biotechnology was sold in 1990, and by 2003, 7 million farmers were utilizing biotech crops. More than 85% of these farmers were located in developing countries.
Crop modification techniques
Traditional breeding
Traditional crossbreeding has been used for centuries to improve crop quality and quantity. Crossbreeding mates two sexually compatible species to create a new and special variety with the desired traits of the parents. For example, the honeycrisp apple exhibits a specific texture and flavor due to the crossbreeding of its parents. In traditional practices, pollen from one plant is placed on the female part of another, which leads to a hybrid that contains genetic information from both parent plants. Plant breeders select the plants with the traits they're looking to pass on and continue to breed those plants. Note that crossbreeding can only be utilized within the same or closely related species.
Mutagenesis
Mutations can occur randomly in the DNA of any organism. In order to create variety within crops, scientists can deliberately induce random mutations in plants. Mutagenesis exposes plants to mutagens, either chemicals such as ethyl methanesulfonate or radiation, in the hope of stumbling upon the desired trait. Atomic gardens are used to mutate crops: a radioactive core is located in the center of a circular garden and raised out of the ground to irradiate the surrounding crops, generating mutations within a certain radius. Mutagenesis through radiation was the process used to produce ruby red grapefruits.
Polyploidy
Polyploidy can be induced to modify the number of chromosomes in a crop in order to influence its fertility or size. Usually, organisms have two sets of chromosomes, a condition known as diploidy. However, either naturally or through the use of chemicals, that number of sets can change, resulting in fertility changes or size modification within the crop. Seedless watermelons are created in this manner: a watermelon with four sets of chromosomes is crossed with one carrying two sets, producing a sterile (seedless) watermelon with three sets of chromosomes.
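The watermelon cross described above is simple chromosome-set bookkeeping: each parent contributes half of its sets, so four sets crossed with two sets yields three, and an odd number of sets cannot pair evenly at meiosis, which is what makes the offspring sterile. The toy sketch below encodes just that rule and deliberately ignores the many real biological complications.

```python
def offspring_sets(parent_a_sets: int, parent_b_sets: int) -> int:
    """Each gamete carries half of the parent's chromosome sets."""
    if parent_a_sets % 2 or parent_b_sets % 2:
        raise ValueError("a parent with an odd set count cannot form balanced gametes")
    return parent_a_sets // 2 + parent_b_sets // 2

def is_fertile(sets: int) -> bool:
    """An odd number of sets cannot pair evenly at meiosis, so: sterile."""
    return sets % 2 == 0

child = offspring_sets(4, 2)   # tetraploid x diploid watermelon cross
print(child, "->", "fertile" if is_fertile(child) else "sterile (seedless)")
# prints: 3 -> sterile (seedless)
```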
Protoplast fusion
Protoplast fusion is the joining of cells or cell components to transfer traits between species. For example, the trait of male sterility is transferred from radishes to red cabbages by protoplast fusion. This male sterility helps plant breeders make hybrid crops.
RNA interference
RNA interference (RNAi) is the process by which a cell's RNA-to-protein mechanism is turned down or off in order to suppress genes. This method of genetic modification works by interfering with messenger RNA to stop the synthesis of proteins, effectively silencing a gene.
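At the core of RNAi is complementary base pairing: the small interfering RNA (siRNA) guide strand matches the reverse complement of a stretch of the target mRNA, and that match directs the RNA-induced silencing complex (RISC) to cleave the message. The sketch below shows only this complementarity rule; the 21-nucleotide target sequence is invented for illustration.

```python
# The guide strand of an siRNA is the reverse complement of its target
# site on the mRNA; this pairing is what directs RISC to cleave the
# message. The 21-nt target sequence below is invented for illustration.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_site: str) -> str:
    """Reverse complement of an mRNA target site, written 5' to 3'."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(target_site))

target = "AUGGCUACGUUCAGGAUCGAA"   # hypothetical target site on the mRNA
print(sirna_guide(target))         # prints: UUCGAUCCUGAACGUAGCCAU
```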
Transgenics
Transgenics involves the insertion of one piece of DNA into another organism's DNA in order to introduce new genes into the original organism. This addition of genes into an organism's genetic material creates a new variety with desired traits. The DNA must be prepared and packaged in a test tube and then inserted into the new organism. New genetic information can be inserted with gene guns/biolistics. An example of a gene gun transgenic is the rainbow papaya, which is modified with a gene that gives it resistance to the papaya ringspot virus.
Genome editing
Genome editing is the use of an enzyme system to modify the DNA directly within the cell. Genome editing is used to develop herbicide resistant canola to help farmers control weeds.
Improved nutritional content
Agricultural biotechnology has been used to improve the nutritional content of a variety of crops in an effort to meet the needs of an increasing population. Genetic engineering can produce crops with a higher concentration of vitamins. For example, golden rice contains three genes that allow plants to produce compounds that are converted to vitamin A in the human body. This nutritionally improved rice is designed to combat the world's leading cause of blindness—vitamin A deficiency. Similarly, the Banana 21 project has worked to improve the nutrition in bananas to combat micronutrient deficiencies in Uganda. By genetically modifying bananas to contain vitamin A and iron, Banana 21 has helped foster a solution to micronutrient deficiencies through the vessel of a staple food and major starch source in Africa. Additionally, crops can be engineered to reduce toxicity or to produce varieties with removed allergens.
Genes and traits of interest for crops
Agronomic traits
Insect resistance
One highly sought-after trait is insect resistance. This trait increases a crop's resistance to pests and allows for a higher yield. An example of this trait is crops genetically engineered to make insecticidal proteins originally discovered in the bacterium Bacillus thuringiensis, which produces insect-repelling proteins that are not harmful to humans. The genes responsible for this insect resistance have been isolated and introduced into many crops. Bt corn and cotton are now commonplace, and cowpeas, sunflower, soybeans, tomatoes, tobacco, walnut, sugar cane, and rice are all being studied in relation to Bt.
Herbicide tolerance
Weeds have proven to be an issue for farmers for thousands of years; they compete for soil nutrients, water, and sunlight and prove deadly to crops. Biotechnology has offered a solution in the form of herbicide tolerance. Chemical herbicides are sprayed directly on plants in order to kill weeds and therefore competition, while herbicide-resistant crops have the opportunity to flourish.
Disease resistance
Often, crops are afflicted by disease spread through insects (like aphids). Spreading disease among crop plants is incredibly difficult to control and was previously managed only by completely removing the affected crop. The field of agricultural biotechnology offers a solution through genetic engineering of virus resistance. Crops being developed for GE disease resistance include cassava, maize, and sweet potato.
Temperature tolerance
Agricultural biotechnology can also provide a solution for plants in extreme temperature conditions. In order to maximize yield and prevent crop death, genes can be engineered that help to regulate cold and heat tolerance. For example, tobacco plants have been genetically modified to be more tolerant to hot and cold conditions, with genes originally found in Carica papaya. Other traits include water use efficiency, nitrogen use efficiency and salt tolerance.
Quality traits
Quality traits include increased nutritional or dietary value, improved food processing and storage, or the elimination of toxins and allergens in crop plants.
Common GMO crops
Currently, only a small number of genetically modified crops are available for purchase and consumption in the United States. The USDA has approved soybeans, corn, canola, sugar beets, papaya, squash, alfalfa, cotton, apples, and potatoes. GMO apples (Arctic apples) are non-browning, which eliminates the need for anti-browning treatments, reduces food waste, and preserves flavor. The production of Bt cotton has skyrocketed in India, with 10 million hectares planted for the first time in 2011, resulting in a 50% reduction in insecticide application. In 2014, Indian and Chinese farmers planted more than 15 million hectares of Bt cotton.
Safety testing and government regulations
Agricultural biotechnology regulation in the US falls under three main government agencies: the Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA). The USDA must approve the release of any new GMO, the EPA regulates insecticides, and the FDA evaluates the safety of a particular crop sent to market. On average, it takes nearly 13 years and $130 million of research and development for a genetically modified organism to come to market. The regulation process takes up to 8 years in the United States. The safety of GMOs has become a topic of debate worldwide, and scientific studies continue to test the safety of consuming GMOs in addition to the FDA's work. One such study concluded that Bt rice did not adversely affect digestion and did not induce horizontal gene transfer.
References
Biotechnology
Biotechnology
Life sciences industry
Marine biology
Marine biology is the scientific study of the biology of marine life, organisms that inhabit the sea. Given that in biology many phyla, families and genera have some species that live in the sea and others that live on land, marine biology classifies species based on the environment rather than on taxonomy.
A large proportion of all life on Earth lives in the ocean. The exact size of this "large proportion" is unknown, since many ocean species are still to be discovered. The ocean is a complex three-dimensional world, covering approximately 71% of the Earth's surface. The habitats studied in marine biology include everything from the tiny layers of surface water in which organisms and abiotic items may be trapped in surface tension between the ocean and atmosphere, to the depths of the oceanic trenches, sometimes 10,000 meters or more beneath the surface of the ocean. Specific habitats include estuaries, coral reefs, kelp forests, seagrass meadows, the surrounds of seamounts and thermal vents, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales). Marine ecology is the study of how marine organisms interact with each other and the environment.
Marine life is a vast resource, providing food, medicine, and raw materials, in addition to helping to support recreation and tourism all over the world. At a fundamental level, marine life helps determine the very nature of our planet. Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth's climate. Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land.
Many species are economically important to humans, including both finfish and shellfish. It is also becoming understood that the well-being of marine organisms and that of other organisms are linked in fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle), of air (such as Earth's respiration), and of the movement of energy through ecosystems, including the ocean. Large areas beneath the ocean surface still remain effectively unexplored.
Biological oceanography
Marine biology can be contrasted with biological oceanography. Marine life is a field of study both in marine biology and in biological oceanography. Biological oceanography is the study of how organisms affect and are affected by the physics, chemistry, and geology of the oceanographic system. Biological oceanography mostly focuses on the microorganisms within the ocean, examining how they are affected by their environment and how that in turn affects larger marine creatures and their ecosystem. Biological oceanography is similar to marine biology, but it studies ocean life from a different perspective: biological oceanography takes a bottom-up approach in terms of the food web, while marine biology studies the ocean from a top-down perspective. Biological oceanography mainly focuses on the ecosystem of the ocean with an emphasis on plankton: their diversity (morphology, nutritional sources, motility, and metabolism); their productivity and how that plays a role in the global carbon cycle; and their distribution (predation and life cycle). Biological oceanography also investigates the role of microbes in food webs, and how humans impact the ecosystems in the oceans.
Marine habitats
Marine habitats can be divided into coastal and open ocean habitats. Coastal habitats are found in the area that extends from the shoreline to the edge of the continental shelf. Most marine life is found in coastal habitats, even though the shelf area occupies only seven percent of the total ocean area. Open ocean habitats are found in the deep ocean beyond the edge of the continental shelf. Alternatively, marine habitats can be divided into pelagic and demersal habitats. Pelagic habitats are found near the surface or in the open water column, away from the bottom of the ocean and affected by ocean currents, while demersal habitats are near or on the bottom. Marine habitats can be modified by their inhabitants. Some marine organisms, like corals, kelp and sea grasses, are ecosystem engineers which reshape the marine environment to the point where they create further habitat for other organisms.
Intertidal and near shore
Intertidal zones, the areas that are close to the shore, are constantly being exposed and covered by the ocean's tides. A huge array of life can be found within this zone. Shore habitats span from the upper intertidal zones to the area where land vegetation takes prominence; they can be underwater anywhere from daily to very infrequently. Many species here are scavengers, living off of sea life that is washed up on the shore. Many land animals also make extensive use of the shore and intertidal habitats. A subgroup of organisms in this habitat bores and grinds exposed rock through the process of bioerosion.
Estuaries
Estuaries are also near shore and influenced by the tides. An estuary is a partially enclosed coastal body of water with one or more rivers or streams flowing into it and with a free connection to the open sea. Estuaries form a transition zone between freshwater river environments and saltwater maritime environments. They are subject both to marine influences—such as tides, waves, and the influx of saline water—and to riverine influences—such as flows of fresh water and sediment. The shifting flows of both sea water and fresh water provide high levels of nutrients both in the water column and in sediment, making estuaries among the most productive natural habitats in the world.
Reefs
Reefs comprise some of the densest and most diverse habitats in the world. The best-known types of reefs are tropical coral reefs which exist in most tropical waters; however, reefs can also exist in cold water. Reefs are built up by corals and other calcium-depositing animals, usually on top of a rocky outcrop on the ocean floor. Reefs can also grow on other surfaces, which has made it possible to create artificial reefs. Coral reefs also support a huge community of life, including the corals themselves, their symbiotic zooxanthellae, tropical fish and many other organisms.
Much attention in marine biology is focused on coral reefs and the El Niño weather phenomenon. In 1998, coral reefs experienced the most severe mass bleaching events on record, when vast expanses of reefs across the world died because sea surface temperatures rose well above normal. Some reefs are recovering, but scientists say that between 50% and 70% of the world's coral reefs are now endangered and predict that global warming could exacerbate this trend.
Open ocean
The open ocean is relatively unproductive because of a lack of nutrients, yet because it is so vast, in total it produces the most primary productivity. The open ocean is separated into different zones, and the different zones each have different ecologies. Zones which vary according to their depth include the epipelagic, mesopelagic, bathypelagic, abyssopelagic, and hadopelagic zones. Zones which vary by the amount of light they receive include the photic and aphotic zones. Much of the aphotic zone's energy is supplied by the sunlit waters above in the form of sinking detritus.
Deep sea and trenches
The deepest recorded oceanic trench measured to date is the Mariana Trench, near the Philippines, in the Pacific Ocean. At such depths, water pressure is extreme and there is no sunlight, but some life still exists. A white flatfish, a shrimp and a jellyfish were seen by the American crew of the bathyscaphe Trieste when it dove to the bottom in 1960. In general, the deep sea is considered to start at the aphotic zone, the point where sunlight can no longer penetrate the water. Many life forms that live at these depths have the ability to create their own light, known as bioluminescence. Marine life also flourishes around seamounts that rise from the depths, where fish and other sea life congregate to spawn and feed. Hydrothermal vents along the mid-ocean ridge spreading centers act as oases, as do their opposites, cold seeps. Such places support unique biomes, and many new microbes and other lifeforms have been discovered at these locations. There is still much more to learn about the deeper parts of the ocean.
Marine life
In biology, many phyla, families and genera have some species that live in the sea and others that live on land. Marine biology classifies species based on their environment rather than their taxonomy. For this reason, marine biology encompasses not only organisms that live only in a marine environment, but also other organisms whose lives revolve around the sea.
Microscopic life
As inhabitants of the largest environment on Earth, microbial marine systems drive changes in every global system. Microbes are responsible for virtually all photosynthesis that occurs in the ocean, as well as the cycling of carbon, nitrogen, phosphorus and other nutrients and trace elements.
Microscopic life undersea is incredibly diverse and still poorly understood. For example, the role of viruses in marine ecosystems had barely begun to be explored even at the beginning of the 21st century.
The role of phytoplankton is better understood due to their critical position as the most numerous primary producers on Earth. Phytoplankton are categorized into cyanobacteria (also called blue-green algae/bacteria), various types of algae (red, green, brown, and yellow-green), diatoms, dinoflagellates, euglenoids, coccolithophorids, cryptomonads, chrysophytes, chlorophytes, prasinophytes, and silicoflagellates.
Zooplankton tend to be somewhat larger, and not all are microscopic. Many Protozoa are zooplankton, including dinoflagellates, zooflagellates, foraminiferans, and radiolarians. Some of these (such as dinoflagellates) are also phytoplankton; the distinction between plants and animals often breaks down in very small organisms. Other zooplankton include cnidarians, ctenophores, chaetognaths, molluscs, arthropods, urochordates, and annelids such as polychaetes. Many larger animals begin their life as zooplankton before they become large enough to take their familiar forms. Two examples are fish larvae and sea stars (also called starfish).
Plants and algae
Microscopic algae and plants provide important habitats for life, sometimes acting as hiding places for larval forms of larger fish and foraging places for invertebrates.
Algal life is widespread and very diverse under the ocean. Microscopic photosynthetic algae contribute a larger proportion of the world's photosynthetic output than all the terrestrial forests combined. Much of the niche occupied by plants on land is occupied in the ocean by macroscopic algae, such as Sargassum and kelp, which are commonly known as seaweeds and which create kelp forests.
Plants that survive in the sea are often found in shallow waters, such as the seagrasses (examples of which are eelgrass, Zostera, and turtle grass, Thalassia). These plants have adapted to the high salinity of the ocean environment. The intertidal zone is also a good place to find plant life in the sea, where mangroves or cordgrass or beach grass might grow.
Invertebrates
As on land, invertebrates, or animals that lack a backbone, make up a huge portion of all life in the sea. Invertebrate sea life includes Cnidaria such as jellyfish and sea anemones; Ctenophora; sea worms including the phyla Platyhelminthes, Nemertea, Annelida, Sipuncula, Echiura, Chaetognatha, and Phoronida; Mollusca including shellfish, squid, octopus; Arthropoda including Chelicerata and Crustacea; Porifera; Bryozoa; Echinodermata including starfish; and Urochordata including sea squirts or tunicates.
Fungi
Over 10,000 species of fungi are known from marine environments. These are parasitic on marine algae or animals, or are saprobes on algae, corals, protozoan cysts, sea grasses, wood and other substrata, and can also be found in sea foam. Spores of many species have special appendages which facilitate attachment to the substratum. A very diverse range of unusual secondary metabolites is produced by marine fungi.
Vertebrates
Fish
A reported 33,400 species of fish, including bony and cartilaginous fish, had been described by 2016, more than all other vertebrates combined. About 60% of fish species live in saltwater.
Reptiles
Reptiles which inhabit or frequent the sea include sea turtles, sea snakes, terrapins, the marine iguana, and the saltwater crocodile. Most extant marine reptiles, except for some sea snakes, are oviparous and need to return to land to lay their eggs. Thus most species, excluding sea turtles, spend most of their lives on or near land rather than in the ocean. Despite their marine adaptations, most sea snakes prefer shallow waters nearby land, around islands, especially waters that are somewhat sheltered, as well as near estuaries. Some extinct marine reptiles, such as ichthyosaurs, evolved to be viviparous and had no requirement to return to land.
Birds
Birds adapted to living in the marine environment are often called seabirds. Examples include albatross, penguins, gannets, and auks. Although they spend most of their lives in the ocean, species such as gulls can often be found thousands of miles inland.
Mammals
There are five main types of marine mammals: cetaceans (toothed whales and baleen whales); sirenians such as manatees; pinnipeds including seals and the walrus; sea otters; and the polar bear. All are air-breathing, meaning that while some such as the sperm whale can dive for prolonged periods, all must return to the surface to breathe.
Subfields
The marine ecosystem is large, and thus there are many sub-fields of marine biology. Most involve studying specializations of particular taxonomic groups, such as phycology, invertebrate zoology and ichthyology. Other subfields study the physical effects of continual immersion in sea water and the ocean in general, adaptation to a salty environment, and the effects of changing various oceanic properties on marine life. A subfield of marine biology studies the relationships between oceans and ocean life, and global warming and environmental issues (such as carbon dioxide displacement). Recent marine biotechnology has focused largely on marine biomolecules, especially proteins, that may have uses in medicine or engineering. Marine environments are home to many exotic biological materials that may inspire biomimetic materials.
Through constant monitoring of the ocean, there have been discoveries of marine life which could be used to create remedies for certain diseases such as cancer and leukemia. In addition, Ziconotide, an approved drug used to treat pain, was derived from the venom of a marine cone snail.
Related fields
Marine biology is a branch of biology. It is closely linked to oceanography, especially biological oceanography, and may be regarded as a sub-field of marine science. It also encompasses many ideas from ecology. Fisheries science and marine conservation can be considered partial offshoots of marine biology (as well as environmental studies). Marine chemistry, physical oceanography and atmospheric sciences are also closely related to this field.
Distribution factors
An active research topic in marine biology is to discover and map the life cycles of various species and where they spend their time. Technologies that aid in this discovery include pop-up satellite archival tags, acoustic tags, and a variety of other data loggers. Marine biologists study how the ocean currents, tides and many other oceanic factors affect ocean life forms, including their growth, distribution and well-being. This has only recently become technically feasible with advances in GPS and newer underwater visual devices.
Most ocean life breeds in specific places, nests in others, spends time as juveniles in still others, and spends maturity in yet others. Scientists know little about where many species spend different parts of their life cycles, especially in the infant and juvenile years. For example, it is still largely unknown where juvenile sea turtles and some sharks in the first year of their life travel. Recent advances in underwater tracking devices are illuminating what we know about marine organisms that live at great ocean depths. The information that pop-up satellite archival tags provide aids in setting fishing closures for certain times of the year and in the development of marine protected areas. This data is important to both scientists and fishermen because they are discovering that, by restricting commercial fishing in one small area, they can have a large impact in maintaining a healthy fish population in a much larger area.
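Tag-derived datasets of this kind are essentially time series of position and depth records. The following is a minimal, hypothetical Python sketch of how such records might be summarized into habitat-use counts; the record fields, the 200 m shelf-edge cutoff, and all names are illustrative assumptions, not any real tag vendor's format or API.

from collections import Counter
from dataclasses import dataclass

@dataclass
class TagRecord:
    # Hypothetical pop-up archival tag record; fields are illustrative.
    timestamp: float   # seconds since tag deployment
    lat: float         # decimal degrees
    lon: float         # decimal degrees
    depth_m: float     # meters below the surface

def habitat_use(records: list[TagRecord], shallow_cutoff_m: float = 200.0) -> Counter:
    # Count records in shallow vs. deep water. The 200 m cutoff loosely
    # marks the continental shelf edge; it is an assumption made for
    # illustration, not a standard from the tagging literature.
    zones = Counter()
    for r in records:
        zones["shallow" if r.depth_m <= shallow_cutoff_m else "deep"] += 1
    return zones

# Example with two synthetic records:
recs = [TagRecord(0, 20.1, -156.0, 35.0), TagRecord(3600, 20.2, -156.1, 450.0)]
print(habitat_use(recs))  # Counter({'shallow': 1, 'deep': 1})

Aggregated over months of records, simple summaries like these are one way such tag data could inform when and where seasonal fishing closures would protect the most animals.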
History
The study of marine biology dates to Aristotle (384–322 BC), who made many observations of life in the sea around Lesbos, laying the foundation for many future discoveries. In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. The British naturalist Edward Forbes (1815–1854) is generally regarded as the founder of the science of marine biology. The pace of oceanographic and marine biology studies quickly accelerated during the course of the 19th century.
The observations made in the first studies of marine biology fueled the Age of Discovery and exploration that followed. During this time, a vast amount of knowledge was gained about the life that exists in the oceans of the world. Many voyages contributed significantly to this pool of knowledge. Among the most significant were the voyages of HMS Beagle, during which Charles Darwin developed his theories of evolution and of the formation of coral reefs. Another important expedition was undertaken by HMS Challenger, whose findings of unexpectedly high species diversity among deep-sea fauna stimulated much theorizing by population ecologists about how such varieties of life could be maintained in what was thought to be such a hostile environment. This era was important for the history of marine biology, but naturalists were still limited in their studies because they lacked technology that would allow them to adequately examine species that lived in deep parts of the oceans.
The creation of marine laboratories was important because it allowed marine biologists to conduct research and process their specimens from expeditions. The oldest marine laboratory in the world, the marine station at Concarneau, France, was founded by the Collège de France in 1859. In the United States, the Scripps Institution of Oceanography dates back to 1903, while the prominent Woods Hole Oceanographic Institution was founded in 1930. The development of technology such as sound navigation and ranging (sonar), scuba diving gear, submersibles and remotely operated vehicles allowed marine biologists to discover and explore life in deep oceans that was once thought not to exist. Public interest in the subject continued to develop in the post-war years with the publication of Rachel Carson's sea trilogy (1941–1955).
See also
Acoustic ecology
Aquaculture
Bathymetry
Biological oceanography
Effects of climate change on oceans
Freshwater biology
Modular ocean model
Oceanic basin
Oceanic climate
Phycology
Lists
Glossary of ecology
Index of biology articles
Large marine ecosystem
List of ecologists
List of marine biologists
List of marine ecoregions (WWF)
Outline of biology
Outline of ecology
References
External links
Smithsonian Ocean Portal
Marine Conservation Society
Marine Ecology – an evolutionary perspective
Free special issue: Marine Biology in Time and Space
Creatures of the deep ocean – National Geographic documentary, 2010.
Exploris
Freshwater and Marine Image Bank – From the University of Washington Library
Marine Training Portal – Portal grouping training initiatives in the field of Marine Biology
Biological oceanography
Fisheries science
Oceanographical terminology
Jevons paradox
In economics, the Jevons paradox (sometimes the Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced. Governments, both historical and modern, typically expect that energy efficiency gains will lower energy consumption, rather than expecting the Jevons paradox.
In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.
The issue has been re-examined by modern economists studying consumption rebound effects from improved energy efficiency. In addition to reducing the amount needed for a given use, improved efficiency also lowers the relative cost of using a resource, which increases the quantity demanded. This may counteract (to some extent) the reduction in use from improved efficiency. Additionally, improved efficiency increases real incomes and accelerates economic growth, further increasing the demand for resources. The Jevons paradox occurs when the effect from increased demand predominates, and the improved efficiency results in a faster rate of resource utilization.
Considerable debate exists about the size of the rebound in energy efficiency and the relevance of the Jevons paradox to energy conservation. Some dismiss the effect, while others worry that it may be self-defeating to pursue sustainability by increasing energy efficiency. Some environmental economists have proposed that efficiency gains be coupled with conservation policies that keep the cost of use the same (or higher) to avoid the Jevons paradox. Conservation policies that increase cost of use (such as cap and trade or green taxes) can be used to control the rebound effect.
History
The Jevons paradox was first described by the English economist William Stanley Jevons in his 1865 book The Coal Question. Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine from Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use, writing: "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth."
At that time, many in Britain worried that coal reserves were rapidly dwindling, but some experts opined that improving technology would reduce coal consumption. Jevons argued that this view was incorrect, as further increases in efficiency would tend to increase the use of coal. Hence, improving technology would tend to increase the rate at which England's coal deposits were being depleted, and could not be relied upon to solve the problem.
Although Jevons originally focused on coal, the concept has since been extended to other resources, e.g., water usage. The Jevons paradox is also found in socio-hydrology, in the safe development paradox called the reservoir effect, where construction of a reservoir to reduce the risk of water shortage can instead exacerbate that risk, as increased water availability leads to more development and hence more water consumption.
Cause
Economists have observed that consumers tend to travel more when their cars are more fuel efficient, causing a 'rebound' in the demand for fuel. An increase in the efficiency with which a resource (e.g. fuel) is used causes a decrease in the cost of using that resource when measured in terms of what it can achieve (e.g. travel). Generally speaking, a decrease in the cost (or price) of a good or service will increase the quantity demanded (the law of demand). With a lower cost for travel, consumers will travel more, increasing the demand for fuel. This increase in demand is known as the rebound effect, and it may or may not be large enough to offset the original drop in fuel use from the increased efficiency. The Jevons paradox occurs when the rebound effect is greater than 100%, exceeding the original efficiency gains.
The size of the direct rebound effect is dependent on the price elasticity of demand for the good. In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If in response, the amount of travel purchased more than doubles (i.e. demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur. If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.
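The elasticity condition can be made concrete with a small constant-elasticity sketch; the functional form and symbols below are illustrative assumptions, not part of Jevons's original argument. Let fuel efficiency be \varepsilon (travel per unit of fuel) and the fuel price p_F, so the effective price of travel is p_S = p_F / \varepsilon. Assume travel demand S \propto p_S^{-\eta} for some price elasticity \eta > 0. Fuel use is then

    F = \frac{S}{\varepsilon} \propto p_F^{-\eta} \, \varepsilon^{\eta - 1}.

Holding p_F fixed, doubling efficiency multiplies fuel use by 2^{\eta - 1}: fuel use falls when demand is inelastic (\eta < 1), is unchanged when \eta = 1, and rises (the Jevons paradox) when demand is elastic (\eta > 1), matching the travel example above.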
Khazzoom–Brookes postulate
In the 1980s, economists Daniel Khazzoom and Leonard Brookes revisited the Jevons paradox for the case of society's energy use. Brookes, then chief economist at the UK Atomic Energy Authority, argued that attempts to reduce energy consumption by increasing energy efficiency would simply raise demand for energy in the economy as a whole. Khazzoom focused on the narrower point that the potential for rebound was ignored in mandatory performance standards for domestic appliances being set by the California Energy Commission.
In 1992, the economist Harry Saunders dubbed the hypothesis that improvements in energy efficiency work to increase (rather than decrease) energy consumption the Khazzoom–Brookes postulate, and argued that the hypothesis is broadly supported by neoclassical growth theory (the mainstream economic theory of capital accumulation, technological progress and long-run economic growth). Saunders showed that the Khazzoom–Brookes postulate occurs in the neoclassical growth model under a wide range of assumptions.
According to Saunders, increased energy efficiency tends to increase energy consumption by two means. First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption. That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that taking into account both microeconomic and macroeconomic effects, the technological progress that improves energy efficiency will tend to increase overall energy use.
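As a rough numerical sketch of this micro/macro distinction (the decomposition into an engineering saving and a rebound fraction is standard in this literature, but the function and the numbers below are invented for illustration), total rebound can be treated as the fraction of the engineering saving eaten back by direct and economy-wide responses, with a value above 1.0 corresponding to the Jevons paradox:

def energy_after_efficiency_gain(baseline_energy: float,
                                 efficiency_gain: float,
                                 total_rebound: float) -> float:
    # efficiency_gain: fraction of energy the improvement would save with
    #   no behavioral response (0.2 = a 20% engineering saving).
    # total_rebound: fraction of that saving eaten back by direct plus
    #   economy-wide responses; values above 1.0 mean the Jevons paradox.
    engineering_saving = baseline_energy * efficiency_gain
    realized_saving = engineering_saving * (1.0 - total_rebound)
    return baseline_energy - realized_saving

print(energy_after_efficiency_gain(100.0, 0.2, 0.3))   # 86.0: net saving
print(energy_after_efficiency_gain(100.0, 0.2, 1.25))  # 105.0: Jevons paradox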
Energy conservation policy
Jevons warned that fuel efficiency gains tend to increase fuel use. However, this does not imply that improved fuel efficiency is worthless if the Jevons paradox occurs; higher fuel efficiency enables greater production and a higher material quality of life. For example, a more efficient steam engine allowed the cheaper transport of goods and people that contributed to the Industrial Revolution. Nonetheless, if the Khazzoom–Brookes postulate is correct, increased fuel efficiency, by itself, will not reduce the rate of depletion of fossil fuels.
There is considerable debate about whether the Khazzoom–Brookes postulate is correct, and about the relevance of the Jevons paradox to energy conservation policy. Most governments, environmentalists and NGOs pursue policies that improve efficiency, holding that these policies will lower resource consumption and reduce environmental problems. Others, including many environmental economists, doubt this 'efficiency strategy' towards sustainability, and worry that efficiency gains may in fact lead to higher production and consumption. They hold that for resource use to fall, efficiency gains should be coupled with other policies that limit resource use. However, other environmental economists argue that, while the Jevons paradox may occur in some situations, the empirical evidence for its widespread applicability is limited.
The Jevons paradox is sometimes used to argue that energy conservation efforts are futile, for example, that more efficient use of oil will lead to increased demand, and will not slow the arrival or the effects of peak oil. This argument is usually presented as a reason not to enact environmental policies or pursue fuel efficiency (e.g. if cars are more efficient, it will simply lead to more driving). Several points have been raised against this argument. First, in the context of a mature market such as for oil in developed countries, the direct rebound effect is usually small, and so increased fuel efficiency usually reduces resource use, other conditions remaining constant. Second, even if increased efficiency does not reduce the total amount of fuel used, there remain other benefits associated with improved efficiency. For example, increased fuel efficiency may mitigate the price increases, shortages and disruptions in the global economy associated with crude oil depletion. Third, environmental economists have pointed out that fuel use will unambiguously decrease if increased efficiency is coupled with an intervention (e.g. a fuel tax) that keeps the cost of fuel use the same or higher.
The Jevons paradox indicates that increased efficiency by itself may not reduce fuel use, and that sustainable energy policy must rely on other types of government interventions as well. As the imposition of conservation standards or other government interventions that increase cost-of-use do not display the Jevons paradox, they can be used to control the rebound effect. To ensure that efficiency-enhancing technological improvements reduce fuel use, efficiency gains can be paired with government intervention that reduces demand (e.g. green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation." By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented.
Other examples
Agriculture
Increasing the yield of a crop, such as wheat, for a given area will reduce the area required to achieve the same total yield. However, increasing efficiency may make it more profitable to grow wheat and lead farmers to convert land to the production of wheat, thereby increasing land use instead.
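A hedged numerical illustration, with invented figures: at a yield of 3 t/ha, growing 300 t of wheat requires A = 300 / 3 = 100 ha. If improved varieties double the yield to 6 t/ha, the same 300 t needs only 50 ha; but if the higher profitability leads farmers to plant, say, 120 ha of wheat, total land under wheat rises even though land use per tonne has halved.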
See also
Andy and Bill's law, new software will always consume any increase in computing power that new hardware can provide
Diminishing returns
Downs–Thomson paradox, increasing road capacity can make traffic congestion worse
Tragedy of the commons, a phenomenon in which common resources to which access is not regulated tend to become depleted
Wirth's law, faster hardware can trigger the development of less-efficient software
Dutch Disease, strong revenue from a dominant sector renders other sectors uncompetitive and starves them
References
Further reading
Eponymous paradoxes
Paradoxes in economics
Industrial ecology
Energy policy
Energy conservation
Environmental social science concepts
Polycrisis
The term polycrisis, also referred to as a metacrisis or permacrisis, describes a complex situation where multiple, interconnected crises converge and amplify each other, resulting in a predicament which is difficult to manage or resolve. Unlike single crises which may have clear causes and solutions, a polycrisis involves overlapping and interdependent issues, making it a more pervasive and enduring state of instability. This concept reflects growing concerns about the sustainability and viability of contemporary socio-economic, political, and ecological systems.
The concept was coined in the 1990s but became popular in the 2020s to refer to the effects of the COVID-19 pandemic, war, surging debt levels, inflation, climate change, resource depletion, growing inequality, artificial intelligence and synthetic biology, and democratic backsliding.
Critics of the term have characterized it as a buzzword or a distraction from more concrete causes of the crises.
Background
The idea of a polycrisis has its roots in the recognition that modern societies face not just isolated problems but a series of interconnected challenges that could lead to cascading failures if not addressed as such. The term emphasizes the multifaceted nature of these crises, which can include economic inequality, political instability, environmental degradation, and social unrest, all reinforcing one another. The interconnectedness of these crises means that solutions in one area can often lead to unintended consequences in another, creating a feedback loop that exacerbates the overall situation.
The concept of polycrisis captures the complexity and interconnectedness of the challenges facing humanity in the 21st century. It underscores the need for new ways of thinking and acting that go beyond traditional problem-solving methods. As humanity grapples with multiple, overlapping crises, the recognition of polycrisis offers both a warning and an opportunity to forge a more sustainable and resilient future.
Components
Ecological overshoot & limits to growth
The concept of polycrisis aligns with the warnings issued in the Limits to Growth report, which suggested that unchecked economic growth and resource consumption would eventually surpass the Earth's carrying capacity. Human ecological overshoot—using resources faster than they can be replenished—has led to environmental degradation, climate change, and biodiversity loss, which in turn threaten the stability and continuity of human societies.
Socio-political instability
During the late 20th and early 21st centuries, it has become increasingly evident that liberal democracies exhibit stark internal contradictions, such as egalitarian ideals coexisting with imperialistic practices, which undermine their legitimacy as leaders of the "rules-based" liberal international order. The rise of right-wing populism and the erosion of the Western social contract reflect growing popular dissatisfaction with the political and economic systems in the West. These political shifts are often fueled by economic inequalities, perceived threats to national identity and social status, and disillusionment with traditional political elites.
Technological & economic disparities
The concentration of wealth and power among a small elite, as highlighted in works like Douglas Rushkoff's Survival of the Richest, contributes to the polycrisis by exacerbating social inequalities and undermining potential collective action to address the issues. The increasing gap between the wealthy and the rest of society raises questions about the sustainability of current economic models and the fairness of technological advancements that primarily benefit the elite.
Philosophical & existential dimensions
The polycrisis also involves a deeper, philosophical reckoning with humanity's place in the world. As articulated in Vanessa Machado de Oliveira's Hospicing Modernity, there is a small but growing awareness of the limits of human control and the need to accept ecological and biological realities. This fundamentally challenges the anthropocentric and individualistic narratives that have historically underpinned Western thought.
Responses & criticism
Critics of the polycrisis narrative argue that it can lead to fatalism and inaction, suggesting instead a focus on practical, incremental changes that can build resilience and adaptability.
Various thought leaders and figureheads in the technology space have aligned themselves with effective accelerationism and have forcefully critiqued concepts related to the polycrisis, arguing that the way to solve most, if not all, of the problems facing humanity is through further economic growth and the acceleration of tech development and deployment. In 2023, venture capitalist and tech magnate Marc Andreessen published the Techno-Optimist Manifesto, arguing that technology is what creates wealth and happiness.
Various scholars and thought leaders have proposed different frameworks for understanding and responding to the polycrisis. Some advocate for a radical rethinking of modernity and a transition towards more sustainable and equitable ways of living. This includes adopting ecological wisdom from Indigenous cultures, reimagining economic systems, and embracing a deeper connection with the natural world.
See also
References
Crisis