Advertising Topic Page
Any of various methods used by a company to increase the sales of its products or services or to promote a brand name. Advertising is also used by organizations and individuals to communicate an idea or image, to recruit staff, to publicize an event, or to locate an item or commodity.
Aesthetics, or aesthetic, is often used as a synonym for art in general. But then we might ask what art is. The origin of the word is helpful here. In ancient Greek, aisthesis (the root of ‘aesthetic’) means ‘feeling’ and corresponds to the German, Gefühl, a term which Immanuel Kant (1724-1804) used to evoke the idea of inner feeling. Art would then be the sphere where inner feeling is evoked, rather than being the sensations evoked by an external source.
Calligraphy Topic Page
[Gr.,=beautiful writing], skilled penmanship practiced as a fine art. See also inscription; paleography.
In Europe two sorts of handwriting came into being very early. Cursive script was used for letters and records, while far more polished writing styles, called uncials, were used for literary works. Both styles can be seen in papyrus fragments from the 4th cent. B.C. After the 1st cent. A.D. the development of the half uncial or minuscule letter from the Roman capital gave rise to an extraordinarily beautiful and long-lasting calligraphy.
Each color has three characteristics (see Fig. 1):
1. It has a hue or color name, and is part of a color group such as blue or blue-green, or orange or yellow-orange.
2. It has a value or lightness, so that a pale yellow would have high value or lightness, and as the lightness is reduced the color becomes darker until it eventually becomes black.
3. It has a saturation, which is also called the chroma or intensity. A true red would have full saturation, and as the saturation is reduced the color becomes lighter until it eventually becomes white.
Color is usually described in terms of a fixed set of color relationships. For an artist, the simplest color relationship is represented in the color wheel of 12 colors made by mixing the three primary colors of red, yellow and blue (see Fig. 2). Each is placed one-third of the distance around the color wheel. These represent the colors that are purest and cannot be created by mixing other colors.
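The hue–value–saturation description above maps directly onto the HSV color model used in computing. Below is a minimal Python illustration using the standard library's colorsys module (the specific RGB triples are invented examples, not values from the text):

```python
import colorsys  # standard library: RGB <-> HSV conversions

pure_red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)  # (hue=0.0, saturation=1.0, value=1.0)
tint     = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)  # same hue, lower saturation: a paler red
shade    = colorsys.rgb_to_hsv(0.5, 0.0, 0.0)  # same hue, lower value: a darker red
print(pure_red, tint, shade)
```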
In art, the arrangement of elements within an artwork to give a desired effect, often described as pleasing (unified and appealing to the eye) or expressive (intended to evoke a particular mood, feeling, or idea). The elements of pleasing compositions are usually held together by placing them in an imaginary plan, either a quadrant, sequential, or asymmetrical form. Linear compositions arrange the elements along a diagonal, or a system of radiating or curving lines, or in the layout of a simple geometrical figure such as the triangle.
The computer deeply affects the way today's art is produced, disseminated, and valued. Until recently, images were created through acts of human perception either through skills based on eye-hand coordination or through the lens of copying processes such as photography, cinematic film, or video where what is seen is recorded through various chemical or electronic processes. Increasingly, however, images reside only in the database of a computer, causing a break with the visual means of representation available to artists for creating art since the Renaissance.
Use of computers to display and manipulate information in pictorial form. Input may be achieved by scanning an image, by drawing with a mouse or stylus on a graphics tablet, or by drawing directly on the screen with a light pen.
Computer Graphics: Principles
Computer graphics consists of two steps: describing an image and then displaying it. There are a number of other issues that also come into play with computer graphics (or simply graphics). Is the image an individual one or part of a series of images in an animation? Is the image to be static, or can a computer user interact with a software package to change it? Is the goal of the image to be highly realistic because it will be viewed at length, or is it just to give a quick impression of the scene?
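One way to make that two-step split concrete is to treat the description as plain data and the display step as rasterization. The following toy Python sketch is only an illustration (the scene format and output file name are invented; the plain-text PPM format is used because it needs no libraries):

```python
# Step 1: describe the image -- here, a red circle on a white background
scene = {"width": 64, "height": 64, "circle": {"cx": 32, "cy": 32, "r": 20}}

# Step 2: display it -- rasterize the description into pixels (plain-text PPM)
with open("scene.ppm", "w") as f:
    f.write(f"P3\n{scene['width']} {scene['height']}\n255\n")
    c = scene["circle"]
    for y in range(scene["height"]):
        for x in range(scene["width"]):
            inside = (x - c["cx"]) ** 2 + (y - c["cy"]) ** 2 <= c["r"] ** 2
            f.write("255 0 0\n" if inside else "255 255 255\n")
```

The same description could equally be handed to a different display step (an interactive viewer, or each frame of an animation), which is exactly the separation the passage describes.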
From the 1860s with the creation of the first advertising agencies, to the 1880s when the LINOTYPE and MONOTYPE typesetting machines were developed, the US established a tradition of creative and technical ingenuity that continued throughout the 20th c. The ideals of the ARTS AND CRAFTS MOVEMENT in England were embraced by four great US type designers and printers: Daniel Berkeley UPDIKE, Frederic W. GOUDY, Bruce ROGERS and William Addison DWIGGINS.
Any type of picture or decoration used in conjunction with a text to embellish its appearance or to clarify its meaning. Illustration is as old as writing, with both originating in the pictograph. With the advent of printing, the art of hand-painted illumination declined as a means of book illustration.
Means of producing reproductions of written material or images in multiple copies. There are four traditional types of printing: relief printing (with which this article is mainly concerned), intaglio, lithography, and screen process printing. Relief printing encompasses type, stereotype, electrotype, and letterpress. Flexographic printing is a form of rotary letterpress printing using flexible rubber plates and rapid-drying inks.
The terms font and typeface are not quite synonyms. To a professional typesetter, a font is a specific typeface at a specific size. In computing, however, the distinction is often lost because the user can resize any typeface at will. But for the purposes of this article, a typeface is a character set having a particular styled appearance, regardless of size or attributes such as italic or bold. A typefont, or just font, is a typeface of specific size and attribute, such as 12-point italic Helvetica. A point is 1/72 of an inch. Common typefonts used with personal computers are the italic and bold variants of the Times Roman, Arial, Chicago, and Courier New typefaces, but there are literally hundreds of specialized fonts that can be added to the menu of those available for use with word processors, Web browsers, and other software.
The design of letterforms for use as typefaces, and the selection of typefaces for typeset documents. Typefaces are designed and used for a multitude of purposes (eg books, newspapers, stationery, handbills) and for use on various kinds of typesetting equipment, in both print and non-print media (such as film and television). | <urn:uuid:87eb9717-7cb3-4f6c-8592-72846b331f43> | {
"dump": "CC-MAIN-2020-45",
"url": "https://library.ccis.edu/design/background",
"date": "2020-10-19T23:47:27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00037.warc.gz",
"language": "en",
"language_score": 0.9488667249679565,
"token_count": 1471,
"score": 3.15625,
"int_score": 3
} |
The term “Impressionism” was coined in 1874, inspired by Claude Monet’s famous painting Impression, Sunrise. Many years later, when he was around 65 years old, the painter started experiencing changes in the way he perceived colors – they appeared to be less and less intense.
In 1912, Monet was diagnosed with a nuclear cataract in both eyes. The condition was reflected in his work: his paintings showed a change, with the whites, greens and blues shifting towards “muddier” purple and yellow tones.
Some of his paintings of water lilies and willows, completed between 1918 and 1922, perfectly manifest the effect of his eye condition. In fact, after 1915, Monet’s work became more abstract, with a pronounced use of red and yellow tones placed on the canvas with larger brush strokes.
His recognizable sense of atmosphere and light disappeared, as did the light blues typical of his earlier work. He complained of perceiving reds as muddy, dull pinks, and other objects as yellow.
In 1913 Monet visited the German ophthalmologist Richard Liebreich, the chair of ophthalmology at St Thomas’ Hospital. The doctor had a keen interest in art and had published an article on the effect of eye disease on the painters Turner and Mulready.
Liebreich recommended cataract surgery for the right eye but the painter refused, arguing, according to Monet or the triumph of Impressionism by D. Wildenstein (2010): “I prefer to make the most of my poor sight and even give up painting if necessary, but at least be able to see a little of these things that I love.”
Unfortunately, he grew very unhappy with his works, saying that they were becoming darker and darker. He tried his best, keeping a strict order on his palette and labeling his tubes of paint, but his condition was worsening.
In a letter dated August 11, 1922 to his friend G. or J. Bernheim-Jeune, he wrote: “To think I was getting on so well, more absorbed than I’ve ever been and expecting to achieve something, but I was forced to change my tune and give up a lot of promising beginnings and abandon the rest; and on top of that, my poor eyesight makes me see everything in a complete fog. It’s very beautiful all the same, and it’s this which I’d love to have been able to convey. All in all, I am very unhappy.”
Even though Monet was very skeptical of cataract surgery, and frightened by its unsuccessful results for his contemporaries Mary Cassatt and Honoré Daumier, surgery became an inevitable solution as his condition progressed.
In 1923, at the age of 82, the painter finally underwent two eye surgeries. Initially, after the operation, Monet was very disappointed and depressed. He was supposed to rest, but for him resting meant losing time at the cost of his art.
In an act of desperation, he tried to rip the bandages off his eyes. In a letter to his surgeon Charles Coutela, published in The Artist’s Eyes by Michael Marmor and James Ravin (2009), Monet lamented:
“That’s the greatest blow I could have had and it makes me sorry that I ever decided to go ahead with that fatal operation. Excuse me for being so frank and allow me to say that I think it’s criminal to have placed me in such a predicament.”
As part of his treatment, Monet was provided with a new pair of spectacles specialized for cataracts. However, Monet was still unhappy with the outcome and found it hard to adjust to the new lenses.
His art was changing, and he wrote to his friend, Marc Elder: “in the end, I was forced to recognize that I was spoiling [the paintings], that I was no longer capable of doing anything good. So I destroyed several of my panels. Now I’m almost blind, and I have to abandon work altogether. It’s hard, but that’s the way it is: a sad end despite my good health!”
Some individuals have reported that after the removal of their congenital cataract, they perceive ultraviolet light which is otherwise invisible to those with normal, healthy eyes.
As suggested in a 2002 article published in The Guardian, Claude Monet’s surgery might have actually allowed him to see the white-blue or white-violet color of UV light, and possibly that was the reason why his paintings looked so different after the medical treatment. | <urn:uuid:24e8c883-7bcc-4ef7-a3f1-bdea40e53363> | {
"dump": "CC-MAIN-2019-18",
"url": "https://www.thevintagenews.com/2018/12/08/claude-monet-cataract/",
"date": "2019-04-23T04:21:27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578586680.51/warc/CC-MAIN-20190423035013-20190423061013-00137.warc.gz",
"language": "en",
"language_score": 0.9864642024040222,
"token_count": 975,
"score": 3.796875,
"int_score": 4
} |
In some years in August-September, you can observe schools of small sharp-nosed fishes swimming unhurriedly at the very surface of the water. This is the so-called Japanese halfbeak. Unlike its relatives living in tropical and subtropical waters, it inhabits moderately warm waters, and in summer it visits Southern Primorye. The fish received its name because of its unevenly developed jaws: the upper jaw is short and the lower one considerably elongated. Saving itself from predators, the fish can jump to the surface, leaving only the lower part of its caudal fin in the water, and rapidly glide to cover large distances. Tropical species close to Hyporhamphus sajori can even glide under water. The Japanese species grows up to 30 cm long. The fish reproduce off the shores of Japan and Korea at temperatures ranging from 18 to 25°C. The eggs are characterized by long sticky thread-like growths, with which they attach themselves to underwater vegetation. The flesh of this species is rather tasty, but the species’ commercial significance is not great. | <urn:uuid:eaac2c4e-803d-47e8-be3c-acaec7aba5d7> | {
"dump": "CC-MAIN-2017-47",
"url": "http://www.fegi.ru/prim/sea/fish1_13.htm",
"date": "2017-11-18T15:47:51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804976.22/warc/CC-MAIN-20171118151819-20171118171819-00445.warc.gz",
"language": "en",
"language_score": 0.9559028744697571,
"token_count": 230,
"score": 3.03125,
"int_score": 3
} |
The Importance of Socialization for Mental Health
Socialization is a fundamental aspect of the well-being and mental health of individuals. Contact with other people, social interaction, and participation in group activities are essential for each individual's emotional and psychological development.
In a world increasingly connected digitally, it is common for people to spend long periods isolated, whether due to remote work, online studies, or even lack of time. However, it is important to remember that humans are social beings by nature and need interaction with others to feel complete.
Socialization triggers a series of benefits for mental health. First, it helps to reduce stress and anxiety. By sharing experiences, emotions, and concerns with other people, it is possible to relieve emotional pressure and find mutual support.
Additionally, socialization contributes to increased self-esteem and self-confidence. By interacting with other people, we are able to develop social skills such as empathy, communication, and conflict resolution. These competencies are essential for building healthy relationships and improving self-image.
Socialization also stimulates the brain and prevents cognitive decline. By participating in conversations, debates, and group activities, we are constantly exercising our minds, which contributes to the maintenance of memory, concentration, and logical reasoning.
Furthermore, socialization promotes a sense of belonging and connection with the world around us. By engaging in social groups, whether they are family, friendship, or professional, we feel part of something larger, which brings meaning and purpose to our lives.
To ensure good mental health through socialization, it is important to seek different forms of social interaction. Participating in interest groups, practicing team sports, attending cultural events, and getting involved in community activities are some of the available options.
Moreover, it is crucial to be open to new experiences and to meet different people. The diversity of thoughts, cultures, and perspectives enriches our relationships and allows us to learn and grow as individuals.
Finally, it is important to remember that socialization is not limited only to the physical world. Social networks and digital platforms can also be used as tools for social interaction, as long as they are used in a balanced and healthy manner.
In summary, socialization plays a fundamental role in people’s mental health. It contributes to stress relief, increased self-esteem, cognitive stimulation, and a sense of belonging. Therefore, it is essential to seek forms of social interaction and be open to new experiences, both in the physical and virtual world, to ensure an emotionally balanced and healthy life. | <urn:uuid:9c4da205-b1be-4fec-8c9d-fbdc2276639e> | {
"dump": "CC-MAIN-2023-40",
"url": "https://yourhealthdiary.com/the-importance-of-socialization-for-mental-health/",
"date": "2023-10-01T03:13:48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510734.55/warc/CC-MAIN-20231001005750-20231001035750-00846.warc.gz",
"language": "en",
"language_score": 0.9491962194442749,
"token_count": 514,
"score": 3.8125,
"int_score": 4
} |
Posted: 23 Dec, 2020
Design a data structure that represents two stacks using a single array common to both. The data structure should support the following operations:
push1(NUM) - Push ‘NUM’ into stack1.
push2(NUM) - Push ‘NUM’ into stack2.
pop1() - Pop the top element from stack1 and return the popped element; in case of underflow return -1.
pop2() - Pop the top element from stack2 and return the popped element; in case of underflow return -1.
There are 2 types of queries in the input:
Type 1 - These queries correspond to a push operation.
Type 2 - These queries correspond to a pop operation.
1. You are given the size of the array.
2. You need to perform push and pop operations in such a way that we are able to push elements in the stack until there is some empty space available in the array.
3. While performing Push operations, do nothing in the situation of the overflow of the stack.
The first line of the input contains two space-separated integers 'S' and 'Q', denoting the size of the array and the number of operations to be performed respectively.
The next 'Q' lines contain operations, one per line. Each line begins with two integers ‘type’ and ‘stackNo’, denoting the type of query as mentioned above and the stack number on which this operation is going to be performed.
If the ‘type’ is 1 then it will be followed by one more integer ‘NUM’ denoting the element needed to be pushed to stack ‘stackNo’.
For each operation of Type 2, print an integer on a single line - popped element from the stack, if the stack is already empty print -1.
You do not need to print anything, it has already been taken care of. Just implement the given function.
0 <= S <= 10^5
1 <= Q <= 5*10^5
1 <= type, stackNo <= 2
0 <= NUM <= 10^9
Time Limit: 1 sec.
- To utilise the array space optimally, we start the tops of the two stacks from opposite extremes of the array.
- Let Arr be the array used by both stacks, with size equal to ‘s’.
- Let’s say top1 and top2 point to the tops of stack1 and stack2 respectively.
- As the size of the array is ‘s’, we assign -1 to top1 and ‘s’ to top2 (keeping 0-based indexing in mind), which denotes that both stacks are currently empty.
- In order to complete push operations (operations of Type 1) we do as follows:
- For push1(NUM): check if Arr has enough space to push an integer, i.e. whether top1 + 1 < top2. If it does, we increment top1 by 1 and assign NUM to Arr[top1].
- If there is insufficient space for pushing another element into stack1, which happens when top1 + 1 == top2, we do nothing and just return.
- For push2(NUM): check if Arr has enough space to push an integer, i.e. whether top2 - 1 > top1. If it does, we decrement top2 by 1 and assign NUM to Arr[top2].
- If there is insufficient space for pushing another element into stack2, which happens when top2 - 1 == top1, we do nothing and just return.
- In order to complete pop operations (operations of Type 2) we do as follows:
- For pop1(): first we check if stack1 is empty, i.e. whether top1 is -1. If it is, that is the underflow condition, so we just return -1.
- If stack1 is not empty, we decrement top1 by 1 and return the value of Arr[top1 + 1].
- For pop2(): first we check if stack2 is empty, i.e. whether top2 equals ‘s’. If it is, that is the underflow condition, so we just return -1.
- If stack2 is not empty, we increment top2 by 1 and return the value of Arr[top2 - 1].
- This way we can utilise all of the empty space in the array available at any instant of time; a minimal implementation sketch follows below.
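Below is a minimal Python sketch of this approach. The class and method names mirror the problem statement (push1/push2/pop1/pop2), and the constructor argument corresponds to the array size ‘S’:

```python
class TwoStacks:
    def __init__(self, s):
        self.arr = [0] * s
        self.top1 = -1      # stack1 grows rightward from index 0
        self.top2 = s       # stack2 grows leftward from index s - 1

    def push1(self, num):
        if self.top1 + 1 < self.top2:   # free space remains between the two tops
            self.top1 += 1
            self.arr[self.top1] = num   # on overflow: do nothing, per the problem

    def push2(self, num):
        if self.top2 - 1 > self.top1:
            self.top2 -= 1
            self.arr[self.top2] = num

    def pop1(self):
        if self.top1 == -1:             # underflow
            return -1
        self.top1 -= 1
        return self.arr[self.top1 + 1]

    def pop2(self):
        if self.top2 == len(self.arr):  # underflow
            return -1
        self.top2 += 1
        return self.arr[self.top2 - 1]


st = TwoStacks(5)
st.push1(10)
st.push2(20)
print(st.pop1(), st.pop2(), st.pop1())   # 10 20 -1
```

Every operation touches only the two top pointers, so each push and pop runs in O(1) time while the two stacks share whatever free space the array has.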
"dump": "CC-MAIN-2022-21",
"url": "https://www.codingninjas.com/codestudio/problem-details/two-stacks_983634",
"date": "2022-05-28T20:58:44",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00177.warc.gz",
"language": "en",
"language_score": 0.8639861345291138,
"token_count": 1034,
"score": 3.703125,
"int_score": 4
} |
This research project speculates upon the notion of preparing bodies for the rigorous living conditions of outer space. The core elements behind this research project include the effects of the Anthropocene, the photo-allergenic, the human body and the mind. Extreme measures of the Anthropocene are moving the earth into an uninhabitable environment, and will therefore soon prompt extreme human adaptation. It will also force us to question the continuation of human existence on this planet. The effects of global warming will result in an increase of photo-allergenic humans, leaving their bodies unable to tolerate the sun’s rays. This solution poses the idea that humans may need to reside within the deep darkness of the remote cosmos. To do so requires the exploration and manipulation of the body and the mind in order to design procedures and training protocols necessary for preparing to live in space. Anthrolumino (human light) demonstrates how we can prepare for evacuating an overheated world. | <urn:uuid:0245a73a-899b-4e0c-824b-0431f04e8e29> | {
"dump": "CC-MAIN-2017-17",
"url": "http://artdes.monash.edu.au/interior/studentwork/2016/rebekah-purpura.html",
"date": "2017-04-28T08:09:04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00545-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.887326717376709,
"token_count": 193,
"score": 3.328125,
"int_score": 3
} |
This is always a confusing question because it is easy to get the words engine and train mixed up; many people assume you mean a train if you say steam engine. It was Thomas Savery who, in 1698, patented the first practical steam engine, built for pumping water out of mines. It was George Stephenson who later adapted the steam engine to rail transport and built the most influential early steam trains, also called locomotives, culminating in the Rocket of 1829.
Long before trains were made, the Greeks built wagonways. These fell out of use for more than fifteen centuries after the Greek empire fell, and wagons began to reappear in the Renaissance period.
Stephenson’s earliest locomotive designs focused on hauling coal from the mines, but in 1823 he joined forces with his son Robert Stephenson and with Edward Pease, and together they became the first locomotive builders in the world.
On 27th September 1825, George Stephenson was at the controls of a locomotive that made a journey of just under nine miles in two hours on the newly opened Stockton and Darlington line. Four years later, in 1829, Stephenson designed the Rocket, a steam locomotive capable of pulling many loads, including passengers. It was this train that stimulated the huge growth of the railway industry and was very influential in the development of the industrial revolution.
Stephenson went on to discover a rich coal seam while he was cutting the Clay Cross railway tunnel and formed the Clay Cross Company in 1837. The Company built houses for the miners and their families resulting in 400 new homes and an entire community of schools, shops, a church and a Mechanics Institute, all at the company’s expense. | <urn:uuid:f05bc818-9391-4590-aa2f-79843eee31e0> | {
"dump": "CC-MAIN-2021-04",
"url": "https://science.blurtit.com/24597/when-was-the-steam-train-invented-",
"date": "2021-01-24T22:20:35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703557462.87/warc/CC-MAIN-20210124204052-20210124234052-00443.warc.gz",
"language": "en",
"language_score": 0.9811932444572449,
"token_count": 342,
"score": 3.640625,
"int_score": 4
} |
At the ARPA-E Energy Innovation Summit back in 2017, we met a company called Marine BioEnergy that was exploring a concept involving robotic submarines farming the open ocean for kelp to create carbon-neutral biofuel. The concept had a lot going for it: Kelp sucks up carbon as it grows, so any carbon that it later releases into the atmosphere is balanced out as new plants take root. What’s more, kelp can be turned into energy-dense liquid fuel, for which there is already a massive distribution infrastructure. And most importantly, kelp grows in the ocean, meaning that we wouldn’t have to fertilize it, give it fresh water, or let it compete for land space like wind and solar farms do.
The tricky bit with kelp farming is that kelp needs three things to grow: sunlight, nutrients, and something to hold onto. This combination can only be found naturally along coastlines, placing severe limitations on how much kelp you’d be able to farm. But Marine BioEnergy’s idea is to farm kelp out in the open ocean instead, using robot submarines to cycle the kelp from daytime sunlight to nighttime nutrient-rich water hundreds of meters beneath the surface. Whether this depth cycling would actually work with kelp was the big open question, but some recent experiments have put that question to rest.
Kelp doesn’t naturally depth-cycle itself. On its own, kelp will pick some nice rock in a shallow bit of coast, stick itself there, and grow straight upwards towards the sunlight. In order to keep itself vertical, the kelp produces floaty gas-filled bladders called pneumatocysts at the base of each leaf. Unfortunately, things that are filled with gas tend to implode when they descend deeper into the water. Nobody knew what would happen if kelp were to be grown while depth-cycling it; would those pneumatocysts even be able to form, and if not, what would that do to the rest of the plant?
To figure this out, Marine BioEnergy partnered with the USC Wrigley Institute for Environmental Studies on Santa Catalina Island, off the coast of California, to depth-cycle some baby kelp. Rather than using robot submarines, they instead put together a kelp elevator, consisting of an automated winch tethered to the seafloor. Attached to the winch was a scaffold that supported lots of little baby kelp plants. Every evening, the elevator lowered them 80 meters down into nutrient-rich waters to feed. In the morning, the whole contraption was winched back up into the sunlight.
After 100 days and nights of winching up and down, the testing showed the kelp had adapted to its depth cycling and was growing rapidly, as President of Marine BioEnergy Cindy Wilcox described to us in an email.
“As it turns out, the depth-cycled bladders were long and narrow and filled with a liquid, not gas. For the first time, this showed that at least one species of kelp (macrocystis, otherwise known as Giant Kelp) thrives when depth-cycled between sunlight at the surface in the daytime and submerged to the nutrients below the thermocline at night.”
The depth-cycled kelp produced about four times the biomass of a control group of kelp that was not depth-cycled, and although the experiment ended at 100 days, the kelp wasn’t even full grown at that point. Seeing exactly how big the mature kelp gets, and how quickly, will be the next phase of the experiment.
Ultimately, the idea is to disconnect production of kelp from the shore, using solar-powered robot submarines to depth-cycle giant rafts of kelp out in the open ocean. Every 90 days, the kelp (which grows continuously) would get trimmed, bagged, and delivered to a pickup point to get converted into biofuel, while the robot subs drag the freshly shorn kelp back out to start the cycle over again.
Diagram showing the life cycle of an ocean kelp farm.Image: Marine BioEnergy
The actual conversion of kelp into fuel happens through existing commercial processes, either hydrothermal liquefaction or anaerobic digestion. About half the carbon in the kelp can be processed into gasoline or heating oil equivalents, while the other half is processed into methane that can be used to power the conversion process itself, or converted into hydrogen, or just sold off as a separate product. Since the carbon being released in this process is coming from the kelp itself, it’s not actually adding any carbon to the atmosphere, as Wilcox explains:
Our projections are that the kelp grown per drone submarine, over its 30-year life, is about 12,000 dry metric tons of biomass, which is over 200 times the mass of the drones and farm system. The energy contained in this biomass is over 160 times as great as that required to make and operate the drone and all associated farm equipment, including deployment and harvesting. When fuel from the kelp is burned, it releases CO2 that was absorbed from the environment only a few months before, and the carbon footprint of the farm itself is relatively minor since its mass is so small compared to the product. The vision is that, eventually, kelp-derived energy and organic feedstocks would provide all inputs for the relatively small mass of farm equipment and so no fossil fuels would be needed to sustain and grow the system beyond that point.
Replacing all liquid transportation fossil fuels used in the United States, Wilcox says, would require farming about 2.2 million square kilometers of kelp, representing less than 1.5% of the area of the Pacific. It may be a small percentage, but that’s still a lot of kelp, and some concerns have been raised about what effect that could have on other ocean life. According to Wilcox, the thermohaline circulation generates about 3.5 meters of nutrient upwelling across the entire ocean every year, and kelp farming would only suck up the nutrients in about 6 cm of that upwelling. Interestingly, by producing fertilizer as a biofuel byproduct, kelp could also be used to help bring deep-ocean nutrients back to land, a process that (as far as we know) currently only happens through volcanoes and salmon. “We expect that the main effect of the ocean farms will be to help reduce the damage from the human-caused flood of artificial nutrients that are making their way into the ocean,” Wilcox says, “but this needs more study.”
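As a quick back-of-envelope check on that last figure (the Pacific’s surface area is an outside number, not taken from the article):

```python
kelp_area_km2    = 2.2e6    # farmed area the article says would be needed
pacific_area_km2 = 165.2e6  # commonly cited surface area of the Pacific Ocean
print(kelp_area_km2 / pacific_area_km2)  # ~0.0133, i.e. about 1.3%, under the 1.5% claimed
```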
Over the next few years, Marine BioEnergy hopes to use funding from ARPA-E to prototype farm implements and perform large-scale ocean testing, after which the goal is to build the first farm and start producing kelp at scale. | <urn:uuid:fc98249f-b900-4c9a-86d0-ace8215e10d3> | {
"dump": "CC-MAIN-2023-50",
"url": "https://spectrum.ieee.org/robots-power-the-quest-to-farm-oceans-for-biofuel",
"date": "2023-11-28T17:36:25",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00354.warc.gz",
"language": "en",
"language_score": 0.9585439562797546,
"token_count": 1427,
"score": 3.625,
"int_score": 4
} |
On January 24, 1848, James Marshall, a mechanic working for mill owner John Sutter, discovered gold on the south fork of the American River in the Coloma Valley of California, northeast of Sacramento. Word of this discovery appeared in Eastern newspapers in the fall of 1848 and was further popularized in remarks made by James K. Polk in his farewell address. Soon, passenger-laden ships sailed from the East Coast, around Cape Horn, and northward to California. Within the first year, more than 80,000 prospectors, the Forty-Niners, arrived. San Francisco grew to a community of 20,000 in a few months. In 1848, the population of California was about 15,000. By 1852, California’s population topped 250,000. The ranks of new residents were swollen by sailors who jumped ship in San Francisco and headed to the gold fields, leaving their ships too short on crew to continue. California attracted many reputable people, but the first to go were the ones with the least to lose and the fewest responsibilities. The predictable result was a rise in crime. In San Francisco, a gang of immigrants from Australia known as the Sydney Ducks was particularly notorious, and led to the establishment of the San Francisco Committee of Vigilance in 1851. The Vigilantes hanged several men and forced a number of corrupt officials to resign. It was reestablished in 1856. In both cases, the committees dissolved themselves after a few months. The discovery of gold transformed California from a sparsely populated, distant region into an area ripe for statehood. The issue of admitting California as a free state — this was not cotton country — was one of the prime issues to be addressed in the Compromise of 1850. | <urn:uuid:e6a68873-f6e9-4542-a836-62f07c54753b> | {
"dump": "CC-MAIN-2020-34",
"url": "https://u-s-history.com/pages/h133.html",
"date": "2020-08-10T05:55:13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00579.warc.gz",
"language": "en",
"language_score": 0.9767383337020874,
"token_count": 354,
"score": 4.0625,
"int_score": 4
} |
Resources for teachers about conservation education in Canterbury, including field trip information that can be used inside and outside the classroom.
Aoraki/Mt Cook National Park is the ideal case study area for students investigating the geography of the South Island glaciated high country.
This Motukarara Nursery teaching resource allows students to explore the ecology of Canterbury's native plants and animals.
Ōtukaikino Wildlife Management Reserve is managed as a Living Memorial - Mau Mahara. Students can learn about wetlands and the restoration of Wilson’s Swamp, and find out about the cultural heritage of the area.
Find out about this school field trip that explores the cultural and natural significance of Peel Forest.
This school field trip investigates the natural landscape and threatened species that inhabit Pōhatu Marine Reserve.
This teaching resource is based on braided rivers in the Mackenzie basin, but the concepts can be applied to rivers anywhere in New Zealand.
View and download the River life: braided rivers in the MacKenzie Basin teaching resource.
This field trip explores Temple Basin, where students can learn about life forms within an Alpine Region and the history and current plans of Temple Basin and Arthur’s Pass National Park.
Getting involved - all regions | <urn:uuid:2bead39c-4a16-4d15-92da-5aac11491c12> | {
"dump": "CC-MAIN-2013-48",
"url": "http://doc.org.nz/by-region/canterbury/getting-involved/for-teachers/",
"date": "2013-12-05T17:28:29",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163046954/warc/CC-MAIN-20131204131726-00092-ip-10-33-133-15.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9032335877418518,
"token_count": 256,
"score": 3.359375,
"int_score": 3
} |
Write a file to a directory in Python
A directory or folder is a collection of files and subdirectories. If there are a large number of files to handle in a Python program, arranging them within different directories makes things more manageable. To work with directories we need to import the os module, which provides the relevant methods; os.path.isfile() can be used to check whether a path refers to a file or a directory, and Python now supports a number of APIs, such as os.listdir(), for listing directory contents. Traditionally, Python has represented file paths using regular text strings. The open() function returns a file object; in the mode argument, we specify whether we want to read (r), write (w) or append (a) to the file. Note that you cannot use os.mkdir() to create a folder inside a folder that does not itself exist, whereas mkdir -p will create any parent directories as required and silently do nothing if they already exist. The safe_open_w() helper shown below applies the same idea: it creates the directory part of the path before opening the file for writing. Using the csv module, you can likewise open or create a file and write CSV data at that location, which is handy for the numerous times you need to add data into CSV files, such as your customers' data or employees' salary records.
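Here is a sketch of a safe_open_w() helper along those lines, written for a modern Python where os.makedirs(..., exist_ok=True) plays the role of mkdir -p (the path in the usage example is just an illustration):

```python
import os

def safe_open_w(path):
    # Create any missing parent directories first, like `mkdir -p`
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    return open(path, "w")

with safe_open_w("output/reports/summary.txt") as f:
    f.write("Hello, world!\n")
```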
The general signature is: file object = open(file_name [, access_mode] [, buffering]). The difference between reading and writing is in the second argument to open(): passing the string "w" – short for write – creates a new file, as in newfile = open("hello.txt", "w"). Binary mode can be specified by adding "b" to any of the modes, and in the append modes (a, ab, a+, ab+) the file pointer starts at the end of the file, so existing content is preserved. Placing an r before the filename string (a raw string) prevents its characters from being treated as special characters. The write() method writes a string to a text file, and the writelines() method writes a list of strings to a file at once; in fact, writelines() accepts any iterable object, not just a list, so you can pass a tuple of strings or a set of strings. File objects also contain methods and attributes that can be used to collect information about the file you opened. One problem often encountered when working with file data is the representation of a new line or line ending, a convention with roots back in the Morse code era. When creating directories with os.mkdir(), the optional mode argument sets the permissions for file operations within the directory, the default value being '0o777'. For zip archives, look at extractall(optional_target_folder), but use it only with trustworthy zip files. Finally, just as a YAML file can be converted into a Python dictionary, a Python dictionary can be serialized and stored in a YAML-formatted file. This is the first step in reading and writing files in Python.
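A short sketch of write() versus writelines(), and of append mode (file names are illustrative):

```python
lines = ["first line\n", "second line\n"]

with open("hello.txt", "w") as f:  # "w" creates the file, or truncates an existing one
    f.write("a single string\n")   # write() takes exactly one string
    f.writelines(lines)            # writelines() takes any iterable of strings
                                   # (note: it does not add newlines for you)

with open("hello.txt", "a") as f:  # "a" positions the file pointer at the end
    f.write("appended line\n")
```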
"dump": "CC-MAIN-2021-43",
"url": "https://www.flyfusionmag.com/9snkoubwt8/",
"date": "2021-10-22T09:44:49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585504.90/warc/CC-MAIN-20211022084005-20211022114005-00512.warc.gz",
"language": "en",
"language_score": 0.8910530805587769,
"token_count": 1261,
"score": 3.21875,
"int_score": 3
} |
According to the WHO, social and economic conditions and their effects on people's lives determine their risk of illness, as well as the actions they are able to take to prevent themselves from becoming ill or to treat illness when it does occur. We know that poverty can affect our health in a myriad of ways, income among them. Any discussion of social class and mobility would be incomplete without a discussion of poverty, which is defined as the lack of the minimum food and shelter necessary. The consequences of poverty include emotional issues, delayed development and lower academic achievement; poor children may also have challenges with social and emotional development, and they are at risk for developing both behavior and emotional problems like impulsiveness and disobedience. Addressing Poverty (Asif Afridi, March 2011): this paper explains what social networks are and their benefits; explores how social networks can help address poverty and be made more accessible; and discusses the impacts of government spending cuts on social networks (the Joseph Rowntree Foundation, JRF). Poverty and social impact analysis is an approach to assessing the distributional and social impacts of policy reforms and the well-being of different groups of the population, particularly the poor and most vulnerable.
These periods of profound change come with a transformation of social order, values, and methods of governing that many people may find distressing and unsettling; therefore, stabilizing and empowering political institutions is a crucial part of fighting the dangerous consequences of poverty. Growing up in poverty can have a lasting impact on a child. What is less understood is how it affects the early relationships that shape a child's social and emotional growth, a question that Center for Poverty Research affiliate Ross A. Thompson and graduate student researcher Abby C. Winer have been examining in ongoing research. Learn about the effects of youth poverty on academic achievement, psychosocial outcomes and physical health, as well as the prevalence of child hunger in the US. One study employs indicators of child poverty and social exclusion to examine the relationships between the risk of children falling into poverty and their chances of being socially excluded in Taiwan; it defines 'children in poverty' as those in a household below 60% of the equivalent median income.
What forces shape family life in our society? In this lesson, we'll look at how poverty and social class impact families' experiences. Social networks in deprived neighbourhoods can provide emotional support, but this does not overcome socio-economic inequalities. Evidence regarding whether such networks reduce the chances of leaving poverty through negative role models and social norms is mixed: quantitative evidence tends not to find an effect, while qualitative evidence tends to find one.
This intervention was conducted to examine the effects of a teacher–student relationship program on the social, emotional and academic adjustment of youth with emotional and behavioral problems attending one high school in a high-poverty urban environment; overall, the intervention did not appear to have an impact. African countries stress the negative impact of conflicts, poverty, unemployment and AIDS on social development, as the Third Committee concludes its debate on social issues, pointing to the difficulties faced in achieving social development in the midst of conflicts and poverty. For a global housing charity fighting poverty, home is the absolute foundation through which we can tackle the effects of poverty on society and its vicious cycle: safe homes and neighbourhoods, in which residents are satisfied with housing conditions and public services, help to build social stability and security.
And consequences of child poverty and the principles that inform our approach to tackling it key facts child poverty in the uk has changed significantly, the proportion of children in poverty has risen considerably in social deprivation – poverty restricting their access to attend social events and their ability to maintain. The relationship between poverty and education shows in the students' levels of cognitive readiness the physical and social-emotional factors of living in poverty have a detrimental effect on students' cognitive performance some children have short attention spans, some are highly distractible, and some. These causes-effects, or factors, that perpetuate poverty in a household are known as the cycle of poverty families who fall in this cycle tend to stay in it “for enough time that the family includes no surviving ancestors who possess and can transmit the intellectual, social, and cultural capital necessary to.
Unicef takes this multi-dimensional approach, and while progress has been made toward reducing poverty and its effect on children, there is still much work to be done to assist the one billion children living in poverty – about half of all children in the world understanding the impact of child poverty understanding child. The official poverty measure (opm), criticized practically since its inception in the 1960s by researchers and policymakers alike, continues to be a topic of important discussion the opm determines poverty status by comparing pre-tax cash income by three times a minimum food diet, set in 1963, adjusted.
One of the effects of poverty on children's development is to lead them to build antisocial behavior that acts as a psychological protection against their hostile environment; discrimination and social exclusion often push them toward more aggressiveness and less self-control and nuance in reaction to stressful situations. The process of preparing poverty reduction strategy papers (PRSPs) in low-income countries has given prominence to the need to understand the impact of public policies on social and poverty outcomes; among the goals of PSIA is to provide a basis for considering policy options and appropriate reforms. Poverty is commonly defined as a lack of economic resources that has negative social consequences, but surprisingly little is known about the importance of economic hardship for social outcomes. This article offers an empirical investigation into this issue; we apply panel data methods to longitudinal data.
The second looks at the relationship between childhood disadvantage (an important element of which is child poverty) and subsequent economic and social outcomes among adults. The link between the two strands is the significant long-term impact likely from high child poverty rates. The UC Davis Center for Poverty Research study How Poverty and Depression Impact a Child's Social and Emotional Competence, by Abby C. Winer, notes that poverty does not impact all children equally: in 2015, children of color were significantly more likely to be affected by poverty than white children in the US. Poverty impacts development in early childhood, a critical period of physical and social-emotional growth.
"dump": "CC-MAIN-2018-43",
"url": "http://bbassignmentxnks.nextamericanpresident.us/poverty-and-the-effects-of-social.html",
"date": "2018-10-18T04:06:18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511642.47/warc/CC-MAIN-20181018022028-20181018043528-00389.warc.gz",
"language": "en",
"language_score": 0.9488385319709778,
"token_count": 1454,
"score": 3.421875,
"int_score": 3
} |
Technology with Healthcare
In this today’s world, Technology plays a special role in every industry as well as in our personal lives. Out of all of the industries that technology plays a crucial role in, healthcare is definitely one of the most important. This merger is responsible for improving and saving countless lives all around the world.
Medical technology is a broad field where innovation plays a crucial role in sustaining health. Areas like biotechnology, pharmaceuticals, information technology, the development of medical devices and equipment, and more have all made significant contributions to improving the health of people all around the world. From “small” innovations like adhesive bandages and ankle braces, to larger, more complex technologies like MRI machines, artificial organs, and robotic prosthetic limbs, technology has undoubtedly made an incredible impact on medicine.
In the healthcare industry, the dependence on medical technology cannot be overstated, and as a result of the development of these brilliant innovations, healthcare practitioners can continue to find ways to improve their practice — from better diagnosis, surgical procedures, and improved patient care.
How Information Technology is helpful in Healthcare?
Information technology has made significant contributions to the medical industry. With the increased use of **Electronic Medical Records (EMR)** and **Electronic Health Records (EHR)**, telehealth services, and mobile technologies like tablets and smartphones, physicians and patients are both seeing the benefits that these new medical technologies are bringing.
Medical technology has evolved from introducing doctors to new equipment to use inside private practices and hospitals to connecting patients and doctors thousands of miles away through telecommunications. It is not uncommon in today’s world for patients to hold video conferences with physicians to save time and money normally spent on traveling to another geographic location or send health information instantaneously to any specialist or doctor in the world.
With more and more hospitals and practices using medical technology like mobile devices on the job, physicians can now have access to any type of information they need — from drug information, research and studies, patient history or records, and more — within mere seconds. And, with the ability to effortlessly carry these mobile devices around with them throughout the day, they are never far from the information they need. Applications that aid in identifying potential health threats and examining digital information like x-rays and CT scans also contribute to the benefits that information technology brings to medicine.
Medical scientists and physicians are constantly conducting research and testing new procedures to help prevent, diagnose, and cure diseases as well as developing new drugs and medicines that can lessen symptoms or treat ailments.
Through the use of technology in medical research, scientists have been able to examine diseases on a cellular level and develop vaccines against them. Vaccines against life-threatening diseases like polio, measles, mumps, and rubella prevent the spread of disease and save thousands of lives all around the globe. In fact, the World Health Organization estimates that vaccines save about 3 million lives per year, and prevent millions of others from contracting deadly viruses and diseases.
Benefits of IT in Healthcare
1) Better and Accessible Treatment
A number of industry analysts have observed that increased accessibility of treatment is one of the most tangible ways that technology has changed healthcare. Health IT opens up many more avenues of exploration and research, which allows experts to make healthcare more driven and effective than it has ever been.
2) Improved Care and Efficiency
Another key area that has grown and continues to do so is patient care. The use of information technology has made patient care safer and more reliable in most applications. The fact that nurses and doctors who are working on the front line are now routinely using hand-held computers to record important real-time patient data and then sharing it instantly within their updated medical history is an excellent illustration of the benefits of health IT.
Being able to accumulate lab results, records of vital signs and other critical patient data into one centralized area has transformed the level of care and efficiency a patient can expect to receive when they enter the healthcare system.
An increased level of efficiency in data collection means that a vast online resource of patient history is available to scientists, who are finding new ways to study trends and make medical breakthroughs at a faster rate.
3) Improves Healthcare and Disease Control
The development of specific software programs means that, for example, the World Health Organization has been able to classify illnesses, their causes and symptoms into a massive database that encompasses more than 14,000 individual codes.
This resource allows medical professionals and researchers to track, retrieve and utilize valuable data in the fight to control disease and provide better healthcare outcomes in general.
Software also plays a pivotal role in tracking procedures and using billing methods that not only reduce paperwork levels, but also allow practitioners to use this data to improve quality of care and all around efficiency.
Doctors report that they are deriving enormous benefits from the drive toward a total system of **Electronic Health Records**; patients enjoy the fact that software has created a greater degree of transparency in the healthcare system.
We have seen many positive changes in health IT and expect to continue witnessing more exciting developments in the future!
If you really like this blog then please give us some “claps” and share it with your loved ones, too!
"dump": "CC-MAIN-2021-39",
"url": "https://fuzzycloud.in/blog/tech_with_healthcare/",
"date": "2021-09-19T23:00:41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056902.22/warc/CC-MAIN-20210919220343-20210920010343-00636.warc.gz",
"language": "en",
"language_score": 0.943217933177948,
"token_count": 1059,
"score": 3.1875,
"int_score": 3
} |
Burned by the Germans during World War II, Kalambaka has only one building of interest, the centuries-old cathedral church of the Dormition of the Virgin. Patriarchal documents in the outer narthex indicate that it was built in the first half of the 12th century by Emperor Manuel Comnenos, but some believe it was founded as early as the 7th century, on the site of a temple of Apollo (classical drums and other fragments are incorporated into the walls, and mosaics can be glimpsed under the present floor). The latter theory explains the church's paleo-Christian features, including its center-aisle ambo (great marble pulpit), which is usually to the right of the sanctuary; its rare synthronon (four semicircular steps where the priest sat when not officiating) east of the altar; and its Roman-basilica style, originally adapted to Christian use and unusual for the 12th century. The church has vivid 16th-century frescoes, work of the Cretan monk Neophytos, son of the famous hagiographer Theophanes. The marble baldachin in the sanctuary, decorated with crosses and stylized grapes, probably predates the 11th century.
North end of town, follow signs from Riga Fereou Sq., Kalambaka, 42200, Greece | <urn:uuid:bf6016c7-19af-4ec0-9796-cb60a0c7dd5f> | {
"dump": "CC-MAIN-2015-40",
"url": "http://www.fodors.com/world/europe/greece/epirus-and-thessaly/things-to-do/sights/reviews/dormition-of-the-virgin-469151",
"date": "2015-10-09T12:45:32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00087-ip-10-137-6-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9554768204689026,
"token_count": 281,
"score": 3.265625,
"int_score": 3
} |
Start studying Elements, Compounds, and Mixtures Test. Learn vocabulary, terms, and more with flashcards, games, and other study tools.
http://quizlet.com/595322/elements-review-flash-cards/. To play a Jeopardy review game on elements, compounds and mixtures, use the link below
Science 7 - Unit 8: Elements, Compounds and Mixtures. Quizlet Vocabulary: Classes of Matter · pre-Test; practice - Quizlet Live or independent games then.
An element is a simple substance that is made from one type of atom and cannot be broken down into simpler components by chemical or physical means.
Let Tim and Moby show you the difference between a compound and a mixture in this BrainPOP movie! Which blend looks nothing like its original elements?
Feb 5, 2019 ... In chemistry class, we came to know the difference between a mixture, compound, and an element. A compound is basically a substance that is ...
Quizlet Vocabulary · ECM Activity · ECM PPT · Printable Vocabulary Flashcards · Elements, Compounds, Mixtures OH MY video ... No Bones About it Quizlet
For example, oil is less dense than water, so a mixture of oil and water can be separated by letting it stand until the oil floats to the top. Other ways of separating ...
Quizlet · ABC Splash · Perdue Elements, Compounds, Mixtures · Cleaning Water · Element, Compound, Mixture Game · Mocomi. Molecularium. Jeopardy.
Elements and compounds are pure chemical substances found in nature. ... between elements, compounds and mixtures, both homogenous and heterogenous. | <urn:uuid:fe3d6841-fdde-4ed8-84c0-488c34594b7c> | {
"dump": "CC-MAIN-2019-22",
"url": "https://www.reference.com/web?q=elements+compounds+and+mixtures+quizlet&qo=relatedSearch&o=600605&l=dir",
"date": "2019-05-25T07:35:50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00191.warc.gz",
"language": "en",
"language_score": 0.8208796381950378,
"token_count": 355,
"score": 3.578125,
"int_score": 4
} |
Earth is actually in a favored position for space exploration
Gonzalez: In the larger context of the Milky Way galaxy, our Solar System is in the best location to initiate interstellar missions. In summary, we here confirm and expand upon recent studies that argue that the Earth and the Solar System are rare in the degree to which they facilitate space exploration.
For a far out New Year’s Day, try Ultima Thule, 4 billion km from the sun. “The object was subsequently designated 2014 MU69, given the minor planet number 485968, and based on public votes, nicknamed “Ultima Thule”, which means ‘beyond the known world.’” | <urn:uuid:b782cc18-163d-4f27-9b66-be0f6ad0d621> | {
"dump": "CC-MAIN-2022-27",
"url": "https://uncommondescent.com/tag/space-exploration/",
"date": "2022-07-02T09:16:10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00766.warc.gz",
"language": "en",
"language_score": 0.9229370951652527,
"token_count": 145,
"score": 3.140625,
"int_score": 3
} |
In today's digitalized and technological age, an immense amount of electricity is consumed by street lights because they work incessantly through the night. A smart system can be mechanized and designed to tackle the crisis of excess electricity consumption, as well as the carbon dioxide emissions that come with it. The system of 'SMART STREET LIGHTS' can be implemented through the technical operations described below.
The importance of automation in both daily life and the global economy is rising, and automatic systems are preferred to manual ones.
The research demonstrates automatic control of the street lights, which results in measurable electricity savings. In the context of industrialization, automation goes beyond mechanization: whereas mechanization provided human operators with machinery to assist with the muscular requirements of a job, automation also considerably reduces the need for human sensory and mental involvement.
Fundamentally, street lighting is one of the crucial components of urban infrastructure. Street lights are in principle quite straightforward; however, as urbanization progresses, the number of streets grows quickly, along with traffic density. In order to construct a decent street lighting system, a number of issues must be taken into account, including the need to provide public lighting at a reasonable cost, to help reduce crime, and to minimize environmental impact. Street lights were first operated manually, with a control switch placed in each one.
This period is referred to as the first generation of the original street light. A further approach that was later employed involved optical control, which took advantage of light-sensitive switching. Meanwhile, street lighting systems can be divided into groups based on the types of bulbs utilized, including incandescent, mercury vapour, metal halide, high-pressure sodium, low-pressure sodium, fluorescent, compact fluorescent, induction, and LED lights. Various types of light technology, with their different luminous efficacy, bulb life, and operating concerns, are employed in lighting design.
II. LITERATURE SURVEY
In this project, 'SMART STREET LIGHTS', we have reviewed aspects already studied by other researchers. In existing systems, all the components are well developed in themselves and have been applied in various ways on several platforms. Electricity is a dire need in both rural and urban areas, and this system therefore analyzes conditions and takes decisions accordingly.
According to the literature, in most cases the LDR sensor and the PIR sensor are used individually, so each component works only according to its own function.
In this project, however, the LDR and PIR sensors are used together so as to give a better result and a more efficient circuit system. The LDR sensor is used to sense the ambient light, while the PIR sensor detects motion and sends messages to the other components of the system so that they can prepare and work efficiently before the object arrives. The proposed system reduces the consumption of electricity by up to 40%. A minimal sketch of the combined control logic follows.
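The sketch below is a self-contained Python simulation of the combined LDR + PIR decision logic. The threshold, brightness levels and hold time are illustrative assumptions, not values from this paper; on real hardware the stub functions would be replaced by ADC and GPIO reads.

```python
import time

LIGHT_THRESHOLD = 300   # assumed ADC reading at or above which it is "daytime"
DIM_LEVEL = 20          # assumed standby brightness when the street is empty (%)
FULL_LEVEL = 100        # brightness while motion is present (%)
HOLD_SECONDS = 30       # how long to stay bright after the last motion

def read_ldr():
    """Stub for an LDR reading via ADC: lower value = darker."""
    return 150  # placeholder value

def read_pir():
    """Stub for a PIR digital read: True when motion is detected."""
    return True  # placeholder value

def set_lamp_brightness(percent):
    """Stub for the PWM output driving the LED lamp."""
    print(f"lamp brightness -> {percent}%")

def control_step(last_motion_time, now):
    """One pass of the street-light control loop."""
    if read_ldr() >= LIGHT_THRESHOLD:
        set_lamp_brightness(0)           # daytime: lamp off
        return last_motion_time
    if read_pir():
        last_motion_time = now           # remember the most recent motion
    if now - last_motion_time <= HOLD_SECONDS:
        set_lamp_brightness(FULL_LEVEL)  # someone nearby: full brightness
    else:
        set_lamp_brightness(DIM_LEVEL)   # dark but empty street: dim to save power
    return last_motion_time

if __name__ == "__main__":
    last_motion = float("-inf")
    for _ in range(3):                   # a few demo iterations
        last_motion = control_step(last_motion, time.time())
        time.sleep(1)
```

Dimming to a standby level instead of switching fully off when no motion is present is the design choice that yields the power savings while keeping the street minimally lit.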
III. PROPOSED SYSTEM
A. Light Unit
Coordination of the LED, the PIR sensor and the communication devices enables the lamp unit to work efficiently. The main function of this unit is to pass on a message whenever any motion is detected.
B. Sensing Unit
The communication devices and the controller are the main parts of this unit. It carries and passes on the message generated by motion detection. Units like this are placed across many areas to ensure the regulation of the street lights.
IV. PROPOSED DESIGN
VI. COMPONENTS USED
A. LED
A light-emitting diode (LED) is a semiconductor device that emits electromagnetic waves when electric current flows through it. When current flows through the LED, the electrons recombine with holes, releasing light in the process. LEDs restrict current from flowing in the reverse direction, so current flows only in the forward direction. An LED is a heavily doped p-n junction. The semiconductor material used and the amount of doping determine the color of light emitted, and its spectral wavelength, when the LED is forward biased.
B. Battery
A battery is a portable device that stores chemical energy and converts it into electrical energy. This process is known as electrochemistry, and the system that supports it is called an electrochemical cell. A battery consists of one or more electrochemical cells, each consisting of two electrodes separated by an electrolyte.
C. Resistor
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements and terminate transmission lines, among other uses. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force or chemical activity.
D. Motion Sensor
A motion sensor, or motion detector, is an electronic device that uses a sensor to detect nearby people or objects. Motion sensors are an important component of any security system. When a sensor detects motion, it sends an alert to your security system and, with newer systems, straight to your mobile phone. If you have subscribed to an alarm monitoring service, motion sensors can even be configured to send an alert to your monitoring team.
E. PIR Sensors
PIR sensors are a bit more complex than active ultrasonic sensors, but the result is the same. Walls, floors, stairways, windows, cars, dogs, trees, people: all of them radiate some amount of heat, and infrared sensors can detect it as temperature. Infrared motion sensors detect the presence of a person or object by detecting the change in temperature of a given area.
A PIR sensor uses these temperature changes to detect the presence of a person or an object. Like active ultrasonic sensors, PIR sensors can be set to ignore small changes in IR, so you can walk around your home or business without setting off alarms all day and night.
F. LDR Sensor
The working principle of an LDR is photoconductivity, which is an optical phenomenon: when light is absorbed by the material, the conductivity of the material increases. When light falls on the LDR, the electrons in the valence band of the material are excited to the conduction band. For this to happen, the photons in the incident light must have energy greater than the bandgap of the material, so that the electrons can jump from one band to the other (valence to conduction).
Hence, when the light has ample energy, more electrons are excited to the conduction band, which results in a large number of charge carriers. As an effect of this process, more current starts to flow, and the resistance of the device decreases.
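In a typical circuit the LDR is placed in a voltage divider so that its changing resistance becomes a changing voltage the controller can read. The supply voltage, fixed resistor and LDR resistances below are illustrative assumptions for a generic LDR (low resistance in bright light, high in darkness), not measurements from this project:

```python
V_SUPPLY = 5.0       # assumed supply voltage (volts)
R_FIXED = 10_000.0   # assumed fixed divider resistor (ohms)

def divider_out(r_ldr):
    """Voltage across the fixed resistor when the LDR sits on top of the divider."""
    return V_SUPPLY * R_FIXED / (r_ldr + R_FIXED)

# Typical generic-LDR resistances: ~1 kOhm in bright light, ~200 kOhm in darkness.
for label, r in [("daylight", 1_000.0), ("darkness", 200_000.0)]:
    print(f"{label}: LDR = {r:>9.0f} ohm -> V_out = {divider_out(r):.2f} V")
```

With these assumed values the output swings from about 4.55 V in daylight to about 0.24 V in darkness, so any threshold between the two lets the controller distinguish day from night reliably.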
G. Transistor
The transistor is made up of two p-n diodes attached back to back. It has three terminals: the emitter, the base, and the collector. The central part, composed of a thin layer, serves as the base. The portion on the right is the emitter diode, while the portion on the left is the collector-base diode; these designations follow the transistor's common terminal. The collector-base junction has a high resistance because it is connected in reverse bias, while the emitter-base junction is connected in forward bias.
Our guide, Shivrajsinh Rayjada, assisted us throughout and helped a great deal in making this project turn out well. The circuit and sensor system was suggested by him, and he also guided us through certain changes to the project.
The result is the successful operation of a vital and dynamic street lighting system. It reduces the wastage of electricity during unused hours and controls the intensity of the lights based on the traffic density of the lane. Such a system is also referred to as 'intelligent street lights'. | <urn:uuid:24cbe08c-6848-40d8-b700-d58916861641> | {
"dump": "CC-MAIN-2024-10",
"url": "https://www.ijraset.com/research-paper/smart-street-lights",
"date": "2024-02-23T01:37:32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00707.warc.gz",
"language": "en",
"language_score": 0.9339644908905029,
"token_count": 1589,
"score": 3.25,
"int_score": 3
} |
Easter, the Sunday of the Resurrection, Pascha, or Resurrection Day, is the most important religious feast of the Christian liturgical year, observed at some point between late March and late April each year (early April to early May in Eastern Christianity), following the cycle of the moon. It celebrates the resurrection of Jesus, which Christians believe occurred on the third day after his death by crucifixion some time in the period AD 27 to 33. Easter also refers to the season of the church year called Eastertide or the Easter Season. Traditionally the Easter Season lasted for the forty days from Easter Day until Ascension Day, but it now officially lasts for the fifty days until Pentecost. The first week of the Easter Season is known as Easter Week or the Octave of Easter.
Today many families celebrate Easter in a completely secular way, as a non-religious holiday.
Segments Alluded To
- Bulgarian Easter traditions
- Easter in the Armenian Orthodox Church
- Eastern Orthodox views on Easter
- Roman Catholic view of Easter (from the Catholic Encyclopedia)
- Rosicrucians: The Cosmic Meaning of Easter (the esoteric Christian tradition)
- Calculator for the date of Festivals (Anglican)
- A simple method for determining the date of Easter for all years 326 to 4099 A.D.
- Paschal Calculator (Eastern Orthodox)
- Orthodox Calculator
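The calculator links above all implement a computus, an algorithm for computing the date of Easter. As an illustration of how compact such an algorithm can be, here is the well-known anonymous Gregorian computus (often credited as the Meeus/Jones/Butcher algorithm) in Python; note it applies to Gregorian-calendar years (1583 onward), a narrower range than the 326 to 4099 method linked above.

```python
def gregorian_easter(year):
    """Return (month, day) of Western Easter for a Gregorian-calendar year.

    Anonymous Gregorian algorithm (Meeus/Jones/Butcher).
    """
    a = year % 19                       # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # lunar correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

for y in (2019, 2024, 2025):
    m, d = gregorian_easter(y)
    print(y, f"{m:02d}-{d:02d}")        # 2019 -> 04-21, 2024 -> 03-31, 2025 -> 04-20
```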
- Bulgarian Easter
- Easter traditions in Finland
- Easter-postcards from 1898 to today from 36 countries all over the world - Exhibition
- Easter in Germany
- Easter in Russia
- Easter traditions in Russia (in Russian)
- Easter traditions in Ukraine. Velykden' (in Ukrainian)
- Pascha and Kulich (Photo) traditional Russian Paschal foods | <urn:uuid:6f646d10-7826-4f8e-b3c8-ebe28ecb16d7> | {
"dump": "CC-MAIN-2017-34",
"url": "http://robotchicken.wikia.com/wiki/Easter",
"date": "2017-08-20T22:56:04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00124.warc.gz",
"language": "en",
"language_score": 0.9115374684333801,
"token_count": 364,
"score": 3.640625,
"int_score": 4
} |
Breast carcinoma (BC) is not only the most common invasive malignant neoplasm in women but also the second main cause of cancer death in women (1). According to Global Cancer Statistics 2018, an estimated 2,088,849 patients were newly diagnosed with breast cancer and 626,679 patients died from it in 2018. Among neoplastic diseases, the mortality rate of BC ranked second after lung cancer worldwide (2). Since cancer cells can develop in the ducts, the lobules or the tissue in between, BC has multiple histological types. In the last few decades, owing to rising public awareness of the disease and the wide application of screening mammography, more patients are diagnosed at a relatively early stage than before. New treatment approaches such as targeted therapy and endocrine therapy have also greatly increased the overall survival rate and the quality of patients' lives. It is estimated that one in eight women worldwide will develop cancer of the mammary glands (3), of which only 5–10% of cases are caused by genetic disorders; the remaining 90–95% are connected to environmental factors and lifestyle choices (4). Therefore, many investigations have been carried out to identify risk factors associated with BC.
The human papillomavirus (HPV) is a non-enveloped DNA virus belonging to the Papillomaviridae family, with over 150 types. It is the most common sexually transmitted infectious agent in the United States, and most women will acquire at least one type of HPV at some point in life, since the cumulative incidence is as high as 80% (5). Boshart et al. were the first to theorize that HPV was one of the potential causes of cervical cancer (6). Following further investigation of HPV, scientists discovered that HPV infections cause a significant proportion of cancers worldwide, with high-risk HPV types, such as HPV16 and HPV18, being associated with squamous cell carcinomas of the anogenital and oropharyngeal tract. Furthermore, some low-risk HPV types have been observed to cause genital warts and recurrent respiratory papillomatosis.
The association of HPV infection with BC was put forward by Band et al. in 1990, who reported that HPV could immortalize normal human mammary epithelial cells and reduce their growth-factor requirements (7). Since 1992, an increasing number of studies has been carried out to investigate the relationship between HPV infection and BC. However, their conclusions were conflicting. Some groups could not find HPV DNA in breast cancer tissues, as reported by Doosti et al. (8), Bakhtiyrizadeh et al. (9) and Gannon et al. (10), while others found a high prevalence of HPV in breast cancer tissues, such as Cavalcante et al. (11). Although almost 30 years have passed since this theory was proposed, no definitive conclusion has been reached.
The main purpose of this study is to collect the information from original studies worldwide concerning the relationship of HPV infection to BC. In order to draw a scientific conclusion, we limited the included study type to case-control studies, which are considered more convincing than cross-sectional studies. With the assistance of a systematic literature search, we carefully analyzed 37 related case-control studies.
We registered a systematic review entitled "Human papillomavirus infection in breast cancer patients: a meta-analysis of case-control studies" on PROSPERO (registration number CRD42019121723) and updated the registration information throughout the writing of this review.
Search strategy and study selection
We followed the Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines (12) for performing and reporting the present meta-analysis. The MOOSE checklist is exhibited in Table S1. Two authors (C Ren and K Zeng) independently performed the literature search and study selection, and reached consensus on all items. Relevant articles on the association between BC and HPV infection were identified through an extensive search of the Cochrane Library, Embase, PubMed, Web of Science and clinicaltrials.gov. In PubMed, the search was performed using the keywords "Breast cancer" and "Papillomavirus". We defined the following medical subject headings (MeSH) terms: "papillomaviridae" and "breast neoplasms", and combined the MeSH terms with entry words when completing the search. The literature search covered all literature published up to March 2019, with no starting-date limitation. These search queries yielded 12 citations in the Cochrane Library, 955 in Embase, 308 in PubMed and 1,149 in Web of Science.
Studies on the relationship between BC and HPV infection were reviewed and evaluated critically against predefined eligibility criteria. Figure 1 summarizes the flow of the information retrieval process and the inclusion and exclusion criteria. The search strategy used in PubMed is exhibited in Supplementary I.
Data extraction was performed independently by two authors (C Ren and K Zeng). We collected information on authors, year of publication, area, histologic type of the controls, tissue type and HPV infection data. For articles that did not report complete clinical data, we made additional efforts to obtain the original data by contacting the authors.
Revman 5.3 and Stata 14.0 were utilized to analyze the retrieved data. The OR and 95% CI were calculated for each article. Five subgroup analyses were performed: histologic type of the control group, tissue type, and three HPV subtypes. The presence of heterogeneity was evaluated using the I-squared value (%). A P value of less than 0.05 was considered statistically significant. To estimate publication bias, Begg's rank correlation test and Egger's linear regression test were applied. In addition, the trim and fill method was applied to test whether the pooled results were affected by publication bias.
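To illustrate the core computation behind this kind of pooled analysis, the sketch below derives per-study odds ratios and a DerSimonian-Laird random-effects pooled OR with an I-squared estimate in plain Python. It parallels, but does not reproduce, the Revman/Stata procedures used in this study, and the three 2x2 tables are invented for illustration rather than taken from Table 1:

```python
from math import log, exp, sqrt

# Per-study 2x2 counts: (HPV+ cases, HPV- cases, HPV+ controls, HPV- controls).
# Invented illustrative numbers, not data from the included studies.
studies = [(30, 70, 5, 95), (18, 82, 3, 97), (45, 55, 12, 88)]

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance, with a 0.5 correction for zero cells."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    y = log((a * d) / (b * c))
    v = 1 / a + 1 / b + 1 / c + 1 / d
    return y, v

ys, vs = zip(*(log_or_and_var(*s) for s in studies))

# Fixed-effect quantities, needed for Q, I-squared and tau-squared.
w = [1 / v for v in vs]
y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance and random-effects pooling.
tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
w_re = [1 / (v + tau2) for v in vs]
y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se_re = 1 / sqrt(sum(w_re))

print(f"I-squared = {I2:.1f}%")
print(f"pooled OR = {exp(y_re):.2f} "
      f"(95% CI {exp(y_re - 1.96 * se_re):.2f} to {exp(y_re + 1.96 * se_re):.2f})")
```

When I-squared is high (as with the 52% reported below), the between-study variance tau-squared is non-zero and the random-effects weights down-weight large studies, which is why a random-effects model was chosen for the main analysis.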
Figure 1 elucidates the process of the literature search. A total of 2,424 records were identified through database searching, and 22 records were added manually by going through the reference lists of published meta-analyses (13,14). We removed duplicates and excluded 1,566 records by browsing titles and abstracts. Fifty-eight full-text articles were obtained and assessed for eligibility. Sixteen records were removed because of a case-only study design, 5 were excluded because of null results in both groups, and 1 was removed because it used the same specimen source as another study.
A total of 37 original studies containing 5,335 detected specimens were finally included in the analysis. All included articles were carefully assessed through full-text reading. The included original studies are listed in Table 1.
Characteristics of eligible studies
The 37 included studies were published between 1999 and 2019 and cover 17 countries worldwide. Over 50% of the studies were carried out in Asian countries. Among them, 11 articles were published before 2010, 15 between 2010 and 2015, and 11 after 2015. All included studies are case-control studies. Most used non-malignant breast lesions as controls, while 9 studies used normal breast tissue as controls, as shown in Table 1. Furthermore, we collected statistics on the detection of HPV subtypes (the high-risk subtypes HPV16, HPV18 and HPV33) in order to carry out subgroup analyses identifying carcinogenic subtypes. The specimen types used for HPV detection were as follows: 11 studies used fresh frozen tissue, 2 used fresh tissue, 1 used liquid cytology specimens, and the rest used paraffin-embedded tissue.
Methodological quality of included studies
All 37 studies underwent methodological quality assessment according to the Newcastle-Ottawa Quality Assessment Scale for case-control studies. The results are listed in Table 2. Sixteen original studies received six stars and were considered relatively high quality; the other 21 received five stars, indicating some risk of bias.
HPV infection and the risk of BC
The results of the pooled analysis for the included studies are shown in the forest plot (Figure 2). Across all studies, 1,097 tissues in the BC group were HPV positive, compared with 132 tissues in the control group. Since the I-squared value was 52%, we applied the random-effects model. The summary odds ratio (SOR) was 6.22 (95% confidence interval 4.25 to 9.12; P=0.0002, Figure 2), which provides evidence for the theory that HPV infection increases the risk of BC.
A subgroup analysis was carried out to identify the source of heterogeneity. For histologic type of control, the 37 studies were divided into a normal breast tissue subgroup and a benign breast lesion subgroup according to the type of control group. The SOR was 8.78 (95% confidence interval 5.54 to 13.92; P<0.00001; I2=10%, Figure 3) in the normal breast tissue subgroup and 4.91 (95% confidence interval 3.08 to 7.82; P<0.00001; I2=50%, Figure 3) in the benign breast lesion subgroup. This subgroup analysis markedly reduced the heterogeneity.
For tissue type, the original studies were divided into a paraffin-embedded tissue subgroup and a fresh frozen tissue subgroup. The SOR was 7.43 (95% confidence interval 4.56 to 12.09; P<0.00001; I2=49%, Figure 4) in the paraffin-embedded tissue subgroup and 6.32 (95% confidence interval 2.93 to 13.64; P<0.00001; I2=51%, Figure 4) in the fresh frozen tissue subgroup.
In addition, a subgroup analysis of the association between HPV types and BC was conducted. In the HPV16 subgroup, the SOR was 6.33 (95% confidence interval 3.47 to 11.52; P<0.00001; I2=10%, Figure 5). In the HPV18 subgroup, the SOR was 3.49 (95% confidence interval 2.24 to 5.41; P<0.00001; I2=0%, Figure 5). In the HPV33 subgroup, the SOR was 3.20 (95% confidence interval 1.64 to 2.26; P=0.0007, I2=0%, Figure 5).
Publication bias and trim & fill analysis
We also examined the influence of publication bias by constructing a funnel plot and conducting Egger's linear regression test and Begg's rank correlation test (Figure 6). No publication bias was found by Begg's rank correlation test (Pr>|z|=0.628). However, the P value of Egger's linear regression test was 0.003, indicating publication bias, which is why the trim and fill analysis was performed. Application of the trim and fill method did not change the direction or significance of the risk estimate (SOR 1.478, 95% confidence interval 1.110 to 1.847). These data support our conclusion that HPV infection is related to BC.
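For readers unfamiliar with Egger's test, the sketch below shows the essential computation: regress the standardized effect size on precision and examine whether the intercept differs from zero, since a non-zero intercept suggests funnel-plot asymmetry. The (log OR, standard error) pairs are invented for illustration; they are not the values behind Figure 6:

```python
from math import sqrt

# Invented per-study (log OR, SE) pairs, for illustration only.
effects = [(1.8, 0.45), (1.2, 0.30), (2.1, 0.60), (0.9, 0.25), (1.5, 0.40)]

x = [1 / se for _, se in effects]   # precision
z = [y / se for y, se in effects]   # standardized effect

n = len(effects)
mx, mz = sum(x) / n, sum(z) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / sxx
intercept = mz - slope * mx

# Standard error of the intercept from ordinary least squares.
resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_intercept = sqrt(s2 * (1 / n + mx ** 2 / sxx))

print(f"Egger intercept = {intercept:.3f} (SE {se_intercept:.3f}), "
      f"t = {intercept / se_intercept:.2f} on {n - 2} df")
```

In practice the t statistic is compared against a t distribution with n - 2 degrees of freedom to obtain the P value that statistical packages such as Stata report.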
The results of this meta-analysis support the hypothesis put forward by Band et al., namely that HPV infection could be a potential risk factor for BC, with an SOR across the original case-control studies as high as 6.22. This SOR is higher than that in the previous meta-analysis by Bae (13), which further indicates a potential association between HPV infection and BC. HPV could be transmitted through blood, body contact and lymphatic fluid; the virus, on finally reaching breast tissue, may participate in the growth and development of the tumor. Although the mechanisms involved in this biological behavior are still under discussion, the impact of HPV infection on malignant breast neoplasms should not be neglected.
Considering the data in the subgroup analysis of the histological type of the control group, we found an interesting fact: the SOR in the normal breast tissue subgroup is higher than that in the benign breast lesion subgroup. Although the I-squared value in each subgroup is under 50%, indicating acceptable heterogeneity, the difference between subgroups is obvious. This finding underlines the importance of control selection. Although benign breast tissues are more easily obtained, there are still differences between normal breast tissue and benign breast lesions. We cannot determine the underlying reason at present, but we tentatively hypothesize that persistent HPV infection may be more dangerous in normal breast tissue than in benign breast lesions. Whether benign breast lesions are a form of precancerosis remains unsettled, but this finding suggests that researchers should avoid using benign breast lesions as controls in the future.
Regarding detection approaches, we noticed that most case-control studies used paraffin-embedded tissue for HPV PCR detection; only 11 studies used fresh frozen tissue. Li et al. (49) reported that HPV detection rates were slightly higher when HPV DNA was extracted from paraffin-embedded tissue than from fresh frozen tissue, which hints that biopsy handling or slide preparation may affect the detection results. However, our subgroup analysis did not reach a similar conclusion; we speculate this may be due to the heterogeneity of the fresh frozen tissue subgroup (I2=51%). In addition, the development of detection methods may be an important influencing factor across the studies. Before 2000, only type-specific PCR primers were used to detect HPV in breast tissues. Afterwards, broad-spectrum PCR primers and the combined use of type-specific and broad-spectrum primers were adopted (49). These technical advances made the detection results more stable and reliable.
HPV has over 150 subtypes. The high-risk HPV types have higher oncogenicity and cause several kinds of tumors, such as cervical cancer (50) and oropharyngeal carcinoma (51). In this article, we carried out a subgroup analysis of three high-risk HPV types (HPV16, 18 and 33). All three were associated with BC (P<0.05). This result is in accordance with most original studies and with much research on other carcinomas.
In the quality assessment process, we found that although we included 37 original case-control studies and performed a relatively large-scale meta-analysis, the quality of the included studies was generally low: only 16 studies received six stars on the NOS scale. This means confounding factors may have influenced the results of this meta-analysis, and these confounders could themselves be associated with the mechanism of HPV oncogenicity in BC, such as age, family history, age at menarche, TNM stage, estrogen and progesterone receptor status, and/or HER2 oncogene expression.
As to the potential mechanism, the causal association between HPV and BC has not been confirmed. We must admit that the mere presence of HPV is not sufficient to prove an etiological role for the virus in BC development. However, HPV infection is expected to be an early event, followed by cumulative changes over the years, similar to cervical carcinogenesis (52). Several theories have been proposed. Khodabandehlou et al. reported that the presence of HPV was associated with increased inflammatory cytokines (IL-1, IL-6, IL-17, TGF-β, TNF-α, and NF-kB) and tumor progression (15). Wang et al. revealed that HPV16 E7 may promote the proliferation of breast cancer cells by upregulating COX-2 (53). Yan et al. discovered that knockdown of HPV18 E6 and E7 could suppress proliferation, metastasis, and cell-cycle progression in an HPV-positive breast cancer cell line (54). Furthermore, the team of Michael B. Burns revealed that mutations and deletions of the DNA cytosine deaminase APOBEC3B (A3B), which normally functions by inhibiting retrovirus replication, elevate the risk of BC (55). Later, in 2014, the teams of Vieira and Ohba reported that the presence of HPV could alter the expression of APOBEC3B (56,57). It is therefore reasonable to suppose that HPV is involved in the early stages of BC by affecting APOBEC3B.
Compared with previous systematic reviews on this topic, this article limited the study type to case-control studies instead of case-only studies. Although the results of previous systematic reviews were mostly positive, the improved study design and the larger number of included original studies make the conclusion of this article more exact and convincing. In addition, the multiple subgroup analyses provide evidence for further mechanistic exploration. As discussed by Lawson et al. in 2016, low HPV viral load is one reason some original studies obtained negative results (58). However, despite technical advances, the ORs reported by most original studies within the last 5 years have not become higher. We suspect that certain kinds of BC are HPV-related while others are not.
Limitations of the review
Our meta-analysis has certain limitations. For instance, most included studies were conducted in Asian countries; as reported by Li et al. (49), 32.42% of BC cases were HPV-associated in Asians, but only 12.91% in Europeans. Histological types should also have been analyzed by subgroup; however, owing to the lack of specific data in the original studies, this could not be performed in this article. These limitations are likely to introduce bias. Beyond the influence of region and pathological features, HPV detection approaches should be advanced by distinguishing integrated viral DNA from free viral DNA, which is expected to provide more evidence on the mechanism of HPV oncogenicity in BC.
Through careful data collection and analysis, we conclude that HPV infection can increase the risk of BC, especially the high-risk types HPV16, 18 and 33. The underlying molecular mechanism has not been settled yet; this calls for further attention and exploration in future research.
Supplementary I Search strategy in Pubmed database
(((“Papillomaviridae”[Mesh]) OR (((((((((((((“Human Papilloma Virus”[Title/Abstract]) OR “Human Papilloma Viruses”[Title/Abstract]) OR “Papilloma Virus, Human”[Title/Abstract]) OR “Papilloma Viruses, Human”[Title/Abstract]) OR “Virus, Human Papilloma”[Title/Abstract]) OR “Viruses, Human Papilloma”[Title/Abstract]) OR “HPV, Human Papillomavirus Viruses”[Title/Abstract]) OR “Human Papillomavirus Viruses”[Title/Abstract]) OR “Human Papillomavirus Virus”[Title/Abstract]) OR “Papillomavirus Virus, Human”[Title/Abstract]) OR “Papillomavirus Viruses, Human”[Title/Abstract]) OR “Virus, Human Papillomavirus”[Title/Abstract]) OR “Viruses, Human Papillomavirus”[Title/Abstract]))) AND ((“Breast Neoplasms”[Mesh]) OR (((((((((((((((((((((((((((((((((((((“Breast Neoplasm”[Title/Abstract]) OR “Neoplasm, Breast”[Title/Abstract]) OR “Breast Tumors”[Title/Abstract]) OR “Breast Tumor”[Title/Abstract]) OR “Tumor, Breast”[Title/Abstract]) OR “Tumors, Breast”[Title/Abstract]) OR “Neoplasms, Breast”[Title/Abstract]) OR “Breast Cancer”[Title/Abstract]) OR “Cancer, Breast”[Title/Abstract]) OR “Mammary Cancer”[Title/Abstract]) OR “Cancer, Mammary”[Title/Abstract]) OR “Cancers, Mammary”[Title/Abstract]) OR “Mammary Cancers”[Title/Abstract]) OR “Malignant Neoplasm of Breast”[Title/Abstract]) OR “Breast Malignant Neoplasm”[Title/Abstract]) OR “Breast Malignant Neoplasms”[Title/Abstract]) OR “Malignant Tumor of Breast”[Title/Abstract]) OR “Breast Malignant Tumor”[Title/Abstract]) OR “Breast Malignant Tumors”[Title/Abstract]) OR “Cancer of Breast”[Title/Abstract]) OR “Cancer of the Breast”[Title/Abstract]) OR “Mammary Carcinoma, Human”[Title/Abstract]) OR “Carcinoma, Human Mammary”[Title/Abstract]) OR “Carcinomas, Human Mammary”[Title/Abstract]) OR “Human Mammary Carcinomas”[Title/Abstract]) OR “Mammary Carcinomas, Human”[Title/Abstract]) OR “Human Mammary Carcinoma”[Title/Abstract]) OR “Mammary Neoplasms, Human”[Title/Abstract]) OR “Human Mammary Neoplasm”[Title/Abstract]) OR “Human Mammary Neoplasms”[Title/Abstract]) OR “Neoplasm, Human Mammary”[Title/Abstract]) OR “Neoplasms, Human Mammary”[Title/Abstract]) OR “Mammary Neoplasm, Human”[Title/Abstract]) OR “Breast Carcinoma”[Title/Abstract]) OR “Breast Carcinomas”[Title/Abstract]) OR “Carcinoma, Breast”[Title/Abstract]) OR “Carcinomas, Breast”[Title/Abstract])) Sort by: Best Match
We would like to acknowledge the librarians at the Libraries of Central South University for their efforts in obtaining primary resources for this meta-analysis. We would also like to acknowledge the professors, colleagues, friends and family members who assisted us and gave encouragement during the writing of this article.
Conflicts of Interest: The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
- DeSantis CE, Fedewa SA, Goding Sauer A, et al. Breast cancer statistics, 2015: Convergence of incidence rates between black and white women. CA Cancer J Clin 2016;66:31-42. [Crossref] [PubMed]
- Bray F, Ferlay J, Soerjomataram I, et al. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018;68:394-424. [Crossref] [PubMed]
- Ferrini K, Ghelfi F, Mannucci R, et al. Lifestyle, nutrition and breast cancer: facts and presumptions for consideration. Ecancermedicalscience 2015;9:557. [Crossref] [PubMed]
- Castelló A, Martin M, Ruiz A, et al. Lower Breast Cancer Risk among Women following the World Cancer Research Fund and American Institute for Cancer Research Lifestyle Recommendations: EpiGEICAM Case-Control Study. PLoS One 2015;10:e0126096. [Crossref] [PubMed]
- Myers ER, McCrory DC, Nanda K, et al. Mathematical model for the natural history of human papillomavirus infection and cervical carcinogenesis. Am J Epidemiol 2000;151:1158-71. [Crossref] [PubMed]
- Boshart M, Gissmann L, Ikenberg H, et al. A new type of papillomavirus DNA, its presence in genital cancer biopsies and in cell lines derived from cervical cancer. EMBO J 1984;3:1151-7. [Crossref] [PubMed]
- Band V, Zajchowski D, Kulesa V, et al. Human papilloma virus DNAs immortalize normal human mammary epithelial cells and reduce their growth factor requirements. Proc Natl Acad Sci U S A 1990;87:463-7. [Crossref] [PubMed]
- Doosti M, Bakhshesh M, Zahir ST, et al. Lack of Evidence for a Relationship between High Risk Human Papillomaviruses and Breast Cancer in Iranian Patients. Asian Pac J Cancer Prev 2016;17:4357-61. [PubMed]
- Bakhtiyrizadeh S, Hosseini SY, Yaghobi R, et al. Almost Complete Lack of Human Cytomegalovirus and Human papillomaviruses Genome in Benign and Malignant Breast Lesions in Shiraz, Southwest of Iran. Asian Pac J Cancer Prev 2017;18:3319-24. [PubMed]
- Gannon OM, Antonsson A, Milevskiy M, et al. No association between HPV positive breast cancer and expression of human papilloma viral transcripts. Sci Rep 2015;5:18081. [Crossref] [PubMed]
- Cavalcante JR, Pinheiro LGP, Almeida PRC, et al. Association of breast cancer with human papillomavirus (HPV) infection in Northeast Brazil: molecular evidence. Clinics (Sao Paulo) 2018;73:e465. [Crossref] [PubMed]
- Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008-12. [Crossref] [PubMed]
- Bae JM, Kim EH. Human papillomavirus infection and risk of breast cancer: a meta-analysis of case-control studies. Infect Agent Cancer 2016;11:14. [Crossref] [PubMed]
- Simões PW, Medeiros LR, Simoes Pires PD, et al. Prevalence of human papillomavirus in breast cancer: a systematic review. Int J Gynecol Cancer 2012;22:343-7. [Crossref] [PubMed]
- Khodabandehlou N, Mostafaei S, Etemadi A, et al. Human papilloma virus and breast cancer: the role of inflammation and viral expressed proteins. BMC Cancer 2019;19:61. [Crossref] [PubMed]
- Malekpour Afshar R, Balar N, Mollaei HR, et al. Low Prevalence of Human Papilloma Virus in Patients with Breast Cancer, Kerman; Iran. Asian Pac J Cancer Prev 2018;19:3039-44. [Crossref] [PubMed]
- ElAmrani A, Gheit T, Benhessou M, et al. Prevalence of mucosal and cutaneous human papillomavirus in Moroccan breast cancer. Papillomavirus Res 2018;5:150-5. [Crossref] [PubMed]
- Salman NA, Davies G, Majidy F, et al. Association of High Risk Human Papillomavirus and Breast cancer: A UK based Study. Sci Rep 2017;7:43591. [Crossref] [PubMed]
- Ladera M, Fernandes A, López M, et al. Presence of human papillomavirus and Ep-stein-Barr virus in breast cancer biopsies as potential risk factors. Gaceta Mexicana de Oncologia 2017;16:107-12.
- Islam S, Dasgupta H, Roychowdhury A, et al. Study of association and molecular analysis of human papillomavirus in breast cancer of Indian patients: Clinical and prognostic implication. PLoS One 2017;12:e0172760. [Crossref] [PubMed]
- Delgado-García S, Martínez-Escoriza JC, Alba A, et al. Presence of human papillomavirus DNA in breast cancer: A Spanish case-control study. BMC Cancer 2017;17:320. [Crossref] [PubMed]
- Zhang N, Ma ZP, Wang J, et al. Human papillomavirus infection correlates with inflammatory Stat3 signaling activity and IL-17 expression in patients with breast cancer. Am J Transl Res 2016;8:3214-26. [PubMed]
- Wang D, Fu L, Shah W, et al. Presence of high risk HPV DNA but indolent transcription of E6/E7 oncogenes in invasive ductal carcinoma of breast. Pathol Res Pract 2016;212:1151-6. [Crossref] [PubMed]
- Li J, Ding J, Zhai K. Detection of Human Papillomavirus DNA in Patients with Breast Tumor in China. PLoS One 2015;10:e0136050. [Crossref] [PubMed]
- Fu L, Wang D, Shah W, et al. Association of human papillomavirus type 58 with breast cancer in Shaanxi province of China. J Med Virol 2015;87:1034-40. [Crossref] [PubMed]
- Peng J, Wang T, Zhu H, et al. Multiplex PCR/mass spectrometry screening of biological carcinogenic agents in human mammary tumors. J Clin Virol 2014;61:255-9. [Crossref] [PubMed]
- Manzouri L, Salehi R, Shariatpanahi S, et al. Prevalence of human papilloma virus among women with breast cancer since 2005-2009 in Isfahan. Adv Biomed Res 2014;3:75. [Crossref] [PubMed]
- Hong L, Tang S. Does HPV 16/18 infection affect p53 expression in invasive ductal car-cinoma? An experimental study. Pak J Med Sci 2014;30:789-92. [Crossref] [PubMed]
- Ali SHM, Al-Alwan NAS, Al-Alwany SHM. Detection and genotyping of human papillomavirus in breast cancer tissues from Iraqi patients. East Mediterr Health J 2014;20:372-7. [Crossref] [PubMed]
- Ahangar-Oskouee M, Shahmahmoodi S, Jalilvand S, et al. No detection of 'high-risk' human papillomaviruses in a group of Iranian women with breast cancer. Asian Pac J Cancer Prev 2014;15:4061-5. [Crossref] [PubMed]
- Liang W, Wang J, Wang C, et al. Detection of high-risk human papillomaviruses in fresh breast cancer samples using the hybrid capture 2 assay. J Med Virol 2013;85:2087-92. [Crossref] [PubMed]
- Sigaroodi A, Nadji SA, Naghshvar F, et al. Human papillomavirus is associated with breast cancer in the north part of Iran. ScientificWorldJournal 2012;2012:837191. [Crossref] [PubMed]
- Glenn WK, Heng B, Delprado W, et al. Epstein-Barr virus, human papillomavirus and mouse mammary tumour virus as multiple viruses in breast cancer. PLoS One 2012;7:e48788. [Crossref] [PubMed]
- Divani SN, Giovani AM. Detection of human papillomavirus DNA in fine needle aspirates of women with breast cancer. Arch Oncol 2012;20:12-4. [Crossref]
- Chang P, Wang T, Yao Q, et al. Absence of human papillomavirus in patients with breast cancer in north-west China. Med Oncol 2012;29:521-5. [Crossref] [PubMed]
- Frega A, Lorenzon L, Bononi M, et al. Evaluation of E6 and E7 mRNA expression in HPV DNA positive breast cancer. Eur J Gynaecol Oncol 2012;33:164-7. [PubMed]
- Mou X, Chen L, Liu F, et al. Low prevalence of human papillomavirus (HPV) in Chinese patients with breast cancer. J Int Med Res 2011;39:1636-44. [Crossref] [PubMed]
- Heng B, Glenn WK, Ye Y, et al. Human papilloma virus is associated with breast cancer. Br J Cancer 2009;101:1345-50. [Crossref] [PubMed]
- He Q, Zhang SQ, Chu YL, et al. The correlations between HPV16 infection and expressions of c-erbB-2 and bcl-2 in breast carcinoma. Mol Biol Rep 2009;36:807-12. [Crossref] [PubMed]
- Mendizabal-Ruiz AP, Morales JA, Ramirez-Jirano LJ, et al. Low frequency of human papillomavirus DNA in breast cancer tissue. Breast Cancer Res Treat 2009;114:189-94. [Crossref] [PubMed]
- de León DC, Montiel DP, Nemcova J, et al. Human papillomavirus (HPV) in breast tumors: prevalence in a group of Mexican patients. BMC Cancer 2009;9:26. [Crossref] [PubMed]
- Fan CL, Zhou JH, Hu CY. Expression of human papillomavirus in mammary carcinoma and its possible mechanism in carcinogenesis. Virologica Sinica 2008;23:226-31. [Crossref]
- Choi YL, Cho EY, Kim JH, et al. Detection of human papillomavirus DNA by DNA chip in breast carcinomas of Korean women. Tumour Biol 2007;28:327-32. [Crossref] [PubMed]
- Tsai JH, Hsu CS, Tsai CH, et al. Relationship between viral factors, axillary lymph node status and survival in breast cancer. J Cancer Res Clin Oncol 2007;133:13-21. [Crossref] [PubMed]
- Gumus M, Yumuk PF, Salepci T, et al. HPV DNA frequency and subset analysis in human breast cancer patients' normal and tumoral tissue samples. J Exp Clin Cancer Res 2006;25:515-21. [PubMed]
- Damin AP, Karam R, Zettler CG, et al. Evidence for an association of human papillomavirus and breast carcinomas. Breast Cancer Res Treat 2004;84:131-7. [Crossref] [PubMed]
- Ren Z, Huang J, Shi Z, et al. Detection of human papillomavirus types 16 and 18 infection in breast cancer tissues by Primed in situ labeling. Zhongguo Zhongliu Linchuang 2003;30:243-6.
- Yu Y, Morimoto T, Sasa M, et al. HPV33 DNA in premalignant and malignant breast lesions in Chinese and Japanese populations. Anticancer Res 1999;19:5057-61. [PubMed]
- Li N, Bi X, Zhang Y, et al. Human papillomavirus infection and sporadic breast carcinoma risk: a meta-analysis. Breast Cancer Res Treat 2011;126:515-20. [Crossref] [PubMed]
- Auborn KJ, Woodworth C, DiPaolo JA, et al. The interaction between HPV infection and estrogen metabolism in cervical carcinogenesis. Int J Cancer 1991;49:867-9. [Crossref] [PubMed]
- Tobouti PL, Bolt R, Radhakrishnan R, et al. Altered Toll-like receptor expression and function in HPV-associated oropharyngeal carcinoma. Oncotarget 2017;9:236-48. [PubMed]
- Malhone C, Longatto-Filho A, Filassi JR. Is Human Papilloma Virus Associated with Breast Cancer? A Review of the Molecular Evidence. Acta Cytol 2018;62:166-77. [Crossref] [PubMed]
- Wang YX, Zhang ZY, Wang JQ, et al. HPV16 E7 increases COX-2 expression and promotes the proliferation of breast cancer. Oncol Lett 2018;16:317-25. [PubMed]
- Yan C, Teng Zhi P, Chen Yun X, et al. Viral Etiology Relationship between Human Papillomavirus and Human Breast Cancer and Target of Gene Therapy. Biomed Environ Sci 2016;29:331-9. [PubMed]
- Burns MB, Lackey L, Carpenter MA, et al. APOBEC3B is an enzymatic source of mutation in breast cancer. Nature 2013;494:366-70. [Crossref] [PubMed]
- Ohba K, Ichiyama K, Yajima M, et al. In vivo and in vitro studies suggest a possible involvement of HPV infection in the early stage of breast carcinogenesis via APOBEC3B induction. PLoS One 2014;9:e97787. [Crossref] [PubMed]
- Vieira VC, Leonard B, White EA, et al. Human papillomavirus E6 triggers upregulation of the antiviral and cancer genomic DNA deaminase APOBEC3B. MBio 2014;5:e02234-14. [Crossref] [PubMed]
- Lawson JS, Glenn WK, Whitaker NJ. Human Papilloma Viruses and Breast Cancer - Assessment of Causality. Front Oncol 2016;6:207. [Crossref] [PubMed] | <urn:uuid:41fd444b-2053-4b42-9f03-2586085276b1> | {
"dump": "CC-MAIN-2020-34",
"url": "http://gs.amegroups.com/article/view/29046/26271",
"date": "2020-08-12T08:53:01",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00579.warc.gz",
"language": "en",
"language_score": 0.8812189102172852,
"token_count": 8237,
"score": 3.046875,
"int_score": 3
} |
Spinosaurus, made famous by the Jurassic Park movies, was originally discovered in Egypt in the early 1900s by a German paleontologist. For decades it was known from only a few bones, which were unfortunately destroyed during WWII. However, many photographs and casts survived, and in the 1990s paleontologists discovered more Spinosaurus remains, including parts of the jaws and many teeth.
It has been determined that Spinosaurus was actually a fish eater, due to its narrow, lightweight skull and a narrow snout filled with straight, conical teeth that lack serrations. These are the teeth of a fish eater. Because of the lightweight jaws, it would not have been possible to crush and chew other prey. One of Spinosaurus' favorite meals was Onchopristis, or "Giant Saw," a sawfish that could grow to 8 meters and weigh close to 1.5 tons.
This tooth is 3 1/4 inches long, and the base measures 7/8 inch. The tooth in the photo is the tooth you will receive.
Shed “Spitter” tooth
Kem Kem Fossil Beds Formation | <urn:uuid:64b6948e-0233-4608-8010-f5b83c0510f3> | {
"dump": "CC-MAIN-2020-16",
"url": "https://www.paleojoe.com/product/large-spinosaurus-tooth-2/",
"date": "2020-04-06T02:14:40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00197.warc.gz",
"language": "en",
"language_score": 0.9790468215942383,
"token_count": 240,
"score": 3.515625,
"int_score": 4
} |
As someone who loves cooking and cleaning, I often reach for baking soda as a go-to ingredient. However, there is a common question that many people ask: “Is baking soda an acid or a base?” The answer may surprise you, but fear not! In this article, I will explore the properties of baking soda and provide you with everything you need to know about its chemical makeup.
When it comes to understanding the properties of baking soda, it’s essential to have a basic understanding of what acids and bases are. Acids are substances that have a pH level of less than 7 and can donate hydrogen ions (H+) in a chemical reaction. On the other hand, bases have a pH level greater than 7 and can accept hydrogen ions.
So, what category does baking soda fall into? Well, baking soda, also known as sodium bicarbonate, forms a solution with a pH of around 8.3, which makes it a mild base. When mixed with an acid, such as vinegar, it undergoes a chemical reaction that produces carbon dioxide gas. This reaction is what makes baking soda useful in cooking and as a cleaning agent.
Is Baking Soda an Acid or Base?
Baking soda is a base. In solution, its pH is around 8.3, which classifies it as a weakly alkaline substance. When baking soda comes into contact with an acid, such as vinegar or buttermilk, it reacts and produces carbon dioxide gas, which causes baked goods to rise.
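As a concrete example, here is the reaction behind the classic baking-soda-and-vinegar fizz, with acetic acid (the acid in vinegar) reacting with sodium bicarbonate to give sodium acetate, water, and the carbon dioxide gas that produces the bubbles:

$$\mathrm{NaHCO_3 + CH_3COOH \longrightarrow CH_3COONa + H_2O + CO_2\uparrow}$$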
Understanding Acids and Bases
Before we delve any further, let’s take a closer look at how acids and bases work. Acids and bases are both corrosive to certain materials, which makes them useful for cleaning and other purposes. Acids are commonly found in fruits, such as lemons and oranges, and they have a sour taste. Strong acids can be dangerous and cause chemical burns or other injuries.
Bases, on the other hand, are typically bitter-tasting and slippery to the touch. They can be found in household items like bleach, soap, and ammonia. Bases are also used to neutralize acids in certain situations, such as in antacid medications to treat heartburn or indigestion.
Baking Soda Chemical Formula and Structure
So, what is the chemical formula for baking soda, and what is its structure? The molecular formula for baking soda is NaHCO3, which means it contains one sodium (Na) atom, one hydrogen (H) atom, one carbon (C) atom, and three oxygen (O) atoms.
Baking soda is a white crystalline powder that is odorless but has a slightly salty taste. It is soluble in water, which means it can easily dissolve in liquids to create a solution.
The pH of Baking Soda
We already know that baking soda is a base, with a pH of around 8.3 in solution. But how does this affect its uses in cooking and cleaning? Well, as I mentioned earlier, when baking soda is mixed with an acid, such as vinegar or lemon juice, it undergoes a chemical reaction that produces carbon dioxide gas. This reaction causes bubbling and fizzing, which can be useful in cooking to help dough or batter rise.
In cleaning, baking soda’s alkalinity makes it an effective agent for removing stains and odors. Its ability to neutralize acids also makes it useful for deodorizing refrigerators or other areas that may have strong smells.
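The ~8.3 figure quoted in this section can be estimated from standard acid dissociation constants. Bicarbonate is amphiprotic (it can donate or accept a proton), and for such species the solution pH is approximately the average of the two neighboring pKa values of carbonic acid. A quick sketch using textbook 25 °C values:

```python
pKa1 = 6.35   # carbonic acid:  H2CO3  <->  H+ + HCO3-   (25 C)
pKa2 = 10.33  # bicarbonate:    HCO3-  <->  H+ + CO3^2-  (25 C)

# For an amphiprotic ion such as HCO3-, solution pH is approximately
# (pKa1 + pKa2) / 2, and is nearly independent of concentration.
ph_estimate = (pKa1 + pKa2) / 2
print(f"estimated pH of a baking soda solution: {ph_estimate:.2f}")  # ~8.34
```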
How Does Baking Soda Affect pH Levels?
Another interesting property of baking soda is its ability to affect the pH levels of certain substances. When baking soda is added to an acidic substance, it can act as a buffer, neutralizing the acid and raising the pH level. On the other hand, when baking soda is added to something that is too basic, it can pull the pH back down toward its own mildly alkaline level.
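That buffering behavior can be sketched with the Henderson-Hasselbalch equation for the carbonic acid / bicarbonate pair. The concentration ratios below are illustrative only, not measurements:

```python
from math import log10

PKA1 = 6.35  # carbonic acid / bicarbonate pair, 25 C

def buffer_ph(bicarbonate, carbonic_acid):
    """Henderson-Hasselbalch: pH = pKa + log10([base] / [acid])."""
    return PKA1 + log10(bicarbonate / carbonic_acid)

# Illustrative ratios: more bicarbonate relative to dissolved acid -> higher pH.
for ratio in (0.1, 1.0, 10.0):
    print(f"[HCO3-]/[H2CO3] = {ratio:>4} -> pH = {buffer_ph(ratio, 1.0):.2f}")
```

A tenfold change in the base-to-acid ratio shifts the pH by only one unit, which is exactly the resistance to pH swings that makes bicarbonate a useful buffer.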
Uses of Baking Soda as a Base
Now that we know all about baking soda’s properties, let’s explore some of its common uses in different industries.
One of the most well-known uses of baking soda is in cooking as a leavening agent. When combined with an acidic ingredient, such as buttermilk or cream of tartar, baking soda reacts to produce carbon dioxide gas, which causes dough or batter to rise. Baking soda can also be used to tenderize meat or help neutralize the acidity in tomato-based sauces.
Beyond its uses in cooking, baking soda's alkaline properties make it an effective cleaning agent. It can be used to scrub surfaces like countertops, sinks, and bathtubs, as well as to remove stains from clothing or carpets. Baking soda can even be used to clean your teeth or neutralize odors in shoes or gym bags.
Health and Beauty
Baking soda’s alkaline properties make it useful in personal care products as well. It can be used as a natural deodorant, toothpaste, or exfoliant for the skin. Some people even use it as a remedy for heartburn or indigestion.
In conclusion, baking soda is an alkaline substance with a solution pH of around 8.3, which makes it a base. Its unique chemical properties make it useful in a variety of industries, including cooking, cleaning, and personal care. Baking soda's ability to react with acids to produce carbon dioxide gas makes it an important ingredient in baking, where it serves as a leavening agent. Its alkalinity also makes it effective for cleaning surfaces and neutralizing odors.
Baking soda can be safely used in moderation, but it’s important to note that excessive consumption or use can have adverse health effects. When using baking soda, always follow proper safety guidelines and avoid mixing it with other chemicals without proper knowledge and precautions.
Overall, baking soda’s unique properties make it a versatile substance with many practical uses. Whether you’re trying to bake the perfect loaf of bread, clean your kitchen, or freshen up your shoes, baking soda is definitely worth keeping on hand.
Why is baking soda an acid?
Baking soda is not an acid, but rather an alkaline compound. However, baking powder contains sodium bicarbonate (the same compound as baking soda) as well as two acids. When baking soda is mixed with an acid, it reacts to produce carbon dioxide gas, which causes baked goods to rise.
Is baking soda a strong or weak base?
Baking soda, also known as sodium bicarbonate, is a weak base. When dissolved in water, it has a pH of about 8.3, which is above 7, the pH of neutral water. While it is more basic than neutral water, it is far less basic than strong household bases such as ammonia or bleach.
Is baking powder an acid and a base?
Yes, baking powder contains both an acid and a base component. It is a mixture of carbonate or bicarbonate and a weak acid. The acid and base components are prevented from reacting with each other until they are mixed with a liquid. | <urn:uuid:4d4c6f53-6a28-4a64-b34f-61c8b1e676a8> | {
"dump": "CC-MAIN-2023-23",
"url": "https://bakingbakewaresets.com/is-baking-soda-an-acid-or-base/",
"date": "2023-06-02T07:48:56",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648465.70/warc/CC-MAIN-20230602072202-20230602102202-00533.warc.gz",
"language": "en",
"language_score": 0.9490687847137451,
"token_count": 1459,
"score": 3.390625,
"int_score": 3
} |
Rubens was born in Westphalia in present-day Germany in 1577. He moved with his family to Antwerp at a young age and trained under a handful of artists there. Arguably his real education, however, came upon leaving for Italy in 1600, where he encountered the works of Mantegna, Michelangelo, Raphael, Titian and other masters.
Rubens ended up spending eight years in Italy, most of it as court painter to Vincenzo I Gonzaga, Duke of Mantua. He didn’t only create artworks for Gonzaga, however — he also went on diplomatic missions for him. An urbane figure who spoke five languages, Rubens for much of his life twinned a career as an emissary with one as an artist.
In 1608, he returned to Antwerp for good. The city was then part of the Spanish Netherlands, and Rubens — a devout Catholic — frequently produced pictures in line with the Counter Reformation. Two of his best-known works, the triptychs The Elevation of the Cross and The Descent from the Cross, were painted for Antwerp’s Cathedral of Our Lady and can still be seen there.
His style was essentially a mix of Northern European naturalism with the colour and dramatics of Italian Renaissance art.
‘The painter of princes and the prince of painters’ is how one of Rubens’ peers described him. This reflected the fact that his works were coveted by many of Europe’s most important figures. These included Archduke Albert and Archduchess Isabella, the rulers of the Spanish Netherlands; Charles I, the King of England; and Philip IV, the King of Spain, for whom he painted Saturn devouring a Son in 1636–38 (a canvas which can today be found in the Museo del Prado in Madrid).
Late in life, Rubens turned increasingly to painting landscapes around a rural castle he acquired called Het Steen. He died in 1640, aged 62.
Portrait of a commander, three-quarter-length, being dressed for battle
Two studies of a man, head and shoulders
Portrait of a young woman, half-length, holding a chain
Head of a bearded man in profile holding a bronze figure
Scipio Africanus welcomed outside the gates of Rome, after Giulio Romano
Portrait of a lady, probably Isabella Brant (1591-1626), as a shepherdess
The Archduke Albert and Infanta Isabella, Governors of the Netherlands: Design for the title page of the 'Gelresche Rechten' ('Rights of the Province of Gelderland') (recto); The same composition traced through in reverse (verso)
An écorché study of the legs of a male nude, with a subsidiary study of the right leg
Sir Peter Paul Rubens (Siegen 1577-1640 Antwerp) extensively reworking a drawing attributed to Hans Witdoeck (Antwerp 1615-after 1642)
Saint Ildefonso receiving the Chasuble from the Virgin
A double-sided sheet of studies: Hippodameia abducted by the centaur Eurytion, and Hercules overcoming the river-god Achelous in the form of a bull (recto); Christ shown to the People, and The Way to Calvary (verso)
The Virgin and Child with Saints George, Jerome, Mary Magdalene and three others
Eucharistic Teachers and Saints: Gregory, Ambrose, Augustine, Clara, Thomas Aquinas, Norbert and Jerome, with the dove of the Holy Spirit
Three figures in classical mantles, probably apostles
The Virgin supporting the Christ Child on a parapet
Portrait of Cornelis Lantschot, three-quarter length, holding gloves, a landscape beyond
The Roman Emperors Augustus; Caligula; Claudius; Nero; Galba; Vespasian; Titus; and Domitian
The Virgin and Child surrounded by flowers
Portrait of a gentleman, probably Peter van Hecke; and Portrait of a Lady, probably his wife, Clara Fourment.
Portrait of a lady, traditionally identified as Helena Fourment (1614-1673)
"dump": "CC-MAIN-2023-40",
"url": "https://www.christies.com.cn/en/artists/peter-paul-rubens",
"date": "2023-09-28T00:59:45",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510334.9/warc/CC-MAIN-20230927235044-20230928025044-00138.warc.gz",
"language": "en",
"language_score": 0.9446760416030884,
"token_count": 899,
"score": 3.078125,
"int_score": 3
} |
Since the enforcement of the first environmental laws in the 19th century, the implementation of decrees to maximise environmental sustainability has increased dramatically; however, the state of the environment is far from pristine. Preserving the environment is not only the responsibility of corporations and communities: it is crucial that each individual play a role in its conservation. With the right initiatives and values, there are a number of simple tasks households can incorporate within their daily routines to assist in the preservation of nature (Nováček, 2013).
The following are changes that individuals can make within their lifestyles in order to maximise sustainability in the developed world.
Supporting sustainable resources:
In order to increase sustainability, individuals should support sustainable sources, for example by purchasing solar panels to generate their household's energy.
Solar panels are easily accessible, require little maintenance, and can be expected to last 20 years or more (Solar Power, 2014). They are an effective way of producing energy while also being harmless to the environment. The use of solar panels within households has become increasingly popular in the last few years, as awareness of the long-term benefits for both consumers and the environment grows. Solar panels reduce the pressure placed on the environment because solar power is a renewable and sustainable source, retrieving energy from the sun rather than from the burning of harmful fossil fuels (Crabtree & Lewis, 2007). For consumers, the system eventually pays for itself: though it is initially an out-of-pocket expense, many households ultimately come out ahead. It is thus both harmless and beneficial, with no known negative health impacts.
Refer to Figure A below for DPSIR framework.
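As a rough illustration of that payback logic, here is a back-of-envelope calculation in Python; every figure below is an assumed placeholder rather than data from the essay or its sources:

```python
# Back-of-envelope solar payback arithmetic (all numbers illustrative).
system_cost = 6000.0           # upfront cost of the installed system, $
annual_generation_kwh = 5500   # rough yearly output of a ~4 kW array
electricity_price = 0.25       # $ per kWh of grid power offset

annual_saving = annual_generation_kwh * electricity_price
payback_years = system_cost / annual_saving
print(f"Payback in about {payback_years:.1f} years")  # ~4.4 years here

# With panels lasting 20+ years, everything generated after the
# payback point is the "profit" the essay refers to.
```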
Cutting unnecessary energy consumption:
The use of solar panels goes hand in hand with cutting down unnecessary energy consumption. In order for the environment and the consumers to reap the maximum benefits possible from the installation of solar panels, unnecessary energy consumption must be prevented.
There is a plethora of easily avoidable instances where individuals use energy unnecessarily without noticing. Common examples within the average household in the developed world include power switches left on when not in use, TVs and other appliances running while serving no purpose, and lights left on in unoccupied rooms. Each individual should take appropriate measures to avoid these: make sure power switches are off when not needed, or use a smart power board; other smart home habits include low-energy light bulbs and other energy-efficient appliances (Getting Started - Energy, 2014). By avoiding unnecessary energy consumption, the individual supports sustainable measures and cuts down the cost of their energy bill.
Refer to figure B below for DPSIR framework.
Reduce, reuse, recycle:
In addition to the use of solar panels or other sustainable sources and cutting down on unnecessary energy consumption, another approach to maximising sustainability within the developed world is the process of reducing, reusing and recycling. This is a broad and simple task that every household can take on board. A basic example that still has enormous benefits on the environment is...
"dump": "CC-MAIN-2019-47",
"url": "https://brightkite.com/essay-on/changes-that-individuals-in-the-developed-world-should-undertake-to-increase-environmental-sustainability",
"date": "2019-11-17T02:39:07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00265.warc.gz",
"language": "en",
"language_score": 0.9393659234046936,
"token_count": 665,
"score": 3.4375,
"int_score": 3
} |
Thyroid problems affect not only humans but also cats. The most common thyroid problem experienced by felines is hyperthyroidism, caused by an excessive concentration of circulating thyroxine, a thyroid hormone also known as T4.
Studies have shown that hyperthyroidism can affect any breed of cat and either sex, but it almost always occurs in older cats. Only around 6% of cases happen in cats under 10 years of age, and the average age of those affected is between 12 and 13 years old.
The two main symptoms of hyperthyroidism in cats are weight loss and an increased appetite. Weight loss is seen in around 95% of cases, while an increase in appetite is seen in 67-81%. Other symptoms include excessive thirst, increased urination, hyperactivity and diarrhoea. Some have noted that an unkempt appearance, increased fur shedding and panting can be associated with the condition, while around half of affected cats vomit.
Many of the symptoms of hyperthyroidism are similar to other conditions that older cats suffer from such as diabetes, inflammatory bowel disease, chronic kidney failure and types of cancer. Therefore, a vet uses a battery of tests to make a diagnosis including a CBC, chemistry panel and urinalysis. These can rule out a number of common conditions. Added to this a blood test is taken that will show elevation of the T4 levels in their bloodstream, though a small percentage of cats with the condition will still show normal levels.
There are a number of different treatment paths that can be used which have advantages and disadvantages.
Oral administration of anti-thyroid medicine is one option, with a drug called methimazole being the most commonly used. It can start correcting the condition in as little as 2-3 weeks, but around 10-15% of cats experience side effects, including loss of appetite, vomiting, lethargy and occasionally a blood cell abnormality. Most side effects are mild and wear off with time, though some of the more serious ones can mean stopping the medication. The biggest common problem arises when a cat refuses to take the tablet, as the medication must be administered for life.
Surgical removal of the thyroid gland is an option when the condition causes benign tumours, called thyroid adenoma that can affect one or both of the thyroid glands. It can be expensive in the short term but may save years of medication and check-ups.
Radioactive iodine therapy is the safest and most effective option. Radioactive iodine is given by injection and concentrates in the thyroid gland where it irradiates and destroys the hyperfunctioning tissues. Only a single treatment is needed and no surgery is involved. Originally, it was difficult to obtain but now more and more facilities are licensed to give the treatment. It still remains expensive however, with cost usually from $500-800 including the cost of being in the treatment centre for up to 14 days for recovery.
It is always best to check with a vet whenever you think there may be something wrong with your cat. Also, remember that cats are very adept at hiding problems so you may need to observe your cat carefully to catch signs something is wrong but as soon as you do, take them to see a professional for a full diagnosis and treatment. | <urn:uuid:8881c55b-5fc5-4111-a1b8-780b897cab05> | {
"dump": "CC-MAIN-2015-40",
"url": "http://www.thyroidfoundation.org/",
"date": "2015-10-04T03:13:53",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736672328.14/warc/CC-MAIN-20151001215752-00065-ip-10-137-6-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9664820432662964,
"token_count": 699,
"score": 3.1875,
"int_score": 3
} |
Two dangerous things together might make a medicine for one of the hardest cancers to treat. In a mouse model of pancreatic cancer, researchers have shown that bacteria can deliver deadly radiation to tumours — exploiting the immune suppression that normally makes the disease so intractable.
Fewer than one in 25 people diagnosed with pancreatic cancer are alive five years later. Chemotherapy, surgery and radiotherapy are generally ineffective, mainly because the disease has often spread to other organs even before it is detected.
The work, which is described in the Proceedings of the National Academy of Sciences1, began when Ekaterina Dadachova of the Albert Einstein College of Medicine in New York thought of combining two ways to fight cancer. She studies how radioactive isotopes can be used as anti-cancer weapons, and her colleague Claudia Gravekamp has been looking at whether weakened bacteria can be used to carry compounds that provoke a patient’s white blood cells into attacking the cancer. “I thought maybe we could combine the power of radiation with the power of live bacteria,” Dadachova says.
Sometimes found in food, the bacterium Listeria monocytogenes can cause severe infection, but is usually wiped out by the immune system. Exploiting the fact that cancer cells tend to suppress the immune reaction to avoid being destroyed, the two researchers and their collaborators decided to coat Listeria with radioactive antibodies and inject the bacterium into mice with pancreatic cancer that had spread to multiple sites. After several doses, the mice that had received the radioactive bacteria had 90% fewer metastases compared with mice that had received saline or radiation alone. “That was the first time we'd seen such a big effect,” says Gravekamp.
The immune system rapidly clears Listeria from healthy tissue, says Gravekamp, but tumour cells suppress the immune system and allow Listeria to remain. That means that tumour cells will receive continuous exposure but normal cells will be spared, she says.
But Elizabeth Jaffee, an oncologist at Johns Hopkins University in Baltimore, Maryland, who has used non-radioactive Listeria in human trials for advanced cancers, including pancreatic cancer, says that some of the observations in the paper are hard to explain, particularly how weakened Listeria gets into metastases and why it's ineffective against the primary tumour.
Other researchers worry that healthy organs may receive excessive amounts of radiation. James Abbruzzese, an oncologist at the University of Texas MD Anderson Cancer Center in Houston, says that the levels of radiation reported in the liver and other organs were disturbingly high, and that he would have liked clearer data that the radiation is being delivered specifically to tumours.
Estimating dose levels between animals and humans is not always straightforward, but Dadachova counters that, according to her calculations, the radiation levels are below what is considered the safety threshold for humans, and that patients with pancreatic cancer tend to be less prone to radiation sickness because they have not usually received chemotherapy beforehand.
Joseph Herman, a radiation oncologist at Johns Hopkins, says that he would have liked to have seen results for other tumour types. And although the study found no signs of tissue damage one week after high-dose treatment of radioactive Listeria, Herman thinks that the effects of radiation might take longer to show up.
Still, Herman says, the approach might present an option where few exist. “The benefit is that it's a way of killing cancer cells in a cancer where therapy has not been very effective,” he says. “It's exciting, but it needs to be further validated.”
We are all probably aware of the detection of metabolites in wastewater to indicate the levels of illegal drug usage in a locality, but researchers at the University of Valencia (UV), Spain, have extended this to monitor alcohol consumption in near real time.
They have used ion-pair liquid chromatography-tandem mass spectrometry (LC-MS/MS) to monitor the levels of metabolic by-products reaching wastewater treatment plants, as reported in Science of the Total Environment. Ethyl sulphate is one of the more stable chemical compounds released in our urine after we consume alcohol. Already used in workplace and rehabilitation-centre alcohol testing, this compound is now being proposed as an indicator of real-time per capita alcohol consumption. Results have shown that average alcohol consumption rockets to 400% of normal levels during the city's annual festivities, known as Fallas. Indeed, it peaks at up to six times normal levels on the final night. This is perhaps not that surprising, especially for anyone who has experienced Fallas and lived to tell the tale. Fallas is a five-day bonanza of traditional dress, parades, professional-level fireworks and street parties.
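The back-calculation behind such estimates is simple arithmetic: measure the metabolite load arriving at the plant, scale it back up to the alcohol that must have been drunk, and divide by the population served. A minimal Python sketch follows; the correction factor and all the numbers are illustrative assumptions, not values from the Valencia study:

```python
def per_capita_alcohol(ets_ng_per_l: float, flow_l_per_day: float,
                       population: int, correction: float = 4000.0) -> float:
    """Estimate pure-alcohol consumption in g/day/person.

    `correction` converts the mass of ethyl sulphate in the sewer back
    into the mass of ethanol originally drunk; it folds together the
    tiny urinary excretion fraction and the ethanol/metabolite
    molar-mass ratio, and its value here is only a rough assumption.
    """
    ets_g_per_day = ets_ng_per_l * flow_l_per_day * 1e-9  # daily load, g
    return ets_g_per_day * correction / population

# Illustrative: 5000 ng/L in 300 million litres/day, 800,000 people
print(round(per_capita_alcohol(5000, 3e8, 800_000), 1), "g/day/person")
```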
Existing estimations of per capita alcohol consumption rely on surveys and sales figures, which are limited, since they do not take into account either home-made alcohol production, stockpiling or wastage. They can tell us how much we buy, where we buy it and when, and how much we think we drink in an average week, but these indicators cannot accurately reflect how much we actually drink and when.
Indeed, the technique can even reveal what people are drinking. In the case of Valencia during Fallas, beer comes in at first place, accounting for 50% of all alcohol consumed in this period, followed by spirits (28%) and wine (20%).
As researcher Yolanda Picó tells us, this new technique is good news from a health perspective, as it will now be possible to monitor and therefore predict drinking levels at a particular event or during festive periods, and act accordingly. Peaks in consumption will no longer be diluted into annual and/or nationwide figures, giving us a much more detailed picture of alcohol consumption.
"dump": "CC-MAIN-2020-05",
"url": "https://www.spectroscopyeurope.com/news/spy-sewer-lc-msms-tells-all",
"date": "2020-01-24T05:15:25",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250615407.46/warc/CC-MAIN-20200124040939-20200124065939-00468.warc.gz",
"language": "en",
"language_score": 0.9606059193611145,
"token_count": 441,
"score": 3.09375,
"int_score": 3
} |
Seeds are a vital part of an agricultural endeavor, such as a farm or business. Government grants for agricultural projects ranging from education and research to sustainable farming practices and energy creation can support seed-based projects. Though a government agency might not expressly have a granting program for seeds, many agencies administered by the United States Department of Agriculture have grant programs for which seed-based proposals qualify.
The USDA administers grant funding for which seed-based projects are eligible, such as the Sustainable Agriculture Research and Education program. SARE supports profitable and environmentally responsible agricultural projects. To be competitive, your seed-based project should align with the SARE mission. For example, sowing wheat between growing seasons can stave off soil erosion and replenish the soil's nutrients; a SARE grant could help with the cost of buying seeds in bulk. Describe how the seeds you will plant will support sustainable and profitable growth as well as how you will use the grant funding.
Farms producing or supporting alternative energy projects -- such as ethanol production -- can qualify for a grant from the USDA's Rural Energy for America Program. REAP grants help fund renewable energy development in the rural United States by covering up to 25 percent of a project's total cost. A REAP grant can be used to purchase corn, switchgrass or soybean seed, all of which can be used to create alternative fuels. REAP grants are awarded in part based on financial need, so detail why you need the grant to buy seeds. Farmers, ranchers and small-business owners are eligible for REAP grants.
The cost of seed to sow an educational farm or to fund a research program can be covered by several government grants. The USDA's Alternative Farming Systems Information Center grants funding for classroom and on-farm research projects. A teacher could use AFSIC funding to purchase the seeds necessary to teach his students how to grow and care for plants in an urban setting, for example, or a graduate student could qualify for a grant to buy seeds to study crop pairing for her thesis. Consult the USDA's "Agricultural Research" magazine to get an idea of the sorts of projects AFSIC grants support.
Heirloom and historical seeds are a source of diversity and cultural importance because they produce varieties of fruits and vegetables that are uncommon today, but have been widely used by farmers throughout history. National Institute of Food and Agriculture grants help fund organic agriculture research, which can include organic seeds. NIFA grants are for producers and processors who have already adopted organic food standards and would like to expand their capacities. A NIFA grant could help produce and distribute heirloom seeds for fruits, vegetables and flowers, many of which are organically produced.
- SARE Grant Information
- SARE Vision & Mission
- USDA Rural Development: Rural Energy For America Program Grants/Renewable Energy Systems/Energy Efficiency Improvement Program (REAP/RES/EEI)
- AFSIC: Education and Research
- AFSIC: Grants and Loans for Farmers
- NIFA: Organic Agriculture Research and Extension Initiative
Do you agree or disagree with the following statement?
In the past, young people depended too much on their parents to make decisions for them; today young people are better able to make decisions about their own lives.
Use specific reasons and examples to support your answer.
Some people say that in the past the young generations were too much dependent on their parents as parents were in charge of their life, but in the present time, the young people are more independent and make their decision of their own. I am agreeing with this stance and believe that young people are now more independent for three reasons.
First of all, young people now make their own decision as they earn their own money. For example, young people are now allowed to do jobs like a grown person and by this process, they have become economically stable and can make their decision of their own. In the past, our grandparents were not allowed to do any jobs when they were young and due to this their decision-making capabilities were restricted. As the children are now earning money, they can use this money of their own and can make their own decisions.
Second of all, young people are now more mentally developed than their parents. For instance, scientists have now found that the brain developments of young people are more rapid than the past generations. As the brain is developing faster, they are now maturing faster and making their decision of their own. In the past, young people were not maturing faster and due to this their decision-making capabilities were hindered. As a mature people like to make their own planning, these younger people like to make their own decision in life
Finally, young people are now more educated, as a result, they like to take their decision of their own. For example, younger people are now learning more easily as it is easy to get educated. As we know, an educated person likes to take their own decision, similarly, the young people like to take their decision of their own. In the past, education was limited to only the rich people and due to this, the older generation was dependent on their parents for any decision.
For those above three reasons, I believe that young people make the decision about their life by themselves as they have money, more education, and further mental development.
Grammar and spelling errors:
Line 1, column 241, Rule ID: PROGRESSIVE_VERBS
Message: This verb is normally not used in the progressive form. Try a simple form instead.
...and make their decision of their own. I am agreeing with this stance and believe that young...
Line 3, column 124, Rule ID: ALLOW_TO
Message: Did you mean 'doing'? Or maybe you should add a pronoun? In active voice, 'allow' + 'to' takes an object, usually a pronoun.
.... For example, young people now allowed to do jobs as like a grown person and by this...
Transition Words or Phrases used:
but, finally, first, if, second, similarly, so, for example, for instance, as a result, first of all
Attributes: Values AverageValues Percentages(Values/AverageValues)% => Comments
Performance on Part of Speech:
To be verbs : 17.0 15.1003584229 113% => OK
Auxiliary verbs: 3.0 9.8082437276 31% => OK
Conjunction : 10.0 13.8261648746 72% => OK
Relative clauses : 5.0 11.0286738351 45% => More relative clauses wanted.
Pronoun: 45.0 43.0788530466 104% => OK
Preposition: 43.0 52.1666666667 82% => OK
Nominalization: 4.0 8.0752688172 50% => More nominalizations (nouns with a suffix like: tion ment ence ance) wanted.
Performance on vocabulary words:
No of characters: 1565.0 1977.66487455 79% => OK
No of words: 333.0 407.700716846 82% => More content wanted.
Chars per words: 4.6996996997 4.8611393121 97% => OK
Fourth root words length: 4.27180144563 4.48103885553 95% => OK
Word Length SD: 2.31601435251 2.67179642975 87% => OK
Unique words: 132.0 212.727598566 62% => More unique words wanted.
Unique words percentage: 0.396396396396 0.524837075471 76% => More unique words wanted or less content wanted.
syllable_count: 486.0 618.680645161 79% => OK
avg_syllables_per_word: 1.5 1.51630824373 99% => OK
A sentence (or a clause, phrase) starts by:
Pronoun: 8.0 9.59856630824 83% => OK
Article: 3.0 3.08781362007 97% => OK
Subordination: 4.0 3.51792114695 114% => OK
Conjunction: 2.0 1.86738351254 107% => OK
Preposition: 3.0 4.94265232975 61% => OK
Performance on sentences:
How many sentences: 15.0 20.6003584229 73% => Need more sentences. Double check the format of sentences, make sure there is a space between two sentences, or have enough periods. And also check the lengths of sentences, maybe they are too long.
Sentence length: 22.0 20.1344086022 109% => OK
Sentence length SD: 41.8961679499 48.9658058833 86% => OK
Chars per sentence: 104.333333333 100.406767564 104% => OK
Words per sentence: 22.2 20.6045352989 108% => OK
Discourse Markers: 6.66666666667 5.45110844103 122% => OK
Paragraphs: 5.0 4.53405017921 110% => OK
Language errors: 2.0 5.5376344086 36% => OK
Sentences with positive sentiment : 6.0 11.8709677419 51% => More positive sentences wanted.
Sentences with negative sentiment : 0.0 3.85842293907 0% => More negative sentences wanted.
Sentences with neutral sentiment: 9.0 4.88709677419 184% => OK
Coherence and Cohesion:
Essay topic to essay body coherence: 0.460886513967 0.236089414692 195% => OK
Sentence topic coherence: 0.216823616986 0.076458572812 284% => Sentence topic similarity is high.
Sentence topic coherence SD: 0.0938077659779 0.0737576698707 127% => OK
Paragraph topic coherence: 0.324101446466 0.150856017488 215% => OK
Paragraph topic coherence SD: 0.0641243354485 0.0645574589148 99% => OK
automated_readability_index: 11.8 11.7677419355 100% => OK
flesch_reading_ease: 57.61 58.1214874552 99% => OK
smog_index: 8.8 6.10430107527 144% => OK
flesch_kincaid_grade: 10.7 10.1575268817 105% => OK
coleman_liau_index: 9.98 10.9000537634 92% => OK
dale_chall_readability_score: 7.05 8.01818996416 88% => OK
difficult_words: 49.0 86.8835125448 56% => More difficult words wanted.
linsear_write_formula: 13.5 10.002688172 135% => OK
gunning_fog: 10.8 10.0537634409 107% => OK
text_standard: 11.0 10.247311828 107% => OK
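The first two readability scores in this block follow published formulas; here is a minimal Python sketch (the syllable count is taken from the table above rather than computed, since syllable-counting heuristics vary between tools):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Flesch Reading Ease: higher means easier to read.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Flesch-Kincaid: approximate US school grade level.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Using this essay's counts (333 words, 15 sentences, 486 syllables):
print(round(flesch_reading_ease(333, 15, 486), 2))   # ~60.8
print(round(flesch_kincaid_grade(333, 15, 486), 2))  # ~10.3
# The grader reports 57.61 and 10.7; the gap comes from how it
# tokenizes words and counts syllables, not from the formulas.
```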
We are expecting: No. of Words: 350 while No. of Different Words: 200
Rates: 63.3333333333 out of 100
Scores by essay e-grader: 19.0 Out of 30
Note: the e-grader does NOT examine the meaning of words and ideas. VIP users will receive further evaluations by advanced module of e-grader and human graders.
"dump": "CC-MAIN-2021-49",
"url": "https://www.testbig.com/independent-toefl-writing-essays/do-you-agree-or-disagree-following-statement-past-young-people-96",
"date": "2021-12-02T16:45:30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00600.warc.gz",
"language": "en",
"language_score": 0.9277235865592957,
"token_count": 2085,
"score": 3.421875,
"int_score": 3
} |
An angle on bunions
A bunion is a swelling, usually located on the first toe joint, caused by an inflamed or irritated bursa, the sac protecting the joint. The underlying cause of most bunions is hallux valgus, a misalignment of the toe bones. The metatarsal bone points outward and the phalanges point inward.
A bunion can be a source of pain and hallux valgus widens the foot, making shoe wear increasingly difficult. Small bunions should be treated conservatively using well-fitting shoes and special toe pads or corrective socks which hold the toe in place. If the bunion gets larger, the pain becomes unbearable or shoe wear becomes difficult, surgery can remove the bunion and straighten the toe bones.
Text and illustrations by Kevin T. Boyd
Acupressure for foot pain
Here are lists of acupressure points for Foot pain, on PointFinder.org.
If this is your first time, please read the instructions. Don't use acupressure to replace standard emergency procedures or licensed medical treatment. If you are seriously injured or have acute symptoms, seek urgent medical treatment.
"dump": "CC-MAIN-2019-35",
"url": "https://www.pointfinder.org/health-infographics/bunions/",
"date": "2019-08-23T16:03:45",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318894.83/warc/CC-MAIN-20190823150804-20190823172804-00488.warc.gz",
"language": "en",
"language_score": 0.8757361173629761,
"token_count": 244,
"score": 3.078125,
"int_score": 3
} |
Wanted to divorce his first wife, Catherine of Aragon because she had failed to produce a male heir
Had fallen in love with Anne Boleyn (a lady-in-waiting to the queen). Anne refused to be just a mistress, and Henry needed a legitimate son for a male heir.
The Pope might, under normal circumstances, have annulled the marriage, but the Sack of Rome in 1527 had made the Pope dependent on Charles V (Holy Roman Emperor), and Charles V was the nephew of Catherine.
Henry was advised to cut ties with Rome. Parliament cut off all appeals from English church courts to Rome, ending papal authority in England. In January 1533 the king's marriage was announced "null and absolutely void" and his marriage to Anne was validated; three months later she had a child, a girl named Elizabeth.
Act of Supremacy 1534—declared that the king was “taken, accepted, and reputed the only supreme head on earth of the Church of England.” (Anglican Church) This meant that the English Monarch now controlled all matters of doctrine, clerical appointments, and discipline.
The Treason Act made it punishable by death to deny that the king was the supreme head of the church. It would be challenged by Thomas More, who was beheaded on July 6, 1535.
The king kept trying: Anne would be beheaded in 1536 on charges of adultery, and wife #3, Jane Seymour, produced the long-awaited male heir but died 12 days later.
Wife #4, Anne of Cleves, was a German princess, and the marriage was arranged for political purposes. The painting of her turned out to be a bit flattering; disappointed with her, Henry quickly divorced her.
Wife #5, Catherine Howard, was prettier than #4 but preferred other men to Henry; she was beheaded. Wife #6, Catherine Parr, would outlive him.
Catherine of Aragon: m. 1509-1533, divorced
Anne Boleyn: m. 1533-1536, executed
Jane Seymour: m. 1536-1537, died
Anne of Cleves: m. Jan.-July 1540, divorced
Kathryn Howard: m. 1540-1542, executed
Katherine Parr: m. 1543-1547, widowed
Henry would be briefly succeeded by his sickly son Edward VI (1547-1553), the son of Jane Seymour.
Evolution of the Church: Thomas Cromwell worked out the details for the new church. Monasteries were taken over and their possessions confiscated by the King; many were sold to nobles, gentry, and some merchants (which added to his supporters). The church moved towards more Protestant doctrines: the clergy's right to marry, the elimination of imagery, and a new guide book, the "Book of Common Prayer".
ARCHBISHOP THOMAS CRANMER. BORN: 1489. EXECUTED: 21 MARCH 1556. Archbishop of Canterbury from 1533 to 1553. Granted the annulment of Henry VIII and Catherine of Aragon's marriage. Burned at the stake in Mary I's reign.
Cromwell was the chief minister to Henry VIII from 1532 to 1540. He supported English reformers such as Robert Barnes and Hugh Latimer, but he did not institute doctrinal changes in the Church of England. However, he did promote the use of the English Bible. He was executed on July 28, 1540 for treason and heresy
CHARLES V, KING OF SPAIN AND HOLY ROMAN EMPEROR. BORN: 1500. DIED: 1558. Charles V by Titian, Prado, Madrid. Nephew of Catherine of Aragon and cousin of Mary I.
"dump": "CC-MAIN-2017-43",
"url": "http://slideplayer.com/slide/3562026/",
"date": "2017-10-21T01:08:09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824537.24/warc/CC-MAIN-20171021005202-20171021025202-00674.warc.gz",
"language": "en",
"language_score": 0.9595269560813904,
"token_count": 798,
"score": 3.109375,
"int_score": 3
} |
This page is an overview of Spanish numbers. Like in English and other languages, learning numbers requires memorizing the smallest ones (up to 10) and then understanding the patterns used to combine them into bigger numbers.
When learning Spanish numbers, it is best to divide them into small groups where the underlying rules for combining small numbers into the big ones are the same or similar. We will also do it on this page:
0 to 10
Here are the basic Spanish numbers from zero to ten. Even if you are a complete beginner to Spanish, some of them (like uno dos tres) may be familiar from songs or popular culture.
0 = cero
1 = uno
2 = dos
3 = tres
4 = cuatro
5 = cinco
6 = seis
7 = siete
8 = ocho
9 = nueve
10 = diez
11 to 15
The next five numbers all end with -ce, in a way similar to the English -teen. While the English -teens run from 13 to 19, the Spanish -ce numbers are from 11 to 15.
11 = once
12 = doce
13 = trece
14 = catorce
15 = quince
Avoiding common mistakes:
- Unlike uno (1), once (11) starts with “o”, not “u”.
- Unlike cuatro (4), there is no “u” in catorce (14).
- Unlike cinco (5), quince (15) starts with “q”, not “c”.
16 to 19
The numbers from 16 to 19 are all formed in the same way, literally by saying ten and six, ten and seven, ten and eight, ten and nine.
16 = dieciséis
17 = diecisiete
18 = dieciocho
19 = diecinueve
Main things to remember when writing these numbers:
- The numbers are written together as one word.
- While “and” is “y” in Spanish, in this case it is spelled “i”.
- While diez (10) ends with “z”, in this case there is a “c”.
So 17 is not “diez y siete”, but “diecisiete”, although there is no real difference in the spoken language.
Also notice the little accent in dieciséis (16), which is not present in seis (6). It is a hint that when pronouncing dieciséis, you should put stress on the last syllable (séis) and not on the second-to-last syllable (i), as it would be without the accent mark.
20 to 29
Twenty is just one more number you need to memorize:
20 = veinte
The numbers from 21 to 29 follow the same logic as the numbers 16 to 19 explained above. You literally say twentyandone, twentyandtwo etc., write the number as one word, and use “i” instead of “y” to connect the two digits. Also note the “e” from the end of veinte is no longer there:
21 = veintiuno
22 = veintidós
23 = veintitrés
24 = veinticuatro
25 = veinticinco
26 = veintiséis
27 = veintisiete
28 = veintiocho
29 = veintinueve
Main things to remember:
While there is “e” at the end of veinte (20), it is no longer present in the other numbers. For example, 21 is veintiuno, not veinteiuno. You can feel it is also much easier to say without the “e”.
Notice the accent in veintidós (22), veintitrés (23), and veintiséis (26). The reason is the same as in dieciséis (16) discussed earlier – when pronouncing these numbers, put stress on the last syllable.
30 to 99
Once you get to 30, creating higher numbers becomes very easy. You just need to memorize the tens:
30 = treinta
40 = cuarenta
50 = cincuenta
60 = sesenta
70 = setenta
80 = ochenta
90 = noventa
You can see that they are all derived from the corresponding numbers 3 to 9, although there are some very important differences.
- Unlike in seis (6) and siete (7), there is no “i” in sesenta (60) and setenta (70).
- “ue” in nueve (9) becomes “o” in noventa (90).
- If you speak Italian, it may be tempting to use the ending “-anta” for the Spanish tens, which are otherwise very similar to the Italian ones. The correct ending for Spanish tens from 40 to 90 is “-enta”.
Once you know the tens, it is very easy to form all the other numbers from 31 to 99. They are still formed by saying thirty and one, thirty and two etc., but the big difference from the 20’s is that the number is no longer written as one word and the conjunction “and” is spelled “y” as elsewhere in Spanish. There are no extra accents and no surprises:
31 = treinta y uno
32 = treinta y dos
33 = treinta y tres
34 = treinta y cuatro
35 = treinta y cinco
36 = treinta y seis
37 = treinta y siete
38 = treinta y ocho
39 = treinta y nueve
41 = cuarenta y uno
42 = cuarenta y dos
43 = cuarenta y tres
51 = cincuenta y uno
61 = sesenta y uno
71 = setenta y uno
81 = ochenta y uno
91 = noventa y uno
99 = noventa y nueve
Note: In the past (many decades ago) it was common and correct to write the numbers 16-19 and 21-29 in the same way, as separate words (e.g. veinte y uno). In today’s Spanish they are written as one word, while the numbers 31-99 are still written as separate words.
100 to 199
The word for the number 100 in Spanish is:
100 = cien
Notice the similarity to the English words percent or century, or the metric unit prefix centi-, which means 1/100 (e.g. 1 centimeter is 1/100 of 1 meter). These all relate to the Latin word for 100 = centum (the Roman numeral C for hundred is not a coincidence).
In the numbers from 101 to 199 “cien” turns into “ciento”:
101 = ciento uno
102 = ciento dos
103 = ciento tres
110 = ciento diez
111 = ciento once
116 = ciento dieciséis
120 = ciento veinte
121 = ciento veintiuno
122 = ciento veintidós
130 = ciento treinta
131 = ciento treinta y uno
168 = ciento sesenta y ocho
185 = ciento ochenta y cinco
199 = ciento noventa y nueve
It is all very simple: just “ciento” followed by the number from 1 to 99. Note there is no “y” (and) between “ciento” and the rest.
200 to 999
The higher hundreds are made as multiples of “ciento”, using the plural “cientos”, and writing it as one word:
200 = doscientos
300 = trescientos
400 = cuatrocientos
500 = quinientos
600 = seiscientos
700 = setecientos
800 = ochocientos
900 = novecientos
Notice the small irregularities in the numbers 500, 700, and 900:
- Quinientos (500) is quite like quince (15) and unlike cinco (5). Note that there is no “c” in quinientos.
- There is no “i” in setecientos (700), like setenta (70) and unlike siete (7).
- Novecientos (900) is quite like noventa (90) and unlike nueve (9).
Also note that seiscientos (600) and ochocientos (800) are completely regular, made simply by connecting the words seis+cientos and ocho+cientos. Ochocientos has an “o” like ocho (8), not an “e” like ochenta (80).
Once you know the hundreds, the numbers in between are again very easy, following the same pattern as the numbers from 101 to 199. Main rule: no “y” (and) between the hundreds and the rest.
201 = doscientos uno
215 = doscientos quince
222 = doscientos veintidós
333 = trescientos treinta y tres
444 = cuatrocientos cuarenta y cuatro
555 = quinientos cincuenta y cinco
666 = seiscientos sesenta y seis
777 = setecientos setenta y siete
888 = ochocientos ochenta y ocho
999 = novecientos noventa y nueve
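The rules up to this point are regular enough to capture in a few lines of code. Here is a minimal Python sketch of a number-to-words converter for 0-999, hard-coding the irregular forms exactly as listed above:

```python
UNITS = ["cero", "uno", "dos", "tres", "cuatro", "cinco", "seis",
         "siete", "ocho", "nueve", "diez", "once", "doce", "trece",
         "catorce", "quince", "dieciséis", "diecisiete", "dieciocho",
         "diecinueve"]
TWENTIES = ["veinte", "veintiuno", "veintidós", "veintitrés",
            "veinticuatro", "veinticinco", "veintiséis", "veintisiete",
            "veintiocho", "veintinueve"]
TENS = {3: "treinta", 4: "cuarenta", 5: "cincuenta", 6: "sesenta",
        7: "setenta", 8: "ochenta", 9: "noventa"}
HUNDREDS = {2: "doscientos", 3: "trescientos", 4: "cuatrocientos",
            5: "quinientos", 6: "seiscientos", 7: "setecientos",
            8: "ochocientos", 9: "novecientos"}

def spanish(n: int) -> str:
    """Spell 0-999 in Spanish, following the rules in this guide."""
    if n < 20:
        return UNITS[n]              # 0-15 memorized, 16-19 one word
    if n < 30:
        return TWENTIES[n - 20]      # 20-29: one word, "i" not "y"
    if n < 100:
        tens, ones = divmod(n, 10)   # 31-99: separate words joined by "y"
        return TENS[tens] if ones == 0 else f"{TENS[tens]} y {UNITS[ones]}"
    if n == 100:
        return "cien"                # bare 100 is "cien", not "ciento"
    hundreds, rest = divmod(n, 100)
    word = "ciento" if hundreds == 1 else HUNDREDS[hundreds]
    return word if rest == 0 else f"{word} {spanish(rest)}"  # no "y" here

print(spanish(185))  # ciento ochenta y cinco
print(spanish(999))  # novecientos noventa y nueve
```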
1000 to 1999
The word for thousand is:
1000 = mil
Numbers from 1001 to 1999 are simply “mil” followed by the already familiar numbers from 1 to 999:
1001 = mil uno
1002 = mil dos
1025 = mil veinticinco
1100 = mil cien
1101 = mil ciento uno
1492 = mil cuatrocientos noventa y dos
1516 = mil quinientos dieciséis
1813 = mil ochocientos trece
1978 = mil novecientos setenta y ocho
There is no “y” between thousands and hundreds, and no “y” between hundreds and tens. There is an “y” between tens and ones (in 31 to 99).
2000 to 999999
Higher thousands are made simply as the number of thousands, followed by the word “mil” (not “mils” – that’s wrong).
2000 = dos mil
3000 = tres mil
10000 = diez mil
55000 = cincuenta y cinco mil
182000 = ciento ochenta y dos mil
325000 = trescientos veinticinco mil
The rest follows the logic that is already familiar. The hardest part with these higher numbers (though not that hard to remember) is where to put an “y” (and) and where not.
262144 = doscientos sesenta y dos mil ciento cuarenta y cuatro
531441 = quinientos treinta y un mil cuatrocientos cuarenta y uno
390625 = trescientos noventa mil seiscientos veinticinco
823543 = ochocientos veintitrés mil quinientos cuarenta y tres
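Extending the sketch above to the thousands is mostly a matter of gluing on "mil"; the one subtlety (visible in 531441 above) is that "uno" shortens to "un" (and "veintiuno" to "veintiún") directly before "mil":

```python
def spanish_thousands(n: int) -> str:
    """Spell 0-999999 in Spanish, reusing spanish() from the sketch above."""
    if n < 1000:
        return spanish(n)
    thousands, rest = divmod(n, 1000)
    if thousands == 1:
        head = "mil"                        # bare "mil", never "un mil"
    else:
        t = spanish(thousands)
        if t.endswith("veintiuno"):         # apocope before "mil"
            t = t[: -len("veintiuno")] + "veintiún"
        elif t.endswith("uno"):
            t = t[:-1]                      # "uno" -> "un"
        head = f"{t} mil"
    return head if rest == 0 else f"{head} {spanish(rest)}"

# quinientos treinta y un mil cuatrocientos cuarenta y uno
print(spanish_thousands(531441))
```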
Note: When writing numbers as digits, most Spanish speaking countries (including Spain, Argentina, Chile, and most of South America) use a dot as thousands separator and a comma as decimal separator – we will also do that below. On the contrary, Mexico and most of Central America follow the decimal comma system like the US or the UK.
The word for million in Spanish is:
1.000.000 = un millón
While we don’t put an “un” before a single thousand (mil), we must always include it before a single million (un millón).
1.001.001 = un millón mil uno
1.234.567 = un millón doscientos treinta y cuatro mil quinientos sesenta y siete
Notice the accent in “millón”. When pronouncing it, put stress on the second syllable (llón).
The plural is “millones” – without an accent mark, but stress should still be put on the llon syllable, which is now second-to-last.
Multiple millions are formed very simply:
2.000.000 = dos millones
10.000.000 = diez millones
50.000.000 = cincuenta millones
100.000.000 = cien millones
200.000.000 = doscientos millones
123.456.789 = ciento veintitrés millones cuatrocientos cincuenta y seis mil setecientos ochenta y nueve
202.020.202 = doscientos dos millones veinte mil doscientos dos
999.999.999 = novecientos noventa y nueve millones novecientos noventa y nueve mil novecientos noventa y nueve
Notice in the last example how the number 999 is always the same (“novecientos noventa y nueve”), only first you add “millones” for 999 millions, then you add “mil” for 999 thousand, and then you add nothing for 999. Also keep in mind there is no “y” after “millones”, no “y” after “mil”, no “y” after novecientos, but there is an “y” in every “noventa y nueve”.
Billions, trillions, and more
These can be very confusing if your native language is English, because Spanish uses the same words for totally different numbers!
English numbers use the so called short scale system, where new words come at multiples of one thousand:
billion = 1000x million = 1,000,000,000
trillion = 1000x billion = 1,000,000,000,000
quadrillion = 1000x trillion = 1,000,000,000,000,000
In other words, you always add three zeros to get to the next higher word.
On the contrary, Spanish uses the so called long scale system, where new words come at multiples of one million (you add six zeros to get from million to billion, from billion to trillion, and so on). While the words themselves are the same (un billón, un trillón), the numbers they represent are completely different:
1.000.000 = un millón (so far so good)
1.000.000.000 = the English billion = mil millones (literally a thousand millions) or un millardo (a special word for this number; its English cousin "milliard" is uncommon, but equivalents are used in many other European languages)
1.000.000.000.000 = the English trillion = un billón (yes, the Spanish billion is actually the English trillion)
1.000.000.000.000.000 = the English quadrillion = mil billones (one thousand Spanish billions)
1.000.000.000.000.000.000 = the English quintillion = un trillón
Luckily, you won't need these numbers very often in everyday use. For most purposes (e.g. understanding Spanish economic news), just remember that billion in Spanish is "mil millones" or "un millardo".
Saying negative numbers in Spanish is as easy as in English. Just add the word “menos” (minus) before the number.
-10 = menos diez
-5500 = menos cinco mil quinientos
As we have already mentioned, most of the Spanish speaking world uses a comma as decimal separator. When pronouncing decimal numbers, you just say the integer part, followed by the word “coma” and the digits after the comma. It is in fact exactly like in English, only with “coma” instead of the English “point”.
For example, the number pi, written as 3.14159 in English, would be written 3,14159 in Spain or Argentina, and pronounced:
tres coma uno cuatro uno cinco nueve
In Spanish speaking countries which use decimal point, use the word “punto” instead of “coma”. For example, in Mexico the number pi would be pronounced:
tres punto uno cuatro uno cinco nueve
While it is never wrong to say the digits after the decimal separator one by one as in the examples above, they are often grouped together and said as a sequence of two-digit numbers. For example:
5,25 = cinco coma veinticinco
7,1215 = siete coma doce quince
This form will most likely be used when shorter and more convenient than the digit-by-digit form. Notice how “siete coma doce quince” is much easier to say than “siete coma uno dos uno cinco”. Nonetheless, both are correct.
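The two-digit grouping is easy to automate with the spanish() helper from the earlier sketches (a rough sketch assuming an even number of decimal digits, with no pair starting in zero):

```python
def decimal_es(integer: int, decimals: str, sep: str = "coma") -> str:
    # Split the decimal digits into two-digit groups, e.g. "1215" -> 12, 15.
    pairs = [decimals[i:i + 2] for i in range(0, len(decimals), 2)]
    words = [spanish(integer), sep] + [spanish(int(p)) for p in pairs]
    return " ".join(words)

print(decimal_es(7, "1215"))             # siete coma doce quince
print(decimal_es(3, "14", sep="punto"))  # tres punto catorce
```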
It is also common when talking about prices or amounts of money, where the two digits after the comma represent the number of cents to pay. For example:
€12,63 = doce coma sesenta y tres
With prices and amounts of money, the word “con” (with) is sometimes used for the decimal separator instead of “coma” or “punto”. For example:
€12,63 = doce con sesenta y tres = doce euros con sesenta y tres céntimos
The above said, the Spanish speaking world is very diverse and different forms are more common in some countries or regions than others.
"dump": "CC-MAIN-2022-21",
"url": "https://www.greenspanish.com/numbers/",
"date": "2022-05-25T06:33:28",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662580803.75/warc/CC-MAIN-20220525054507-20220525084507-00009.warc.gz",
"language": "en",
"language_score": 0.7962513566017151,
"token_count": 4050,
"score": 4.28125,
"int_score": 4
} |
A single piece of crisp paper can be folded into any number of shapes. Twist its edges one way and you’ll wind up hoisting a paper crane. A simple re-arranging of those creases, however, could yield a dove, a boat or seemingly boundless other possibilities. “You can think of the genome three-dimensional structure the same way,” MacKay said. “You have your strand of DNA. Fold it one way and it will do one thing, but fold it a different way and it will do another thing, and those will have different structures and functions associated with them.”
MacKay, who studies bioinformatics, is one of two U of S students—along with Jacques Desmarais—to win a Vanier Canada Graduate Scholarship in 2016. The prize is one of Canada’s top honours for graduate students, and awards recipients with up to $150,000 over three years to support their doctoral studies.
For MacKay, that funding is supporting her work on predicting three-dimensional genome structures, as she builds a program that takes in biological data and uses it to create a model of what these structures could potentially look like.
MacKay explained that the project could reveal a new understanding of science at a fundamental level, with vast potential in fields as wide-ranging as agriculture, medicine and essentially anything involved with the study of biology. It could even be used as a starting point to develop new treatments for diseases such as cancer.
“One of the main areas that three-dimensional genome structure has been shown to have a big role in is cancer research,” she said. “With cancer, different folds can happen that cause different mutations or rearrangements to occur in the genome. If we can develop therapies to reverse these disease-related folds, we can potentially return cells back to a non-cancerous state.”
For his part, Desmarais, a PhD student in geology, earned his Vanier Scholarship in part due to his research into defining objects unseen to the human eye.
Desmarais is working on computer algorithms that calculate properties of materials. The project works entirely from a theoretical standpoint, based on the concepts of quantum mechanics and using little to no prior knowledge from experiments.
“Say we wanted to know how the inside of the Earth works, because the inside of the Earth is what drives volcanoes and plate tectonics and things like that,” he said. “Since we can’t get there, and since often it’s too difficult to simulate the high pressures and temperatures, sometimes the only thing we can do is theory.”
Among other applications, the algorithms are used to study crystals that make up geological formations on Earth and other planets.
“Using these types of theoretical approaches, one can calculate the properties of materials inside of planets and, from there, start to predict how exactly the inside of that planet works,” he said.
The Vanier Scholarship has allowed Desmarais to team up with researchers in Italy working on a project called Crystal, featuring collaborators from around the globe who have been developing these types of algorithms for more than 60 years. In 2016, he joined the group for three months, got a grasp of speaking Italian and relished the opportunity to learn from bright minds from around the world.
“They have a lot of collaborators from different countries and they all come to meet at this particular lab,” he said. “Maybe every week there would be a new professor or researcher from a different country and I’d get to meet them and learn something from them.”
After six decades of work already, there is no end in sight for the Crystal project. While some researchers may be uncomfortable working without a strict deadline, Desmarais is excited about the possibility of remaining involved in the project long-term.
"It’s an exciting project. We’ll see how things turn out, but I could see myself working on it or similar projects for some time.” | <urn:uuid:1f28fb06-7634-4054-83fd-314b73a50b0f> | {
"dump": "CC-MAIN-2019-22",
"url": "https://news.usask.ca/articles/people/2017/vanier-scholarships-supporting-students.php",
"date": "2019-05-22T04:33:32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256763.42/warc/CC-MAIN-20190522043027-20190522065027-00480.warc.gz",
"language": "en",
"language_score": 0.9605945348739624,
"token_count": 845,
"score": 3.546875,
"int_score": 4
} |
While accounting and bookkeeping have plenty in common, taking an accounting course will show you that accounting is a whole other field of work. While accounting is a more complex and higher-paying profession, that does not mean that studying accounting needs to be any more difficult!
Take a look at the certified accounting courses you can study through The Learning Group:
Technical Financial Accountant
- ICB National Diploma: Technical Financial Accounting
- ICB Short Course: Technical Financial Accountant – Business Law and Accounting Control
- ICB Short Course: Technical Financial Accounting – Income Tax Returns
Certified Financial Accountant
- ICB National Diploma: Certified Financial Accounting
- ICB Short Course: Corporate Strategy
- ICB Short Course: Accounting Theory and Practice
- ICB Short Course: Financial Reporting and Regulatory Frameworks
- ICB Short Course: Management and Accounting Control Systems
What is the difference between accounting and bookkeeping?
Though accountants and bookkeepers both work in financial record keeping, there is a discernable difference between the two disciplines.
A bookkeeper is more concerned with the single function of keeping track of finances: recording income, expenses and other transactions in account books.
An accountant might have some bookkeeping duties, but will be more concerned with the interpretation of financial records. An accountant interprets and analyses bookkeeping data to see where there are deficiencies in a business's finances. He or she then comes up with financial strategies and financial problem-solving tactics.
Accountants are also usually more qualified than bookkeepers, as their skills are more specialised and their duties more complex.
What is financial accounting?
There is more than one type of accounting out there. The types include:
- General accounting
- Management accounting
- Forensic accounting
- Governmental accounting
Financial accounting is a specific field of accounting, distinct from others. This field is concerned with the preparation of financial documents and statements for decision making. Stockholders, banks, company owners, government agents and auditors will work with the documents prepared by a financial accountant.
Financial accountants work with certain guidelines, accounting principles and regulations when preparing financial documents and statements. A financial accountant will often use the data collected by a bookkeeper and then analyse and prepare it for external use.
A financial accountant’s duties go beyond any single function, however. A good course in financial accounting will teach you all the different things you’ll need to know to do your job:
- Stock control
- Management of accounting systems
- Liquidation account keeping
- Tax systems
- Corporate strategising
- SARS returns
- Business literacy
- Costing systems
Taking an accounting course
Studying accounting can be very demanding. It is an extremely technical field and requires intense skills training. It is, however, a great direction to go into: accountants will always be in demand and they usually receive good salaries for their services.
If you are already working in accounting, you can also take a specialised short course to develop new accounting skills. When working in this industry, you will constantly need to improve your skills and build your CV if you want to move upwards.
Whether you take a general course or a short course in accounting, you will need to get trained and qualified before you’ll be able to get a real accounting job.
Where can I study financial accounting?
The Learning Group offers ICB-certified accounting courses through distance learning. This means that you can study accounting from home, in your own time, while keeping a full-time job!
The great thing about The Learning Group’s distance learning programme is that it makes it so much easier to study. It is less demanding, more accessible, and more flexible than traditional classroom learning, all without compromising educational quality.
Click on the button below to learn more about the benefits of studying with us:
The Learning Group offers its courses to anyone, anywhere in South Africa! For over 30 years now, we’ve been bringing quality education to the doorsteps of thousands of South African citizens.
At no time are we more aware of athlete doping than during an Olympiad. Training at altitude for several weeks leads to an increased capacity for one's blood to carry oxygen, and it's a legitimate practice, but people sometimes look for shortcuts—like doping.
Traditional doping means blood doping, wherein an athlete is infused with oxygenated blood prior to competition. A more sophisticated tactic is EPO doping: the athlete is given erythropoietin (EPO), a hormone that the kidney normally produces to stimulate production of red blood cells (RBCs) in the bone marrow. In fact, the altitude-training tactic works because EPO production is stimulated. Short of ascending to the mountains, however, injection provides the athlete with higher-than-normal levels of the hormone. This, in turn, leads to an increase in the hematocrit, the fraction of blood volume occupied by red blood cells. No matter the method, the end result is the same: the athlete's performance is increased beyond her or his normal abilities.
Modern methods allow for the detection of supplemental EPO in the blood and urine of athletes, so what does a cheat do? One possibility is gene therapy: the gene for EPO is introduced into the body, so many of the athlete's cells carry extra copies of the gene. Consequently, the athlete makes extra RBCs and the blood can carry extra oxygen. But the International Olympic Committee (IOC) recently announced that it will soon be able to detect any extra copies of the EPO gene:
“We will store [athletes’] samples,” said IOC medical chief Richard Budgett to reporters last week, and went on to explain the following:
We can be very confident that an athlete who is cheating should be very scared. If someone thinks they have designer drugs eventually they will be found. The message for all those cheats out there is “beware you will be caught.” I am confident we have the deterrents that should lead to the protection of clean athletes.
It's not clear how sensitive the emerging EPO gene test will actually be, and given that gene therapy can be directed to specific body tissues, it's plausible that guaranteeing reliability would require tissue biopsies. This may be a problem in elite athletes. Moreover, EPO gene doping is only one of several genetic strategies that an athlete might employ. Potentially, one might also use gene therapy to increase muscle mass, to grow new blood vessels, or to modify muscle phenotype (the proportion of red, slow-twitch, versus white, fast-twitch, muscle fibers). Much less researched, but in the realm of possibilities, gene therapy might also be used to increase the pain threshold. If this sounds dangerous, that's because it is.
But the point of developing these techniques isn't to help athletes fake their way to a world record. The development of these gene therapies is a potential boon to people with medical conditions ranging from muscular dystrophy to cancer cachexia, or for that matter elderly individuals with senile sarcopenia (muscle atrophy associated with old age).
Strategies for going faster, further, higher
Getting extra copies of the gene for EPO is one way to get more oxygen to tissues. Another way is to introduce extra copies of the gene for a protein called vascular endothelial growth factor (VEGF), which stimulates the growth of blood vessels in tissues where they're needed, such as muscle. One can also get specific to particular sports, and in this case muscle fiber type is particularly important. In his 2001 book Taboo: Why Black Athletes Dominate Sports And Why We're Afraid To Talk About It, Genetic Literacy Project executive director Jon Entine made some points about muscle fiber types, to which he also alluded in a recent GLP article related to the Rio 2016 Olympics.
It's non-controversial and taught in any exercise physiology class that people with relatively high ratios of white fibers (cells), also called fast-twitch fibers, to red, or slow-twitch, fibers in their muscles are well suited for power sports. The quintessential power sport is sprinting, short-distance running. That's where Jamaican runner Usain Bolt, often called 'the fastest man', excels. There are optimal body dimensions that go along with it too, but having a very large number of fast-twitch fibers enables the rapid use of energy to produce the needed bursts of physical power. At the opposite end of the spectrum is the endurance athlete, represented by the marathon runner, a discipline dominated by East African runners, notably Kenya's Kalenjin people and athletes from Ethiopia.
Body dimensions, including lung dimensions, also come into play for distance, but distance runners have unusually high ratios of red to white fibers, and the best way to have such a ratio is to be of Kalenjin descent. Similarly, the best way to have a high proportion of white fibers is to be of West African descent, like Bolt. Training comes into play too, not only for the cardiovascular system: it can influence muscle fiber phenotype, because some fibers can be converted between faster and slower twitch. But short of having the genetic background for fast- or slow-twitch-dominated muscles, one might be tempted to use gene therapy to do one of two things. Fast-twitch (white) fibers are not only powerful (at the cost of fatiguing quickly), but they also bulk up fairly easily compared with slow-twitch fibers. One can grow his or her muscle fibers by way of gene therapy to increase growth hormone, or to reduce the activity of the protein myostatin. The latter is one strategy that the biotech company Bioviva has employed in its CEO/test subject Liz Parrish, who hopes this will lead to anti-aging therapies, along with therapies for people afflicted with muscle diseases, such as muscular dystrophy. In a sprinter, when the goal is to increase the ratio of white to red fibers, another potential approach would be to stimulate production or activation of a particular contractile protein within muscle cells called myosin 2b.
The possible benefits of all of these approaches are supported by studies in laboratory animals, but not by clinical trials in humans (the single-subject Bioviva test that's in progress notwithstanding), so we're talking high risk. It might work, or it might not. Moreover, given the mechanisms underlying such potential treatments, there are also dangers that could be life-threatening.
Dangers of gene doping
It does not take much imagination to see how the various genetic strategies could harm an athlete, if enough genetic material is delivered to enough body cells to cause a physiological change. Whereas VEGF grows new blood vessels, anti-VEGF agents are used in multiple clinical settings because new blood vessel growth, or neovascularization, is often part of a disease process. Since neovascularization occurs in various cancers, anti-VEGF agents are used as part of cancer therapies. The same agents are used against VEGF in certain eye diseases, in which blood vessels are trying to grow to compensate for low oxygen levels, but having such vessels grow would block out vision.
When it comes to making extra EPO from an added gene, the result could be extremely dangerous, because an excessively high hematocrit (too many RBCs) makes the blood too thick. This could lead to strokes or various other life-threatening conditions. With myostatin inhibition, the situation is rather complicated. As with most genes in the body, you have two copies. Studies with laboratory animals show that when both myostatin copies, or alleles, are knocked out, the muscles do bulk up, but they become rigid and don't function well. When only one of the two alleles is shut off, or knocked out, and the other remains functional, inhibiting myostatin makes muscles hypertrophy and get stronger. Thus with myostatin gene therapy there's an optimal effect: you want to suppress myostatin, but not too much, and this could be really tricky to achieve in a clinical setting.
Benefits to the general population: Uses against disease
Considering the effects of the various gene doping strategies on physiology, it becomes clear that athletes trying them could end up as a population of human guinea pigs, a kind of phase 1 clinical trial that could highlight desired effects, but also safety issues, for potential treatments of human disease. People with chronic pulmonary disease would welcome novel treatments, developed in athletes, that deliver oxygen more efficiently to body tissues. People with muscular atrophy, from cancer cachexia to senile sarcopenia, could use a myostatin gene therapy to rebuild their muscle. If it works in athletes, it should work in disease-afflicted people too; in fact, the risks might make more sense in the latter group. The list goes on and on with clinical applications that could change the world of medicine.
Sports fans around the globe will hardly welcome the idea that more athletes will be able to get away with cheating, and for most athletes the cost may be prohibitive in any case. At present, gene therapy to treat blood diseases costs anywhere in the neighborhood of $100,000 to $1,000,000 per case. From that perspective, a trip to Salt Lake City or Denver for a few weeks of altitude training doesn't look so bad.
But if cheating is inevitable anyway, and if development of these treatments is driven by the lucrative markets of elite sport, maybe the clinical spinoffs will constitute an acceptable silver lining, helping those who suffer from a myriad of diseases.
David Warmflash is an astrobiologist, physician and science writer. Follow @CosmicEvolution to read what he is saying on Twitter.
The Census Bureau is scheduled to release next week an alternative supplemental poverty measure meant to capture a more accurate picture of the contemporary social and economic realities driving poverty. The agency provided a bleak glimpse into the state of American poverty as a preview of what can be expected.
The report, based on an analysis of the most recent Census data, found that the number of poor individuals rose by 12.3 percent in the last decade, increasing the overall number of Americans in poverty to an all-time high of 46.2 million.
The poverty threshold for a family of four is currently $22,314, according to the Census Bureau. A single individual making less than $11,139 per year is considered to be living in poverty.
Poverty by Region
Even though a record-number of people are now poor, it has not affected all Americans equally.
Households in the Midwest, South and West experienced declines in real median income between 2009 and 2010, while the Northeast saw no statistically significant change. A separate analysis by the Brookings Institution reports that, owing to a struggling manufacturing economy, the Great Lakes metro regions of Toledo, Dayton and Youngstown, Ohio, as well as Detroit, Mich., experienced some of the largest growth in concentrated poverty, nearly doubling in some areas between 2000 and 2009.
Meanwhile, cities in the Southern U.S. experienced some of the most significant increases (El Paso, Texas; Baton Rouge, La.; Jackson, Miss.) and decreases (Charleston, S.C.; Virginia Beach, Va.). The Census Bureau reports the South was the only region to show statistically significant increases in both the poverty rate and the number of individuals living in poverty.
According to Brookings, poverty rose more than twice as fast in the suburbs, increasing by 41 percent compared with 17 percent in cities. Many residents of extreme-poverty neighborhoods between 2005 and 2009 were more likely to be white, to be high school or college graduates, and not to receive public benefits, suggesting that many individuals were hampered by the economic recession.
However, the Census Bureau reports that minorities, especially blacks and Hispanics, are still considerably more likely to be poor than white Americans. Twenty-six percent of Hispanics are living in poverty, compared to 27 percent of blacks and 13 percent of whites.
About half of those living below the poverty line were classified as the poorest of the poor, meaning they live at less than 50 percent of the poverty line. In 2010, that would mean an individual income below $5,570; for a family of four, below $11,157.
Poverty Metric Revision Ahead
The faces of the poor are expected to change once the new census data is released next week. The official way that the government has been measuring poverty is based on a system created in the 1960s by the statistician Mollie Orshansky, which was reportedly meant to be a placeholder until something more sophisticated came along -- except it never did.
The system only counts cash as income and ignores expenses such as taxes and medical costs. Orshansky used the cost of a nutritionally adequate diet as the basis of her poverty threshold, since food was often the biggest expense for families during that era. Now, as food costs are far from the largest share of a typical budget, the model is extremely outdated.
Experts anticipate Monday's report will reflect an even higher percentage of those in poverty, especially among the elderly and working class families. The current system overlooks the income poor people receive from government assistance such as food stamps and tax credits, The New York Times reports. This often boosts the perceived income of those who are technically poor but, statistically, are above the poverty threshold.
For instance, The Times reports that an individual earning about $7,500 a year to support a family of three, who also receives $3,600 a year in food stamps, $1,800 in nutritional supplements, a tax credit of $4,000 and almost $1,900 in housing aid, is seen as having an annual income of about $18,800. While that certainly isn't much, it is above the current poverty threshold of $17,374 for a family of three.
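The arithmetic behind that example is easy to reproduce. The sketch below simply totals the figures The Times cites and compares both measures against the threshold; the benefit labels are conveniences for illustration, not official categories.

```python
# Totals the figures from The Times example above and compares the
# cash-only (official) measure with a benefits-inclusive measure.

POVERTY_LINE_FAMILY_OF_3 = 17_374  # 2010 threshold cited above

cash_income = 7_500
benefits = {
    "food stamps": 3_600,
    "nutritional supplements": 1_800,
    "tax credit": 4_000,
    "housing aid": 1_900,
}

official = cash_income                                # old measure: cash only
with_benefits = cash_income + sum(benefits.values())  # about $18,800

for label, income in [("cash only", official), ("with benefits", with_benefits)]:
    status = "below" if income < POVERTY_LINE_FAMILY_OF_3 else "above"
    print(f"{label}: ${income:,} ({status} the ${POVERTY_LINE_FAMILY_OF_3:,} line)")
```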
The new data will include details on individuals who receive non-cash help, and will take into account spending on necessities such as healthcare and commuting, rather than taxable income alone.
We have accumulated many documents in our long history of using paper as a writing medium. Documents like books, newspapers, letters, birth certificates, agreements, photographs, and much more contain valuable information about our history and civilization but are also built on one of the most fragile materials we have. That is why proper paper-preserving techniques are crucial if we want to pass these documents to future generations.
What Causes Paper to Deteriorate
Before we describe the best way to preserve paper, we should look into how paper deteriorates and loses its quality over time. The first cause is inherent vice, meaning problems with how the material itself was made. The mechanization of papermaking in the 1840s and the introduction of ground wood pulp in place of rag pulp made paper cheaper and more accessible, but it also produced paper that ages poorly and becomes brittle easily. Most paper made from the mid-19th century until today is in danger of becoming brittle.
Environmental conditions also play a strong role in paper preservation. Poor conditions can accelerate deterioration, causing the paper to yellow and the ink to fade; they can also encourage mold growth and an influx of airborne contaminants, including soot, grime, and chemicals. Finally, how paper is used, stored and handled causes deterioration of its own: folding, tears, creases, staples, and paperclips all take a toll. Acid migration, the transfer of acidic substances between two surfaces in contact with each other, can also occur from adjacent materials.
Poor storage can also result in damage from pests, some of which feed on paper, while others like to use paper as nesting material. And while we understand well what inherent vice does to paper, we have few ways to combat it. That is not to say there are no chemical treatments trained conservators can try, but these are all very complicated and often very expensive. Note also that no treatment can restore flexibility to brittle paper; it can only slow further deterioration.
How To Preserve Paper and Archive Important Documents
Out of all the ways we have to preserve paper, copying or scanning to another hard copy is one of the easiest, and we have practiced it since the invention of various scanning technologies. However, while this may create a higher-quality copy of the same document, the new document will eventually face the same problems: we end up with two papers needing preservation instead of one. With the invention of digital technologies and the digitization of our documents, we have largely solved the problems of physical scans, while making these documents more accessible, easier to search, and preservable indefinitely. But while digitization ensures these records are not lost to history, it still does not solve the problem of protecting the physical copies of valuable historical documents.
Where you store your documents will have the most significant impact on their long-term preservation. Paper collections should be stored at a constant 21° Celsius (70° Fahrenheit) and below 50% humidity, but maintaining these conditions requires special storage facilities and is hard to do in a home environment. For preserving your family documents at home, keep them in a space with good airflow, and it is always good practice to keep them out of the attic or basement.
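If you log temperature and humidity in your storage area, a simple check against the targets above might look like the following sketch; the two-degree tolerance and the sample readings are assumptions for illustration, not archival standards.

```python
# Flag readings that drift from the targets given above: a constant
# 21 C (70 F) and below 50% relative humidity. The +/- 2 C tolerance
# is an illustrative assumption.

def check_storage(temp_c, humidity_pct, target_c=21.0, tol_c=2.0, max_rh=50.0):
    problems = []
    if abs(temp_c - target_c) > tol_c:
        problems.append(f"temperature {temp_c:.1f} C is outside {target_c} +/- {tol_c} C")
    if humidity_pct >= max_rh:
        problems.append(f"humidity {humidity_pct:.0f}% is at or above {max_rh:.0f}%")
    return problems or ["conditions OK"]

# Example readings: (temperature in C, relative humidity in %)
for temp, rh in [(21.0, 45.0), (27.5, 62.0)]:
    print((temp, rh), "->", "; ".join(check_storage(temp, rh)))
```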
Another thing we should keep in mind when preserving documents is to keep them away from light. Light accelerates the deterioration and can cause darkening or yellowing of the paper. Most importantly, light damage accumulates over time and is irreversible. If you have to display your paper documents, it is always a good idea to display a high-quality reproduction. If you must display the original document, consider glass that blocks UV rays, conservation quality matting, and choosing a display spot out of direct sunlight. Dusting and regular maintenance of your storage area will help avoid a buildup of pollutants.
On top of this, several other solutions slow down paper deterioration, including storage and handling. You will need to use archival-quality material to house your personal family documents. Keep in mind that archival quality, while a widely used term, has no regulations behind it, so it is imperative to research and find the most effective solution for your specific needs.
Make sure any material you use to house your documents is acid-free, has a neutral pH, or is buffered with a slight alkaline reserve. Ideally, it should also be free of lignin, because as lignin deteriorates over time it produces acids that turn the paper yellow and brown.
If you would like to see your paper documents while they are archived, a polyester sleeve will be your best choice. In this case, ensure the polyester is inert, contains no plasticizers, and has no coating. Keep in mind that polyester carries a static charge which, while it helps the sleeves hold together, is bad for flaking media, so polyester should not be used for graphite, pastels, chalk, or any other friable media, meaning any media that is crumbly, fragile, or easily brushes away when touched.
Make sure you never use tape because it can cause irreversible damage in as little as six months. You should also avoid using staples and paperclips that will rust on documents while at the same time checking and monitoring your documents regularly for insect activities and mold growth.
Another way to extend the life of your valuable documents is to handle them properly. You should always wash your hands before touching old documents, and you should make sure to treat paper gently. Avoid stacking objects on top of your documents.
In conclusion, as we have seen, there is no single straightforward way to preserve paper, and you have to judge your needs on a case-by-case basis. But with care and effort, you should be able to extend the life of your important documents and keep their often unique content alive!
"dump": "CC-MAIN-2023-50",
"url": "https://www.paperpapers.com/news/preserving-paper-documents/",
"date": "2023-12-09T08:32:26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00256.warc.gz",
"language": "en",
"language_score": 0.9492180943489075,
"token_count": 1216,
"score": 3.484375,
"int_score": 3
} |
White space works
One of the ‘tricks‘ when presenting written material is to create the right amount of white space.
White space, sometimes called negative space, is the portion of a page (or screen) left unmarked.
White space is controlled in two ways
Spacing happens in one of two directions: horizontally or vertically.
- Horizontal spacing: margins (between the text and the vertical edges of the page), indents (between the vertical margins and the text), gutters (the gaps between columns of a table), tabs (for alignment of tabulated data)
- Vertical spacing: the margin at the top of the page above any header, the margin at the bottom of the page below any footer, between any header and the start of text, between the end of text and any footer, between lines of text, between paragraphs, between images/tables and text
Much of the horizontal spacing control relies on the ruler. If your ruler is not visible, select View / Text Editing / Show Ruler.
The symbols reveal the current settings for indentation and tab stops. Here is an example:
- The down-triangles at 0 and 6 indicate the left and right indents. This is the amount by which the text is offset from the left and right margin respectively.
- The rectangle at 0.5 represents the indentation for the first line of each paragraph.
- The right-triangle at 0.5 and at 2 are left-align tab stops.
- The diamond at 3 is a centre tab stop.
- The left-triangle at 4 is a right-align tab stop.
- The circle with a dot in it at 5 is a decimal tab stop.
By sliding these settings along the ruler, you can change how the text lands on the page and, consequently, the white space effect.
Notice that the ruler can be calibrated in centimetres, inches, picas or points. The choice is yours and is determined in Scrivener / Preferences / Editing / Options.
The default is inches.
There are 12 points in 1 pica and 6 picas in 1 inch. There are 72 points in an inch.
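Those relationships are easy to encode if you ever need to convert between the ruler's units; the little helper below is just an illustration, not part of Scrivener.

```python
# Typographic unit conversions stated above:
# 12 points = 1 pica, 6 picas = 1 inch, hence 72 points = 1 inch.

POINTS_PER_PICA = 12
PICAS_PER_INCH = 6
POINTS_PER_INCH = POINTS_PER_PICA * PICAS_PER_INCH  # 72

def inches_to_points(inches):
    return inches * POINTS_PER_INCH

def points_to_picas(points):
    return points / POINTS_PER_PICA

print(inches_to_points(0.5))  # a half-inch indent is 36 points
print(points_to_picas(72))    # 72 points is 6 picas, i.e. 1 inch
```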
Horizontal spacing: Font formatting versus paragraph formatting
The horizontal spacing of text is mostly controlled at paragraph level. Whatever format you apply, it tends to be applied to one or more paragraphs.
However, there is an option to tweak the spacing between letters within a word. This is accessed through Format / Font / Character spacing.
The only time I’d be inclined to use this feature would be if I wanted a heading to fill the width of a page and I thought it might look cool to do so!
Paragraph formatting for horizontal spacing
This is the route you are most likely to need to take. Select Format / Paragraph to open this window. Here, you can control the horizontal spacing. (You can also control this by manipulating the ruler.)
Having set this up (here or using the ruler), Scrivener lets you control the tabs and indents, increase/decrease the indents, and remove all tab stops, via the menu. For example: Paragraph / Increase/Decrease Indents offers lots of options.
Vertical spacing is controlled through formatting. Do not use extra returns to create space between paragraphs or before/after headings – instead specify this as follows.
Select the text you want to format, and Format / Paragraph / Line and Paragraph Spacing. In the window that opens, choose your vertical spacing and click OK.
Notice that if you use a ‘blocked‘ style (ie no first line indentation), you need maybe a 6pt space after each paragraph (matching the line height). If you prefer first line indentation for each paragraph, there need be no extra gap between paragraphs.
Changing the colour of your white space
Scrivener offers the option to change almost anything you want to change – to customise your workspace so it’s just how you like it best.
And, you can change the colour of the ‘paper‘ so your ‘white‘ space can be whatever colour you like.
Select Scrivener / Preferences / Appearance. In the Editor tab, click on the Colors tab, and then on the rectangle (white will have been showing white). A colour pane opens and you can choose whatever colour you like for your ‘paper‘. I’ve selected a light cream.
Questions? Need a helping hand? Want a demo?
To watch me go through the process of formatting onscreen or to ask any questions, book a place at the next Simply Scrivener Special. 60 minutes of Q&A on Scrivener with me, Anne Rainbow, ScrivenerVirgin!
To help me prepare, you could also complete this short questionnaire.
The ScrivenerVirgin blog is a journey of discovery:
a step-by-step exploration of how Scrivener can change how a writer writes.
To subscribe to this blog, click here.
Also … check out the Scrivener Tips
on my ScrivenerVirgin Facebook page.
Herbicides were developed in the 1940s. Here is a brief overview of their history and how they work.
There are a number of factors that should be taken into account before planting pecan nut trees.
Also known as red ear rot, this disease appears to be on the increase in South Africa’s maize-producing areas.
Pepper is a tropical plant that prefers hot, humid areas such as the Lowveld and the northern coastal areas of KwaZulu-Natal.
Barley is highly sensitive to competition from weeds, especially in the initial stages. Early control measures will therefore enhance yield potential.
The incidence of ear rot in South Africa’s maize-producing areas can vary greatly from year to year and from land to land within the same season.
A cover crop is a fundamental and sustainable tool used to manage various functions of soil health. It is defined as any type of plant grown to improve any number...
As with any crop, fertilisation can be successful only when the minimum acidity requirements are met.
Taking crops off the land without returning nutrients to the soil is called ‘mining’ the soil. If you do this for some years, your crops may grow slowly, have a...
Research programmes since 1991 have identified barley cultivars that ensure an economical, optimal yield and grain conforming to SAB Maltings’ quality specifications. This is an overview of the research.
Suggested application rates for some common crops, including tomatoes and potatoes.
Shepherd’s purse and sow thistle are weeds that are hard to eradicate once established, says Bill Kerr.
The third and last volume of Robert Skidelsky’s wonderful, engrossing biography of John Maynard Keynes is a triumph over its raw material. It covers the last decade of Keynes’s life—1937 to 1946. By 1937, Keynes, who was born in 1883, was a very sick man. The heart infection which was to kill him in 1946 was well established, and incurable. The backdrop against which the events of this last volume are played out is one of remorseless physical decline; mentally, he remained as sharply imaginative as ever until a few months before his death. Indeed, ill as he was, the ministrations of his wife and doctor ensured that even in the narrowest physical sense he survived the stresses of wartime better than most of his colleagues at the Treasury and the Bank of England. But his achievements were the achievements of a dying man.
During these years, Keynes set his hand to four great enterprises. He met defeat in all four—though the defeats were partial, and sometimes prepared the way for something better. He had a vision of how Britain might fight World War II without rationing, and without a totalitarian planning system. The little book in which he argued for the control of wartime inflation through compulsory savings—How to Pay for the War—was a high point of Keynesian economic argument. It was also a last gasp of Edwardian liberalism. Keynes did not win the subsequent debate, and government policy did not achieve what Keynes wanted; but Britain did avoid the inflation and the industrial strife of World War I.
His second defeat was in the negotiations with the United States for the economic assistance without which Britain could not have resisted Germany in late 1940 and 1941. Lend-Lease enabled Britain to devote all its energies to fighting the war, without undue anxiety about paying its bills as it went; but its terms were not what Keynes hoped the United States would agree to, and they became a heavy burden on the British economy. The third was over his plan for a Clearing Union. Keynes wanted to do more than restore the pre-1939—or the pre-1914—system of international finance. With his eye on the depression of the 1930s, and on the shortage of gold and hard currency reserves that inhibited international trade, he wanted something more radical: a true international bank, an institution which, like a national bank, would take in deposits and advance credit, and ensure that international trade was never restricted by a shortage of liquidity. Keynes’s plan was thus very different from the compromises of Bretton Woods, which set up the International Monetary Fund and the World Bank. To establish these institutions was no small achievement; but Keynes was not a willing architect of what emerged from Bretton Woods. It can be argued that he secured as much by way of international economic cooperation as was humanly possible, but he continued to fight for something better until a few …
SDSU researchers examine the effects of shrinking water supplies in the Imperial-Mexicali Valley.
Whenever it rained, six-year-old Trent Biggs would get in trouble for digging ditches in the school playground. “I just liked watching water flow around,” he explained.
He still does. Now a San Diego State University geography professor, Biggs leads water-use studies from the Himalayan foothills of Nepal to the Amazon rainforests of Brazil. Closer to home, he’s focused on the Sonoran desert towns and farms that surround SDSU’s Imperial Valley campus on both sides of the U.S.-Mexico border.
The problems there are as old as the urbanization of Southern California: insufficient water to meet community demands and ecosystem needs. The solutions, which could figure into future policy-making, are both increasingly high-tech and surprisingly personal.
“All the big environmental issues come together around water, and Imperial-Mexicali Valley is a great place to study all those issues because it incorporates them in one place,” Biggs said.
The Biggs Watershed Science Lab’s work in Imperial Valley is a collaborative effort, comprising multiple studies by faculty and students from both campuses. They are joined by research colleagues in the Imperial Valley at the Cooperative Extension Office of the University of California, Davis and the Imperial Irrigation District; by the nonprofits Pacific Institute and Comite de la Valle; and by researchers across the border at the Universidad Autónoma de Baja California and El Colegio de la Frontera Norte.
Together, these groups aim to assess the effects of shrinking water supplies in an arid region dependent on agriculture. Their primary goal is to provide information needed by current and future decision-makers to develop water policies benefiting people, economies and ecosystems.
”Time and time again, society has adapted to less water in ways that can end up making us better off,” Biggs said.
Imperial Valley’s history as an agricultural center began in the early 20th century when ambitious irrigation projects first brought Colorado River water to the area. Eventually, the demands of growing populations along the river’s route from the Rocky Mountains to the Gulf of California forced a continuing series of cuts in water allocations for agriculture.
Farmers in the valley have so far adapted to reductions in imported water by implementing conservation and efficiency measures, even leaving some fields unplanted in exchange for payments from the Imperial Irrigation District, ultimately funded by the San Diego County Water Authority.
While Biggs stresses the researchers’ job in the Imperial and Mexicali Valleys is to document the issues, not suggest solutions, preliminary data point toward a few new ideas worth exploring. One possibility: Farmers may be able to continue improving water efficiencies by switching crops without sacrificing revenue or reducing the workforce.
Biggs said most of the water used by Imperial Valley agriculture now goes to alfalfa, grown as animal feed. But salad greens and other grocery produce could bring in more money for the same amount of water, he added. Switching crops also could contribute to the Imperial Valley’s growing importance as a driver of California’s aggressive emissions reduction plan. Already, the region is bustling with clean energy projects—wind, solar and geothermal.
However, even the most well-intentioned water conservation methods can have unforeseen consequences. For example, new concrete liners have minimized leakage from old earthen irrigation canals in Imperial Valley. But Mexican farmers who depended on that seepage into underlying aquifers are seeing their land dry up.
“The whole idea of saving water often means taking supposedly ‘wasted’ water from another user,” Biggs said. “So we’re not sure what the ultimate impact of conservation policies will be on the water balance of the region. One of the big questions we’re looking at is: What is the future of the Imperial and Mexicali valleys under reduced water supply?”
The first step in answering that question is documenting and quantifying the impacts of current water and land use policies. To gather this data, Biggs and his students combine high-tech and old-school research methods: satellite photos and in-person interviews.
“We’re interested in using satellite imagery to see where groundwater levels and water quality are changing and where we should talk to people to learn how those changes have affected them,” Biggs said. “Our goal is to understand the cause of shifting land use in their fields and how are they responding.”
Graduate student Joel Kramer used this approach in gathering data for his master’s thesis. He first mapped water scarcity effects by noting the appearance or disappearance of green crop areas in satellite images of Imperial and Mexicali valley farms. Then he and the undergrads he mentors drove out and asked some 25 farmers in Mexico how those visible changes had affected them.
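The article doesn't say which vegetation index the researchers use, but mapping green crop areas from red and near-infrared satellite bands is commonly done with NDVI; the sketch below assumes that approach, and the 0.3 threshold and toy reflectance values are likewise assumptions.

```python
import numpy as np

# Flag green (actively cropped) pixels from red and near-infrared bands
# using NDVI = (NIR - Red) / (NIR + Red). Comparing the green fraction
# across years would show fields appearing or disappearing, as described.

def ndvi(nir, red):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def green_fraction(nir, red, threshold=0.3):
    """Fraction of pixels that look like actively growing vegetation."""
    return float((ndvi(nir, red) > threshold).mean())

# Toy 2x2 "scene" of reflectance values, not real imagery.
nir = np.array([[0.6, 0.5], [0.2, 0.1]])
red = np.array([[0.1, 0.1], [0.2, 0.1]])
print(f"Green-crop fraction: {green_fraction(nir, red):.2f}")  # 0.50
```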
Gabriela Morales, also an SDSU master’s student, was drawn to this mixed-methods research model after completing her bachelor’s degree at UCLA last year. She chose the study of geography over more heavily quantitative environmental sciences because of geography’s emphasis on human interactions with natural processes. Morales hopes her eventual findings will inform future water policy.
“I want to create a holistic view of what’s happening; I want to connect people to the environment,” Morales said.
Biggs considers the kind of fieldwork done by his students in Imperial Valley as an invaluable part of any educational experience in geography. To understand big problems, you need to see them firsthand.
“Meeting the people affected, hearing their stories, seeing it happen in front of your eyes—it’s hugely motivating and hugely educational,” Biggs said. “You get experience with how knowledge is created and discoveries are made. You see how stuff you learned in class came to be. You understand the problems and nuances that go into testing hypotheses and making statements about how the world works.”
Used by permission. This story is featured in the spring 2019 issue of 360: The Magazine of San Diego State University.
Gazes vs Gapes - What's the difference?
As verbs, the difference between gazes and gapes is that gazes is the third-person singular of gaze, while gapes is the third-person singular of gape.
As a noun, gapes is defined as follows:
(plural only) A fit of yawning.
(plural only) A disease of young poultry and other birds, caused by a parasitic nematode worm in the windpipe.
- The gapes is contagious: when one person gets the gapes, pretty soon you've got a roomful of gapers.
- The gapes has gotten to many of the birds.
Children love writing letters to Santa. This worksheet can help them write their first letter in English. First they read Michael and Janet's letters and fill in the gaps, and then they write their own letter. Hope you like it :-)
Level: elementary. Age: 8-12.
Copyright 06/12/2010 Jazuna
Shigella infection in Kerala: Symptoms, Causes, Prevention and Treatment
Health officials in Kozhikode district, Kerala, recently convened an emergency meeting and initiated preventive measures after six cases of Shigella infection and nearly two dozen suspected cases were detected.
About Shigella infection
According to the CDC, Shigella bacteria cause an infection known as shigellosis, a contagious intestinal infection. People suffering from Shigella infection have diarrhoea, fever, and stomach cramps. Symptoms usually begin 1-2 days after infection and last about 7 days. Most people recover without taking antibiotics.
The infection occurs mainly in children, especially in African and South Asian regions. Antibiotics are given to people with severe illness and to those with underlying conditions that weaken the immune system. Antibiotics shorten the duration of illness by about two days and might help reduce the spread of Shigella to others.
When the bacteria enter the body through ingestion, they attack the epithelial lining of the colon, causing inflammation and, in severe cases, destruction of the cells. It takes only a small number of Shigella bacteria to make a person sick.
Signs and symptoms of Shigella infection
The infection usually begins a day or two after contact with Shigella, though it may take up to a week to develop.
- Diarrhea (often containing blood or mucus)
- Stomach pain or cramps
- Nausea or vomiting
If a child or adult has bloody diarrhoea, or diarrhoea that causes weight loss and dehydration, it is urgent to see a doctor. It is also necessary to contact a doctor when diarrhoea is accompanied by a fever of 101 F (38 C) or higher.
How is Shigella infection caused?
- The most common way to spread the disease is direct person-to-person contact.
- Eating contaminated food.
- Swallowing contaminated water
Risk factors associated with it are:
- Shigella can infect people of any age but children under age 5 are most likely to get shigella infection.
- Close contact with other people may spread the bacteria from person to person.
- Travelling or living in an area that lacks proper sanitation.
Shigella Infection: Complications
Shigella infection usually clears up without complications, but it may take weeks or months before bowel habits return to normal. Complications may include dehydration, seizures, rectal prolapse, hemolytic uremic syndrome, toxic megacolon, reactive arthritis, bloodstream infection, etc.
Shigella Infection: Prevention
A Shigella vaccine is not yet available, although researchers are working on one. Measures that can be taken to prevent the spread of Shigella infection are as follows:
- Wash hands with soap and water frequently for at least twenty seconds.
- After use, disinfect diaper-changing areas.
- Throw away soiled diapers properly.
- If a person is suffering from diarrhoea then don't prepare food for others.
- Avoid swallowing water from ponds, lakes, or untreated pools.
- Avoid sexual activity with anyone who has diarrhoea or who recently recovered from diarrhoea.
- Don't swim until you have fully recovered, etc.
Understanding Ultrasonic Gas Flow Meters: An Essential Guide for Professionals in the Instrumentation and Flow Measurement Industry
Ultrasonic gas flow meters are indispensable tools in the field of instrumentation and flow measurement, particularly for gas and liquid flow rate calculation. This guide aims to provide professionals in the industry with a comprehensive understanding of the principles, applications, advantages, and limitations of ultrasonic gas flow meters.
1. How do Ultrasonic Gas Flow Meters work?
Ultrasonic gas flow meters work by measuring the time it takes for an ultrasonic signal to travel through the gas or liquid medium. These meters consist of two transducers that act alternately as transmitter and receiver, sending pulses diagonally across the pipe. A pulse travelling with the flow arrives sooner than one travelling against it, so by comparing the upstream and downstream transit times, the flow velocity, and from it the flow rate, can be determined.
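As a rough illustration of that transit-time principle, the sketch below simulates the two travel times for a known flow and then recovers the velocity from their difference; the path length, beam angle, and speed of sound are illustrative assumptions, not values from any particular meter.

```python
import math

# Transit-time principle: a pulse travelling with the flow arrives sooner
# than one travelling against it; the difference yields the flow velocity.

def flow_velocity(t_up, t_down, path_len_m, angle_deg):
    """Axial flow velocity (m/s) from upstream/downstream transit times."""
    cos_a = math.cos(math.radians(angle_deg))
    return path_len_m * (t_up - t_down) / (2 * cos_a * t_up * t_down)

# Simulate the two transit times for a known flow, then recover it.
c = 343.0          # assumed speed of sound in the gas, m/s
v_true = 5.0       # true axial velocity, m/s
L, angle = 0.3, 45.0                 # path length (m) and beam angle (deg)
cos_a = math.cos(math.radians(angle))
t_down = L / (c + v_true * cos_a)    # pulse travelling with the flow
t_up = L / (c - v_true * cos_a)      # pulse travelling against the flow

print(f"Recovered velocity: {flow_velocity(t_up, t_down, L, angle):.3f} m/s")
```

Multiplying the recovered velocity by the pipe's cross-sectional area (with a flow-profile correction factor in practice) gives the volumetric flow rate.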
2. Advantages of Ultrasonic Gas Flow Meters:
- Non-invasive: Ultrasonic gas flow meters do not require direct contact with the fluid being measured, making them suitable for applications where high pressure, corrosive, or hazardous fluids are involved.
- Wide range of applications: These meters can be used in various industries, including oil and gas, chemical, pharmaceutical, and water treatment, to measure flow rates of gases and liquids accurately.
- Minimal pressure drop: Ultrasonic flow meters have a negligible effect on the fluid flow and do not cause significant pressure drops, ensuring efficient operations.
- High accuracy: With advancements in technology, ultrasonic gas flow meters offer high accuracy and repeatability in measuring flow rates.
3. Limitations of Ultrasonic Gas Flow Meters:
- Limited use in certain fluids: Ultrasonic gas flow meters may face challenges in measuring flow rates accurately in fluids with low ultrasonic transmission capabilities, such as highly attenuative or viscous fluids.
- Installation requirements: Proper installation, including the correct alignment and pipe conditions, is crucial for accurate measurements. Obstructions or disturbances in the flow path can affect the meter's performance.
- Initial calibration: Ultrasonic gas flow meters require initial calibration to ensure accurate measurements. Factors such as fluid properties, flow profile, and temperature variations may influence the calibration process.
In conclusion, ultrasonic gas flow meters play a vital role in the instrumentation and flow measurement industry. Their non-invasive nature, wide range of applications, minimal pressure drop, and high accuracy make them a preferred choice for measuring gas and liquid flow rates. However, understanding their limitations and ensuring proper installation and calibration are essential for obtaining accurate and reliable measurements in various industrial processes.
1. The mental process by which a person makes sense of an idea by assimilating it to the body of ideas he or she already possesses.
- ‘Performances provide another such context as audiences are brought together in a heightened awareness of sharing patterns of embodied apperception.’
- ‘This apperception is indispensable because in the past non-state actors have been mere critics instead of playing their rightful role as eulogistic vehicles in the course of development.’
- ‘Where people differ is in the way that each of them typically makes use of the equipment; and this typical mode of apperception and responsiveness is what is meant in psychology by their type.’
- ‘From seashore strands to moors and mountains, from sand specks and protozoa to all-embracing panoramas, knowing and feeling were conjoined, not conflicting, modes of apperception.’
- ‘This could throw additional light upon the unconscious psychodynamic processes governing the perception and apperception, both sensory and extrasensory, of potentially threatening stimuli.’
- 1.1 Fully conscious perception. ‘an immediate apperception of a unity lying beyond’
- ‘There can be no question of an ultimate justification of morality in the sense of a transcendental deduction of the moral law in terms of the ‘I think’ and the transcendental unity of apperception.’
- ‘In the sabbath, we find a foretaste and an apperception of the common good in the rest we receive for ourselves and the rest we ensure for others.’
- ‘Self-consciousness, or the subject of the transcendental unity of apperception, was likewise impervious to cognition from the Kantian standpoint.’
- ‘These kinds of mental acts seem to be less naturally treated as atomic elements in a bundle, bound by a passive unity of apperception.’
- ‘He was the first to distinguish explicitly between perception and apperception, i.e., roughly between awareness and self-awareness.’
Mid 18th century: from French aperception or modern Latin aperceptio(n-), from Latin ad- ‘to’ + percipere ‘perceive’.
Speech & Language
In school, children and young adults use language for everything. The ability to assimilate class content into learning, reflect on what one knows, then produce a written or oral piece of work to demonstrate learning depends upon language processing skill. Further, the ability to use clear and smooth speech to communicate knowledge is crucial to showing what one has learned. Informed assessment and treatment can get to the heart of speech and language difficulties to remediate underlying processing issues and improve academic outcomes. The Speech Language Pathologist will work with families and teachers to diagnose and treat speech and language processing issues.
Central Auditory Processing Disorder (CAPD)
CAPD means that a person can hear everything but has difficulty processing what they hear, neurologically. CAPD affects not only how much language one can access from what one hears; it also interrupts the learning trajectory a student follows to acquire literacy. Cumulatively, this makes a huge impact on how a student can access the curriculum. Thorough assessment can isolate these processing levels, while therapy aims at implementing strategies to reduce the impact of CAPD while remediating underlying neurological processing difficulties. The Speech-Language Pathologist and Educational Therapist work in consult with the student's Audiologist and, in some cases, Paediatrician and Educational Psychologists to assess and treat auditory processing difficulties and disorders.
Students diagnosed with dyscalculia have trouble processing mathematical information and math language. These students can have difficulty with word problems, as well as with consolidating math learning to progress through the curriculum at the expected pace. When a student has difficulty with this, they can get left behind; the more this happens, the harder it is to catch up. Therapy focused on processing math language will support the student to become an autonomous math learner and regain their love of numbers and problem-solving. The Educational Therapist is uniquely positioned to isolate subtle and broad problem areas. Rather than simply reteaching, as is the case with traditional tutoring, the Educational Therapist, sometimes in consult with the Speech-Language Pathologist, can specifically isolate processing difficulties that contribute to dyscalculia. From there, a remediation plan that supports narrowing the gap in learning, and facilitating independence, can be implemented.
Dyslexia is often misdiagnosed in young and intermediate learners or diagnosed very late when problems have already taken hold. Reading is not a developmental milestone, but rather something we are taught. However, structural differences in the brains of people with dyslexia mean they can have trouble in accessing the neurological processes used to acquire literacy. Thorough assessment to ascertain a student’s literacy profile informs targeted therapy. The Educational Psychologist will work with the Educational Therapist and Speech-Language Pathologist to fully assess and diagnose learning disorders. Educational Therapists can then work with Speech-Language Pathologists to ensure all processing issues are addressed and treated to ensure positive gains are made. A tailored approach uses the brain’s ability to change to ensure that the student can access literacy and use it to learn and communicate effectively in class, as well as to enjoy the pleasure of reading for fun!
Attention Deficit Hyperactivity Disorder (ADHD) and related neurological disorders affect how well a child learns in school. The ability to regulate one's body, process teacher instructions, and complete a task autonomously are at the heart of academic learning. Unfortunately, these are difficult for a child with attention and memory issues, even though they may be very bright and articulate. Educational Psychologists, and in some cases Paediatricians, work with Occupational Therapists, Speech-Language Pathologists, Educational Therapists and Teachers to form a holistic strategy-based plan. Students benefit from a broad approach to assessment and therapy that can identify strengths and difficulties and include strategies for success, while still working on remediating underlying neurological processing issues.
Dyspraxia affects a student’s ability to conceptualise what they have learned, and to plan and carry out a task to demonstrate learning. Students with dyspraxia can get left behind because they need longer to process for learning and producing their work. Therapy aimed at supporting access to the curriculum and remediating specific processing breakdowns can be very successful in helping students with dyspraxia meet their potential. Occupational Therapy yields positive gains, in consult with the Speech and Language Therapy and Educational Psychology to form a holistic plan for success.
Autism Spectrum Disorder (ASD)
While people with ASD are each very different, there are some common difficulties that may affect academic and social success. The ability to think flexibly about the world and from other people's perspectives affects how one learns and engages with the learning context. Difficulties with planning and self-evaluation can affect the quality of work a student produces, even though the student may be highly intelligent. Thorough assessment aims at getting to the heart of each student's processing profile, learning style, and personality to inform how the team approaches the support process. With targeted intervention, children with ASD can enjoy academic success and a higher quality of life. ASD is usually diagnosed by a Psychologist, while treatment is carried out with many professionals including Speech-Language Pathologists and Occupational Therapists, in consult with the family and teachers.
White-Top Pitcher Plant
Part of the Wild Garden, white-top pitcher plants can be found along the Boardwalk in the Bog area. Upon reopening, pitcher plants and other carnivorous plants will take center stage as part of the “Trapped” exhibit, which features larger-than-life sculptures.
The white-top pitcher plant (Sarracenia leucophylla) is an herbaceous, perennial plant species native to the gulf coastal plain of the southeastern United States. Found in bogs, savannas, flatwoods and cypress depressions, from the Florida panhandle to southeast Mississippi, these iconic carnivorous plants can form large colonies where they create stunningly beautiful vistas. The white coloration atop the pitcher makes this species one of the most striking and readily identifiable. Despite its beauty, the pitcher is actually a modified leaf with a much more practical purpose: to lure, trap and digest insect prey.
The development of carnivory in pitcher plants is an evolutionary adaptation enabling these plants to grow in acidic, anaerobic soils where critical nutrients (e.g., nitrogen) are severely limiting. Pitchers are passive traps (i.e., without moving parts) and lure insects using a combination of scent and color. Upon arrival, insects are attracted to a nectary at the base of the hood. Insects unable to safely navigate this nectary fall to the bottom of the pitcher, where a combination of downward-pointing hairs and waxy (slippery) cuticle prevents escape. Digestive enzymes released by the plant facilitate the decomposition process, and critically needed nutrients are absorbed during the process.
The white-top pitcher plant is a fire-adapted species and benefits from periodic dormant season burns. One of the most common pitcher plant species in the southeast, white-top pitcher plant still faces widespread habitat loss from agricultural conversion, residential development, and fire suppression. Several large populations do exist on protected land, but poor management practices (e.g., fire suppression, hydrologic alteration) continue to negatively impact this species. The white-top pitcher plant is also popular in the cut flower industry, and although harvesting of pitchers does not generally kill plants, it is injurious nonetheless and should be avoided.
The white-top pitcher plant is a variable species with pitchers ranging in color from almost pure white to white with prominent green and red venation. Pure white forms are popular among collectors, as are selections devoid of anthocyanin (red pigmentation). While variation in pitcher color receives the bulk of the attention by collectors, the flowers of the white-top pitcher are quite stunning in their own right. Large, single red flowers emerge in March and April and are pollinated by several different bee (and fly) species. ‘Tarnok’ is a unique horticultural selection that exhibits a floral mutation where all flower parts (i.e., petals, stamens & pistil) have been replaced by additional whorls of reddish-green sepals, giving the flower a double or triple effect.
The white-top pitcher plant is not native to central Florida but has performed well at Bok Tower Gardens. This species is also remarkably cold hardy and can be grown as far north as Kentucky and Virginia. This is a relatively easy species to cultivate, provided plants have access to full sun and ample water. Supplemental fertilization is unnecessary and should be avoided, as should tap water, particularly where municipal water sources are alkaline and have a high dissolved salt content.
White-top pitcher plant should do well in a 1:1 Canadian peat and sand mix provided the soil is never permitted to dry out. Conversely, soils should never be inundated for long periods of time either. In colder climates, dead pitchers can be removed during the dormant season and a layer of straw (or pine straw) applied to help plants overwinter. As always, please remember to purchase your plants from a reputable source.
This blog was written by Patrick Lynch, Plant Accessioning Curator, and photographed by Cassidy Jones, Social Media Coordinator.
When we study the Bible we will find that the word hell is translated four different ways. This is not a mistake. It was done this way to express the types or areas that hell occupies and how they will affect sinners and sinning angels.
The words used are “Sheol,” “Hades,” “Gehenna,” and “Tartarus.” All of these words have been translated “hell” or “hell fire” in the Bible. Let’s look at all four.
- In the King James Bible, the Old Testament term Sheol is translated as "hell" 31 times, and as "the grave" 31 times. Sheol is also translated as "the pit" three times. Modern translations typically render Sheol as "the grave", "the pit", or "death." Both Sheol and Hades are related to death, or the temporary abode of the dead. Some Bible translations render them as “grave,” or “pit.” In all of these cases it is where the body goes at the time of death. It is generally agreed that both Sheol and Hades do not typically refer to the place of eternal punishment.
- Hades is the Greek word traditionally used to translate the Hebrew word Sheol in the Septuagint, the Greek translation of the Hebrew Bible. Like other first-century Jews literate in Greek, Christian writers of the New Testament employed this usage. While earlier translations most often rendered Hades as "hell", as does the King James Version, modern translations use the transliteration "Hades", or render the word as "to the grave", "among the dead", "place of the dead" or similar statements. In Latin, Hades could be translated as Purgatory after about 1200 AD, but no modern English translations render Hades as Purgatory. The New Testament use of Hades builds on its Hebrew parallel, Sheol, which was the preferred translation in the Septuagint.
- In the New Testament, both early (King James Version) and modern translations often translate Gehenna as "hell". Young's Literal Translation and New World Translation are notable exceptions, simply using the word "Gehenna". All the references to Gehenna, except James 3:6, are from the lips of Christ himself, and there is an obvious emphasis on the punishment for the wicked after death as being everlasting. The term Gehenna is derived from the Valley of Hinnom, traditionally considered by the Jews as the place of the final punishment of the ungodly.
Another word that is translated hell is used in 2 Peter 2:4.
2 Peter 2:4
For if God spared not the angels that sinned, but cast them down to hell (Tartarus), and delivered them into chains (pits) of (dense) darkness, to be reserved unto judgment;
Used only in this verse, “hell” is Strong’s number 5020, which means “Tartaros (the deepest abyss of Hades); to incarcerate in eternal torment.” This word is never used in reference to humans, only for demons, and it does not mention fire. Tartarus is a Greek name for a subterranean place of divine punishment that is lower than Hades.
In the New Testament, both early and modern translations usually translate Tartarus as "hell", though a few render it as "Tartaro". The word Tartarus is only found once in the Bible.
And the angels which kept not their first estate, but left their own habitation, he hath reserved in everlasting chains under darkness unto the judgment of the great day. (Jude 1:6)
This second example in Jude of God’s punishment for disobedience describes certain angels, not those who live in heaven and glorify God, but those who did not stay within the limits of authority that God gave them but left the place where they belonged.
Once pure, holy, and living in God’s presence, they (some angels) gave in to pride and joined Satan to rebel against God. They left their positions of authority and their dwelling with God, resulting in eventual doom. Peter explained that God “did not spare even the angels when they sinned” (2 Peter 2:4). Scholars differ as to which rebellion Jude referred to. This could refer to the angels who rebelled with Satan (Ezekiel 28:15).
You were blameless in your ways from the day you were created till wickedness was found in you.
Through your widespread trade you were filled with violence, and you sinned. So I drove you in disgrace from the mount of God, and I expelled you (Satan), guardian cherub, from among the fiery stones.
Your heart became proud on account of your beauty, and you corrupted your wisdom because of your splendor. So I threw you to the earth; I made a spectacle of you before kings.
More likely this verse pertains to the sin of the “sons of God” as described in (Genesis 6:1-4).
When human beings began to increase in number on the earth and daughters were born to them,
. . . the sons of God saw that the daughters of humans were beautiful, and they married any of them they chose.
Then the LORD said, "My Spirit will not contend with humans forever, for they are mortal; their days will be a hundred and twenty years."
The Nephilim were on the earth in those days, and also afterward, when the sons of God went to the daughters of humans and had children by them . . . .
Demons, in this case, are the Sons of God who are evil, twisted beings, so nothing they do should surprise us. As to a distinct motivation, one speculation is that the demons were attempting to pollute the human bloodline in order to prevent the coming of the Messiah. God had promised that the Messiah would one day crush the head of the serpent, Satan (Genesis 3:15). The demons in Genesis 6 were possibly attempting to prevent the crushing of the serpent (Satan) and make it impossible for a sinless “seed of the woman” to be born. This is not a specific biblical answer, but it is biblically plausible.
Nephilim were offspring of the "sons of God" and the "daughters of men" before the Deluge (flood) according to Genesis 6:4; the name is also used in reference to giants who inhabited Canaan at the time of the Israelite conquest of Canaan according to Numbers 13:33.
An interpretation given in the book of Enoch in the Apocrypha states that angels came to earth and took women as sexual partners. Though not in the Bible, Jewish theology at this time held that some fallen angels (demons) were held in chains and some were free to roam this world to oppress people. Jude’s readers apparently understood his meaning, as well as the implication that if God did not spare his angels, neither would he spare the false teachers. Pride and lust had led to civil war and to the angels’ fall. The false teachers’ pride and lust would lead to judgment and destruction.
As for these disobedient angels, God has kept them chained in prisons of darkness, waiting for the Day of Judgment. These angels were imprisoned in Tartarus (1 Peter 3:19-20; 2 Peter 2:4).
1 Peter 3:19
After being made alive, he went and made proclamation to the imprisoned spirits,
1 Peter 3:20
. . . to those who were disobedient long ago when God waited patiently in the days of Noah while the ark was being built.
Some scholars describe the “chains” as metaphors for the confinement of “darkness”; others take them to be literal chains in a dark pit somewhere in the lowest abode in hades. These sinful angels will be kept in this place of punishment until the great Day of Judgment, when they will face their final doom.
Your questions and comments are always appreciated.
Ganga, the Hindu river goddess, is identified with the purity of water, which nourishes the land and is the source of all life. The Ganges River, located in the northeast of India is believed to cleanse bathers of their sins. Water rituals, including Ganga worship, demonstrate the crucial role of water in India, which periodically experiences devastating floods or drought. Like most of the Hindu gods, Ganga has an attendant animal, in this case an aquatic creature called a makara, here with a crocodile’s body and an elephant’s trunk; the makara also sometimes appears as a dolphin or fish.
Ganga Devi, goddess of the river Ganges, standing on an aquatic creature (makara)
Children in nurseries will soon be learning through Moocs (Massive Open Online Courses) as the internet revolution changes the face of learning, according to the man who first pioneered their use in higher education.
These teachers see the internet and digital technologies such as social networking sites, cell phones and texting, generally facilitating teens’ personal expression and creativity, broadening the audience for their written material, and encouraging teens to write more often in more formats than may have been the case in prior generations. At the same time, they describe the unique challenges of teaching writing in the digital age, including the “creep” of informal style into formal writing assignments and the need to better educate students about issues such as plagiarism and fair use.
"Lots people want to get started with game based learning, gamification and serious games in their training. We’ve been curating game related content for over a year and a half while conducting our own research and case studies. Here are 100 articles related to games and learning. Some of them are research-based, while others just offer an interesting perspective to spark discussion. Take what you need and share this with a colleague."
This study focused on how students perceive the use of mobile devices to create a personalized learning experience outside the classroom. Fifty-three students in three graduate TESOL classes participated in this study. All participants completed five class projects designed to help them explore mobile learning experiences with their own mobile devices, incorporating technologies such as YouTube and VoiceThread. We identified characteristics of these mobile users in Mobile Language Learning (MLL), and the results illuminate how MLL opens up new pedagogical scaffoldings.
“Curation is an important skill to develop, especially in an environment in which more and more organizations shift towards self-directed learning for their workers. Now is the time for learning and performance professionals to develop this new skill set.”
Definitions for imitation
ˌɪm ɪˈteɪ ʃən; im·i·ta·tion
This dictionary definitions page includes all the possible meanings, example usage and translations of the word imitation.
the doctrine that representations of nature or human behavior should be accurate imitations
something copied or derived from an original
copying (or trying to copy) the actions of someone else
caricature, imitation, impersonation (noun)
a representation of a person that is exaggerated for comic effect
fake, false, faux, imitation, simulated (adjective)
not genuine or real; being an imitation of the genuine article
"it isn't fake anything; it's real synthetic fur"; "faux pearls"; "false teeth"; "decorated with imitation palm leaves"; "a purse of simulated alligator hide"
The act of imitating.
Samuel Johnson's Dictionary
Etymology: imitatio, Latin; imitation, French.
Since a true knowledge of nature gives us pleasure, a lively imitation of it, either in poetry or painting, must produce a much greater; for both these arts are not only true imitations of nature, but of the best nature. Dryden.
In the way of imitation, the translator not only varies from the words and sense, but forsakes them as he sees occasion; and, taking only some general hints from the original, runs division on the groundwork. Dryden.
Imitation is the act of copying or reproducing someone's behavior, appearance, expression, or actions. It is often used as a learning strategy, especially in social and behavioral contexts. It involves replicating an observed behavior of a model or prototype with the goal of achieving a similar outcome.
the act of imitating
that which is made or produced as a copy; that which is made to resemble something else, whether for laudable or for fraudulent purposes; likeness; resemblance
one of the principal means of securing unity and consistency in polyphonic composition; the repetition of essentially the same melodic theme, phrase, or motive, on different degrees of pitch, by one or more of the other parts or voices. Cf. Canon
the act or condition of imitating another species of animal, or a plant, or inanimate object. See Imitate, v. t., 3
Etymology: [L. imitatio: cf. F. imitation.]
Imitation is an advanced behavior whereby an individual observes and replicates another's behavior. Imitation is also a form of social learning that leads to the "development of traditions, and ultimately our culture. It allows for the transfer of information between individuals and down generations without the need for genetic inheritance." The word imitation can be applied in many contexts, ranging from animal training to international politics.
The Roycroft Dictionary
The sincerest form of insult.
I love when people say 'Imitation Game' is such a crowd pleaser, yes, it's a crowd pleaser but the guy kills himself. We've achieved something, it's a beautiful challenge.
For those who intend to discover and to understand, not to indulge in conjectures and soothsaying, and rather than contrive imitation and fabulous worlds plan to look deep into the nature of the real world and to dissect it -- for them everything must be sought in things themselves.
By three methods we may learn wisdom: first, by reflection, which is noblest; second, by imitation, which is easiest; and third, by experience, which is the bitterest.
Lukashenko will make an imitation of democracy like he did every time he badly needed cash; the West has quite a short memory.
If music in general is an imitation of history, opera in particular is an imitation of human willfulness; it is rooted in the fact that we not only have feelings but insist upon having them at whatever cost to ourselves. The quality common to all the great operatic roles, e.g., Don Giovanni, Norma, Lucia, Tristan, Isolde, Brünnhilde, is that each of them is a passionate and willful state of being. In real life they would all be bores, even Don Giovanni.
Translations for imitation
From our Multilingual Translation Dictionary
- imitació (Catalan, Valencian)
- napodobenina, imitace, napodobení (Czech)
- efterligning, imitation (Danish)
- Nachahmung, Imitation, Imitat, Kopie (German)
- jäljitelmä, jäljittely, imitaatio, kopio (Finnish)
- 模倣, 真似, 模造品 (Japanese)
- imitasjon (Norwegian Nynorsk)
- imitare, imitație (Romanian)
- подражание, имитация (Russian)
What are the 6 guidelines for presenting visual aids?
Presentation Tips: Guidelines for Visual Aids
- Use bullet points only (no sentences)
- Minimum font size guideline is 28 point.
- Use color.
- Use one simple font.
- Use upper and lower case.
- 4 x 6 rule: Use either four lines of text with six words per line, or six lines of text with four words per line.
What are the presentational aids?
Presentational aids are items other than the words of a speech that are used to support the intent of the speaker. In particular, they can be visual aids, audio aids or other supporting technology. Visual aids include projectors, physical objects,. photographs, diagrams, charts and so on.
What is the best presentation aid?
The type of presentation aids that speakers most typically make use of are visual aids: pictures, diagrams, charts and graphs, maps, and the like. Audible aids include musical excerpts, audio speech excerpts, and sound effects. A speaker may also use fragrance samples or food samples as olfactory or gustatory aids.
What is the most important visual aid?
The first point to consider is: what is the most important visual aid? The answer is you, the speaker. You will facilitate the discussion, give life to the information, and help the audience correlate the content to your goal or purpose.
What are six ways to deliver an effective presentation?
6 Ways to Create More Effective Presentations
- Write a statement of purpose for the presentation and keep it to one sentence. …
- Use visualizations and have data tables available to hand out as needed. …
- Write out insights; don’t just show graphs. …
- Be brief and cut out all extraneous information from your presentation.
What are 3 characteristics of an effective visual aid?
Visual aids must be clear, concise and of a high quality. Use graphs and charts to present data. The audience should not be trying to read and listen at the same time – use visual aids to highlight your points. One message per visual aid, for example, on a slide there should only be one key point.
What is the main goal of a presentation aid?
Presentation aids can fulfill several functions: they can serve to improve your audience’s understanding of the information you are conveying, enhance audience memory and retention of the message, add variety and interest to your speech, and enhance your credibility as a speaker.
What makes a great presentation?
Good presentations are memorable. They contain graphics, images, and facts in such a way that they’re easy to remember. A week later, your audience can remember much of what you said. Great presentations are motivating.
What are the good presentation skills?
How can you make a good presentation even more effective?
- Show your Passion and Connect with your Audience. …
- Focus on your Audience’s Needs. …
- Keep it Simple: Concentrate on your Core Message. …
- Smile and Make Eye Contact with your Audience. …
- Start Strongly. …
- Remember the 10-20-30 Rule for Slideshows. …
- Tell Stories.
Creature Feature – Gopher Tortoise
This week’s featured creature is the Gopher Tortoise.
Originating 60 million years ago, the gopher tortoise is one of the oldest living species on the planet. It is named as such because it digs deep burrows — like a gopher, a species of burrowing rodent. It became the official state tortoise of Florida in 2008 and is considered a keystone species.
Scientific Name: Gopherus polyphemus
© The Nature Conservancy
Gopher Tortoise Fact File
Size: Individuals can measure up to 28 cm long and weigh up to 4.5 kg
Distribution: These long-lived reptiles can be found from southern South Carolina through the southern half of Georgia, into Florida, and west into southern Alabama, Mississippi and Louisiana. However, the species is nearly extinct in South Carolina and Louisiana and rare in both Mississippi and Alabama
Diet: These tortoises are herbivorous scavengers and opportunistic grazers. Their diet consists primarily of plants, of which they consume over 300 species. They also eat mushrooms and fruits such as the gopher apple, and nettles. A very small proportion of their diet is composed of fungi, lichens, carrion, bones, and insects.
Behaviour: The gopher tortoise is a keystone species, meaning that it’s very important to the health of the ecosystem it inhabits. Gopher tortoises share their burrows with more than 350 other species, providing shelter to hundreds of different animals ranging from frogs to owls and even endangered indigo snakes
IUCN Status: Vulnerable. Their main threat is one faced by many species worldwide: habitat loss. One of their favorite habitats, longleaf pine forest, once covered 90 million acres unbroken from Virginia to Florida to Texas. Less than 5% of the original longleaf pine forest remains today.
The complete form of this special version of "Thank you" is gochisôsama deshita. Go- means "honorable." Chisô means "feast" or "entertainment." Sama originally meant "lord," and is a way of turning a noun into a personification. Deshita means "was" or "were."
So the real meaning of gochisôsama deshita is "you were an honorable host."
There are several ways to say this. In informal situations, people like to say gochisôsan, essentially a shortened version of the full phrase with the honorific sama changed to a more friendly san. The Kansai dialect spoken by natives of Kyoto and Osaka likes to replace "sa" and "da" sounds with "ha" sounds, and uses gochisôhan.
Also, the phrase isn't restricted to right after the meal. If someone takes you out to dinner, and you see them the next day, you can say Kinô wa gochisôsama deshita, "Thank you for being such an honorable host yesterday." The same goes for today, last week, or whenever. It makes a good icebreaker in conversation if you know how to use it properly.
Bluff City bicentennial: How the Cotton Exchange shaped Memphis
Without cotton, Memphis as we know it wouldn't exist.
The industry was fueled by the fertile soil in the Mississippi Delta, technological advancement that brought the cotton gin, slaves, later replaced by sharecroppers and others working for free or cheap, and a global demand for the crop.
Although cotton was bought and sold in Memphis for most of the half-century before the Civil War, it wasn't until nearly a decade after the war that a group of growers, buyers, financiers and others whose livelihoods were tied to the crop formed the Memphis Cotton Exchange in 1873.
The first location of the Cotton Exchange was in a modest building near Madison Avenue and Second Street. It then moved to a larger building near the same intersection. The final stop was the 12-story building at 65 Union Ave., where it moved in the mid-1920s and remained until it closed in 1978.
If the story of Memphis had to be told through a single building, that one at the corner of Union Avenue and Front Street — the center of what was then known as Cotton Row — is among the few that could tell it.
In 2006, the first floor became home to the Cotton Museum, which looks back on how cotton influenced everything in Memphis from the economy to the music to large cultural events.
"This floor sat empty from 1978 to 2006," said Ann Bateman, Cotton Museum manager. "They basically walked out and locked the door. All the stuff was still here. I wouldn’t call it preserved. It was just left. ... They just literally walked out and shut the door and turned the lights out."
Bales of cotton and pints of bourbon
Much like the name Wall Street refers to the home of the New York Stock Exchange and the financial industry ecosystem surrounding it, Cotton Row, the strip of Front Street in Downtown Memphis between Monroe and Gayoso avenues, was known simply as "the street" to cotton traders.
The street, including 28 buildings and two vacant lots, is now on the National Register of Historic Places. According to the National Register nomination form submitted the year before the exchange closed, the cotton trade got its formal start within the first decade of Memphis' founding.
"From 1826, when 300 bales were brought to Memphis by wagon, to the present, cotton has been important to Memphis," the application said, adding that by the late 1880s and 1890s, that number skyrocketed to 400,000 to 700,000 bales of cotton every year.
Much of that cotton was bought and sold along Cotton Row.
In the heart of it all was the Memphis Cotton Exchange. And on the ground level where the museum now sits was "the floor," said Calvin Turley, who started his career inside the building when it was still functioning as the exchange and still runs his cotton company from the fourth floor.
On the floor, cotton merchants would gather to agree on the value of bales of compressed cotton that sat about 5 feet tall and 2 feet wide.
"For most of the history of cotton merchandising in this country, the value of a 500-pound bale of cotton was determined by a sample — approximately shoe box size, let's say — which you would look at and read like you would a book," Turley said.
The color of the cotton, the leaf content, and the length of the fibers when bunches were pulled apart all contributed to its value.
"You needed to be where the brokers and merchants were so you could go look at samples and determine the value of this commodity that your intention was to trade," he said. "There was every good reason to have a place where people would meet one another."
That need to be close created a cotton ecosystem.
Inside the Cotton Exchange building, cotton classers were on the top floor where the natural light was best. It was their job to examine cotton samples and sort them into different classes based on quality. The "squidge" worked alongside the classer as an apprentice learning the business. Until bright light bulbs were developed to mimic sunlight, this work could be done only on clear, sunny days and near windows.
Porters — usually black — maintained the street, moved cotton bales and performed other labor-intensive jobs.
Outside the building, cotton factors, who lent money to farmers to buy seed and merchants to buy cotton, set up alongside cotton insurers, warehouse representatives, steamship agents and others.
"It was to be a place where rules were made, people could gather, markets were shown on the board," Turley said. "It was all about the buying and selling of cotton."
Just as that proximity facilitated business, it also created a social order maintained by lively games of dominoes, practical jokes and often-raucous drinking.
"There was a lot of drinking on the street, which was maybe a combination of a certain level of anxiety produced by speculation during the season and then nothing to do in the summertime," Turley said. "One of my first jobs was going to the Cotton Bowl liquor store and buying half pints of Old Yellowstone bourbon, which was not a very respectable brand, but it kept fires going."
The death and rebirth of Downtown
The buildings on Cotton Row didn't match the elaborate architecture of some of the others that were rising around it, but even that was a sign of the time.
"Although built during a period of exuberant architectural expression, the Cotton Row buildings are primarily a product of function rather than prevailing fashion," the National Register nomination form said.
These buildings were there to get business done, not to look pretty. But just as the technology of the cotton gin helped to make the cotton industry viable, technology eventually made Cotton Row and the exchange obsolete.
While some of the world's leading cotton companies — including Dunavant Enterprises and Hohenberg Cotton, which later sold to Cargill — came from the shops on Cotton Row, as technology developed, traders could make deals over the phone from miles away and had no reason to come to Cotton Row.
As businesses disappeared and consolidated on Cotton Row, others in Downtown were shutting down too, although for different reasons.
According to Charles Crawford, a historian with the University of Memphis, as cars became more common, more and more people were willing to commute for work. As they moved east to the Memphis suburbs, businesses followed and East Memphis became the new central business district.
Opposition to school integration and the assassination of Dr. Martin Luther King Jr. at the Lorraine Motel only accelerated the already declining Downtown.
By the time the Memphis Cotton Exchange closed in 1978, much of Downtown was already desolate.
"You could have fired a gun down Main Street and wouldn't hit anyone," Crawford said.
But just as the exchange building followed the trend of a waning Downtown, it helped lead the revival.
Henry Turley, who helped pioneer a wave of Downtown development and revitalization, bought the building in the mid-1980s with partner Clyde Patton. Henry Turley, Calvin Turley's brother, had already renovated two vacant buildings when he and Patton set their sights on this one.
"The building was nearly empty," Henry Turley said. "There was nothing on Main Street. This place was pitiful."
Henry Turley persuaded tenants to move in by offering part ownership of the building rather than just rental space.
After this building, Henry Turley went on to lead a new wave of residential development to encourage more density Downtown, an effort that is still underway decades later.
As for the exchange building, it is still being used for what it was always used for — offices. Now, it houses advertisers, lawyers, real estate developers and, yes, even cotton traders.
In Japan we continually face the threat of natural disaster, but despite this, our understanding of the forces of nature remains rather limited. It is thus imperative that we respect nature and remind ourselves of how important it is to incorporate natural disaster prevention measures, environmental preservation strategies, and economic considerations into our approaches to the development of land and the creation of urban centers. The Hyogoken-Nanbu earthquake, an earthquake of formidable power that occurred directly below a densely populated urban center with an advanced infrastructure, caused tremendous damage for three main reasons: 1) earthquake resistance of structures was insufficient; 2) infrastructure and other fundamental urban systems were deficient; 3) post-earthquake crisis management was deficient. All three of these explanations are closely linked. There is no guarantee that structures built to particular standards will be able to withstand all the assaults of nature. Therefore, in addition to reinforcement of the earthquake resistance of structures, a comprehensive program of earthquake disaster prevention should be developed from a broad viewpoint.
The 1995 Hyogoken-Nanbu Earthquake demonstrated that we had forgotten about the devastation caused by previous earthquakes, and the danger this forgetfulness poses. Further, it revealed the need to fully understand the extent of the damage a major earthquake can create, particularly since our urban centers are now so densely populated and have such complex infrastructures. To minimize disaster in the future, it is essential that people place a high value on disaster prevention and maintain a high level of awareness of what it entails, all of which can only be accomplished through ongoing education and training.
The single most important thing we can do to minimize the likelihood of another disaster is to thoroughly analyze why the Hyogoken-Nanbu earthquake was so devastating. This requires engineers and researchers in each relevant field to conduct detailed, broad-ranging studies based on information such as earthquake motion observation records and structural damage surveys.
This proposal is a compilation of items that the JSCE considers desirable from an academic point of view, and some of them must await future research and development before being realized. It is our sincere hope that this paper will serve the needs of various organizations working to develop earthquake disaster prevention measures.
The Hyogoken-Nanbu earthquake severely damaged many civil engineering structures and was caused by the activity of an inland fault which was, unfortunately, near a large urban center. Earthquake motion in the near field of an active fault of magnitude 7, however, had not been incorporated into conventional earthquake-resistant design standards. The very strong earthquake motions of the Hyogoken-Nanbu earthquake, which had a maximum acceleration of about 8 m/s², a maximum velocity of about 1 m/s, and a maximum displacement of about 30-50 cm, were widely observed near the fault, the first time such observations had been made in Japan. The severity of the damage can be attributed to the extremely strong earthquake motion, with forces beyond the design criterion, that directly struck above-ground structures built before the introduction of elasto-plastic design, as well as underground structures which had been considered relatively safe. Many structures built with the latest earthquake-resistant technologies, however, were not severely damaged, an indication that strong earthquake motions near a fault can be overcome through engineering.
The return period of an active fault is thought to be about 1,000 years, so through the course of history it has been rare for active faults to directly strike major urban centers and cause severe damage. Expressed in a time frame more relevant to human life, the likelihood of such a disaster occurring over a period of 50 years is roughly 5%. Since the level of risk is low, strategic judgments must be made in order to maintain the capacity of civil engineering structures to withstand earthquakes. However, there have been quite a few instances in which serious damage has resulted from inland earthquakes with a magnitude of 7 or more. Therefore, even though the risk level is low, it is still possible for strong earthquakes of this type to strike somewhere in Japan, so their potential for disaster should not be ignored. To take full advantage of the bitter experience of the Hyogoken-Nanbu Earthquake, therefore, it is necessary to incorporate the effects of earthquake motions in the near field of inland faults into earthquake-resistant design considerations.
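As a rough check on this figure, here is a sketch of the arithmetic, assuming fault rupture can be idealized as a Poisson process with a mean return period of 1,000 years (the independence assumption is ours, not the proposal's):

```latex
% Probability of at least one rupture in t = 50 years for a fault with
% return period T = 1,000 years, modeled as a Poisson process:
P = 1 - e^{-t/T} = 1 - e^{-50/1000} \approx 0.049 \approx 5\%
```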
Two types of earthquake motions should be considered in assessing the aseismic capacity of civil engineering structures. The first type is likely to strike a structure once or twice while it is in service. The second type is very unlikely to strike a structure during the structure's lifetime, but when it does, it is extremely strong. The second type of ground motion includes those generated by interplate earthquakes in the ocean and those generated by earthquakes on inland faults. The concepts behind these two types of motion have been incorporated into the existing earthquake-resistant design of some structures, and the two types are called "Level I earthquake motions" and "Level II earthquake motions." Objectives for and characteristics of these earthquake motions in earthquake-resistant design are as follows:
(1) Level I earthquake motions represent the level at which structures should not be damaged when these motions strike.
(2) Level II earthquake motions represent the level at which the ultimate earthquake-resistant capacity of a structure is assessed in the plastic deformation range.
Level I earthquake motions are used in conjunction with the elastic design method and are established as earthquake motions for static load analysis or elastic dynamic analysis. There are many different types of civil engineering structure, and systems of design methods, and the knowledge behind them, have been developed for each through experience. These systems and the accumulated knowledge should be respected. In existing design systems for road bridges, Level II earthquake motions are treated as design earthquake motions with an elastic response of 1 G on standard ground. However, since the earthquake motions in the Hyogoken-Nanbu earthquake were so destructive, there is a need to re-evaluate Level II earthquake motions to account for the very strong motions generated in the near field of inland faults.
A problem specific to direct inland earthquakes is that the relative displacement caused by the dislocation of an earthquake fault reaches the ground surface and structures straddle the fault. Using existing technology to deal with this situation is problematic because of the difficulty of specifying the exact locations of faults and the inevitability in many cases of linear structures crossing faults. Solutions to these problems require further research and development.
The following concepts are used to determine Level II earthquake motions.
(1) Level II earthquake motions generated by active inland faults are determined based on identification of active faults that threaten an area and assumptions about the source mechanism, through comprehensive examination of geological information on active faults, geodetic information on diastrophism, and seismological information on earthquake activity. To be able to do this, considerable effort must be put into establishing engineering methods.
(2) Since the Hyogoken-Nanbu earthquake, research in Japan on the above points has been advancing. However, the accuracy of methods for forecasting earthquake return periods and magnitudes, as well as for characterizing the motions of earthquakes caused by active inland faults, is still insufficient to establish a basis for earthquake-resistant design. Therefore, when earthquake motions cannot be specified directly using information on an active fault, strong motion records from near-field earthquakes, such as the Hyogoken-Nanbu earthquake, should be used to create standard Level II earthquake motions.
(3) It is thought that earthquake motions generated in the near field by a large interplate earthquake occurring near land have different characteristics from earthquake motions generated through the movement of an inland fault. Since there are no records of very strong earthquake motions of this type, there are a lot of unknowns about their characteristics. More research needs to be done on the very strong motions generated in the near field of interplate earthquakes.
Below is a discussion of how Level II earthquake motions are expressed.
(1) Level II earthquake motions are basically used for earthquake-resistant design based on damage control concepts. Therefore, the dynamic characteristics of earthquake motions should be expressed concisely, such as in the response spectrum or time history waveforms.
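To make the response-spectrum representation concrete, the sketch below computes an elastic (pseudo-)acceleration response spectrum from a ground-motion time history by integrating a damped single-degree-of-freedom oscillator over a range of natural periods with the Newmark average-acceleration method. It is an illustrative implementation only; the record `ag` and the time step `dt` are placeholders for an actual strong-motion recording.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Elastic pseudo-acceleration response spectrum of a ground-motion record.

    ag      : ground acceleration time history [m/s^2]
    dt      : sampling interval [s]
    periods : natural periods at which to evaluate the spectrum [s]
    zeta    : viscous damping ratio (0.05 is the usual reference value)
    """
    Sa = np.zeros(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T          # natural circular frequency [rad/s]
        u, v = 0.0, 0.0               # relative displacement and velocity
        a = -ag[0]                    # relative acceleration for u = v = 0
        umax = 0.0
        # Newmark average-acceleration method (gamma = 1/2, beta = 1/4),
        # unconditionally stable for linear systems
        k_eff = wn**2 + 2.0*zeta*wn*(2.0/dt) + 4.0/dt**2
        for ag_next in ag[1:]:
            rhs = (-ag_next
                   + (4.0/dt**2)*u + (4.0/dt)*v + a
                   + 2.0*zeta*wn*((2.0/dt)*u + v))
            u_new = rhs / k_eff
            v_new = (2.0/dt)*(u_new - u) - v
            a_new = (4.0/dt**2)*(u_new - u) - (4.0/dt)*v - a
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        Sa[i] = wn**2 * umax          # pseudo-acceleration [m/s^2]
    return Sa
```

Evaluating this for, say, `periods = np.linspace(0.05, 5.0, 100)` at 5% damping gives the kind of concise characterization of a Level II motion the text refers to.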
(2) Ground levels where earthquake motions are given
Effects of vertical motions: A lot of attention has been paid to the three-dimensional effects of the motions of the Hyogoken-Nanbu earthquake, particularly the vertical motions, on damage to and destruction of structures. Considerable effort has been made to clarify these effects. Thus far it has not been proven that the vertical motions were the primary cause of the destruction of major civil engineering structures. It is important to continue with detailed research on the effects of the three-dimensional characteristics of earthquake motions on the destruction of structures.
In this section, the expected aseismic performance of civil engineering structures against Level I and II earthquake motions is discussed, and design methods for achieving this performance are proposed. Civil engineering structures are of many different types, but they may be categorized as follows. 1) Above-ground structures such as bridges, tanks, dams, towers, etc.; 2) in-ground structures such as subways, buried pipelines, tunnels, etc.; and 3) various types of foundation such as piles, caissons, etc. and soil structures such as dikes, retaining walls, etc.
It is quite difficult to define a unified aseismic performance level for these different types of civil engineering structures. Hence, in this chapter, aseismic performance and design methods are proposed separately for each category.
(1) Earthquake resistance to Level I earthquakes
In principle, no damage should occur to any structure when earthquake motion of Level I occurs. Accordingly, the dynamic response during motion of this level should not exceed the elastic limit.
(2) Earthquake resistance to Level II earthquakes
Important structures and structures requiring immediate restoration in the event of an earthquake should, in principle, be designed to be relatively easily repairable, even if damage is suffered in the inelastic range. Accordingly, the maximum earthquake response of such structures must not exceed the allowable plastic deformation or the limit of ultimate strength. For other structures, complete collapse should not occur even if damage is beyond repair. Accordingly, deformation during an earthquake of this level should not exceed the ultimate deformation.
The degree of importance of structures can be determined based on the following factors:
(3) Important issues in the earthquake-resistant design of above-ground structures and related topics for research and development
In evaluating the dynamic response of a structure to Level I earthquakes, linear multi-mode response analysis using response spectra or time history earthquake motions is recommended. Further, an investigation of the three-dimensional effects, including vertical motion, should be carried out when necessary.
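For reference, when the peak modal responses R_i obtained from a response spectrum are combined, the square-root-of-sum-of-squares (SRSS) rule is commonly used for well-separated natural frequencies, and the complete quadratic combination (CQC) rule, with modal cross-correlation coefficients rho_ij, for closely spaced ones:

```latex
R_{\mathrm{SRSS}} = \sqrt{\sum_{i=1}^{n} R_i^{2}},
\qquad
R_{\mathrm{CQC}} = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} \rho_{ij}\, R_i R_j}
```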
In evaluating the dynamic response of a structure to Level II earthquakes, elasto-plastic time history response analysis is recommended. However, it is also acceptable to use practical and more convenient methods based on equivalent linearization analysis or design spectra corresponding to the allowable ductility factor. For structures with a low degree of static indeterminacy, a rigorous verification of the ability to carry sustained loads is required, especially in the case of a Level II earthquake. Accordingly, it is desirable to investigate the accuracy of various elasto-plastic analysis methods and compare them with test results. For any structures with a high degree of static indeterminacy, including steel and concrete structures, an ultimate deformation analysis that takes into account the damage process is recommended.
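One hedged illustration of "design spectra corresponding to the allowable ductility factor": in the classical Newmark-Hall approach (our example, not a method cited in the proposal itself), the elastic strength demand is divided by a reduction factor R that depends on the allowable ductility factor mu:

```latex
R = \mu \quad \text{(long-period range, equal-displacement rule)},
\qquad
R = \sqrt{2\mu - 1} \quad \text{(intermediate-period range, equal-energy rule)}
```

A structure permitted a ductility of mu = 4 could thus be designed for roughly 1/4 of the elastic force demand in the long-period range, but only about 0.38 of it in the intermediate range.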
In the design of most steel structures, the allowable stress method alone is used, and no investigation of load capacity or deformability is carried out. However, earthquake-resistant design should in the future include investigation of these characteristics even in the case of steel structures. In particular, it is necessary to promote research to increase the deformability of structures, such as investigations related to structural configuration and the limits of sectional stress and strain.
Since the earthquake response of short-period structures is largely determined by the effect of dynamic interactions of the foundation-ground system in the nonlinear range, research into design methods that take account of this should be promoted. It may prove possible to use a simplified procedure in which the effect of dynamic interactions is incorporated into seismic design by lengthening the natural period of the total structural system and increasing the damping coefficient.
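A minimal sketch of such a simplified procedure, using the classical flexible-base period expression of the Veletsos type (our illustration, not a formula given in the proposal): with fixed-base stiffness k and period T, effective height h, and foundation translational and rocking stiffnesses k_x and k_theta,

```latex
\tilde{T} = T \sqrt{1 + \frac{k}{k_x} + \frac{k\,h^{2}}{k_\theta}}
```

The lengthened period, together with an increased effective damping ratio, is then used in an otherwise ordinary fixed-base analysis.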
In order to enhance the earthquake resistance of structures, introduction of new technologies such as seismic isolation and active control is recommended. Seismic isolation increases the deformability and damping capacity of relatively short-period structures, while the use of active control incorporating energy absorbing mechanisms can increase the damping capacity of long-period structures.
The basis of earthquake-resistant design for underground structures is the stability and deformation behavior of the ground when subjected to earthquake inputs. Knowledge of three-dimensional displacement behavior, including depth-wise movements, is critical to the earthquake-resistant design of large tunnels, whether of shield or cut-and-cover type. Ground displacements along the structure axis are important in the case of extended structures of small cross-section, such as buried pipes. This means that the earthquake response of the near-surface ground should be thoroughly investigated. Since ground liquefaction and resulting ground displacement have a great influence on the earthquake resistance of underground structures, the stability of the ground under earthquake excitation should be studied in adequate detail.
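For extended buried structures such as pipelines, a commonly used first-order estimate (in the spirit of the seismic deformation method; the specific formulas are our illustration) relates the peak axial ground strain and curvature imposed by a traveling wave to the peak ground velocity V_max, peak acceleration A_max, and apparent propagation velocity c:

```latex
\varepsilon_{\max} = \frac{V_{\max}}{c},
\qquad
\kappa_{\max} = \frac{A_{\max}}{c^{2}}
```

A flexible pipe that conforms to the ground experiences at most roughly this strain, which is why ground displacement along the structure axis, rather than inertia, tends to govern the design.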
(1) Retained earthquake resistance of structures
The function of structures should be retained after a Level I earthquake. In the case of a Level II earthquake, the damage should be limited such that there is no fatal damage to the structure's functions and functions can be restored within a short period.
(2) Use of flexible structures
To ensure that structures retain earthquake resistance after Level II earthquakes, it is highly recommended that structures and materials with good flexibility be used. Further, total collapse of a structure due to the collapse of a single member should be prevented by designing structural details so as to ensure brittle failure does not occur.
(3) Plans for lifeline systems
In designing trunk lines for lifeline systems such as water, sewerage, electricity, gas, and telecommunications, designs best able to maintain functionality after a Level II earthquake should be chosen, taking into account the topography, ground conditions, and the city layout in the vicinity. If this is difficult for economic reasons or because of ground conditions, continued functionality (or rapid restoration) after a disaster should be ensured by selecting the most appropriate route, adopting a multi-route system, using a block system, or implementing some alternative measure.
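Illustrative arithmetic for the multi-route recommendation: if two independent routes each fail with probability p = 0.1 in a given earthquake, the probability that both fail is p² = 0.01, a tenfold improvement over a single route.

```latex
P_{\text{system}} = p_1\, p_2 = 0.1 \times 0.1 = 0.01 \ll 0.1 \quad \text{(independence assumed)}
```

The caveat, which matters for seismic design, is that a single strong earthquake is a common cause that can make route failures far from independent, so physical separation of routes is as important as their number.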
(4) Underground structures straddling faults
When the location of an active fault is well identified, such measures as increasing the flexibility of structures, duplicating lines, and isolating line systems from the casing structure may be considered. However, if such measures are technically difficult to implement, operational measures including the provision of alternative systems should be considered.
(1) Seismic stability of foundation structures
In the case of a Level I earthquake, the objective of earthquake-resistant design for a foundation structure is to maintain the original engineering function of the superstructure which the foundation supports. One principle of design is, wherever possible, to prevent soil liquefaction in ground with a high liquefaction potential by implementing suitable ground improvements.
In cases where it is judged that ground improvements would be difficult, however, the function of the superstructure should be maintained by proper design and/or reinforcement of the foundation structure and/or the superstructure itself.
In the case of Level II earthquakes, the objective of earthquake-resistant design for a foundation structure is to ensure that no serious damage occurs to the superstructure supported by the foundation. Where it would be difficult to implement ground improvements, the foundation structure should be reinforced or the whole structural system should be re-evaluated, or both, to minimize displacement of the foundation due to seismic response and lateral ground displacement, thus preventing serious damage to the superstructure.
(2) Seismic stability of quay walls, dikes, and embankments
It may not be economical to require seismic stability along the entire length of this type of structure, since quay walls, dikes, embankments, retaining walls, and similar structures are long, continuous structures which can be easily repaired when slight damage occurs. It is recommended that segments of relatively high importance be singled out and designed for high seismic stability.
For Level I earthquakes, the original functions of relatively important sections of quay walls, dikes, retaining walls, and embankments should not deteriorate, maintaining the original design requirement after the earthquake. Slight damage to other less-important sections is allowable unless it would have a detrimental effect on adjacent structures. The objective of earthquake-resistant design is, however, to ensure that damage can be repaired within a short period and the whole system returned to functionality.
For Level II earthquakes, the objective of earthquake-resistant design in the case of important sections of quay walls, dikes, retaining walls, and embankments is that the damage should not seriously affect the structures they support and adjacent facilities, even if some degree of damage occurs. In the case of important structures which form an essential part of an emergency transportation route, the aim is to ensure that original functions are maintained. For ordinary sections, it is necessary to ensure that, even if damaged, there are no detrimental effects on adjacent areas, such as by secondary damage.
(3) Important issues in the earthquake-resistant design of ground improvements, foundations, quay walls, dikes, retaining walls, and embankments, and related research and development topics
If a soil mass that includes a large amount of gravel also has some sandy matrix, it may liquefy depending on its density, fine-material content, hydraulic conductivity, etc. Accordingly, present design standards and codes should be re-evaluated and, if necessary, revised to include evaluation of the possibility of liquefaction for Holocene soil deposits and reclaimed fill with a gravel content.
Recently, detailed evaluations of the liquefaction potential of relatively dense sand have been described. These recent investigations revealed that, at blow counts above about twenty as measured by standard penetration tests, resistance to liquefaction increases rapidly with rising blow count. It was also revealed that the amplitude of cyclic shear stress required to cause soil liquefaction rapidly increases as the number of loading cycles involved decreases. This recent information suggests that the present design standards and codes should be re-evaluated and, if necessary, revised to properly take into account the liquefaction potential of dense sand, particularly in the case of the high stress amplitude and relatively small number of loading cycles in a near-field earthquake.
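As a concrete, hedged illustration of how SPT blow counts enter such an evaluation, the sketch below follows the widely used simplified (Seed-Idriss type) procedure: a cyclic stress ratio (CSR) imposed by the earthquake is compared with a cyclic resistance ratio (CRR) estimated from the corrected blow count N1(60) using the NCEER curve fit. All numerical inputs are placeholders, and real evaluations include correction factors (magnitude scaling, fines content, overburden, etc.) omitted here.

```python
def liquefaction_fs(N1_60, a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Simplified factor of safety against liquefaction (illustrative only).

    N1_60       : corrected SPT blow count (clean-sand equivalent)
    a_max_g     : peak ground surface acceleration as a fraction of g
    sigma_v     : total vertical stress at the depth considered [kPa]
    sigma_v_eff : effective vertical stress at the same depth [kPa]
    depth_m     : depth below ground surface [m] (formula valid to ~9.15 m)
    """
    # Stress reduction coefficient r_d (Liao & Whitman approximation, z <= 9.15 m)
    rd = 1.0 - 0.00765 * depth_m
    # Cyclic stress ratio induced by the earthquake (Seed & Idriss, 1971)
    csr = 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd
    # Cyclic resistance ratio for M7.5, NCEER (1997/2001) fit to the SPT base
    # curve, valid for N1_60 < 30; denser soils are treated as non-liquefiable
    if N1_60 >= 30:
        return float('inf')
    crr = (1.0 / (34.0 - N1_60)
           + N1_60 / 135.0
           + 50.0 / (10.0 * N1_60 + 45.0) ** 2
           - 1.0 / 200.0)
    return crr / csr

# Example: loose sand (N1_60 = 12) at 6 m depth under a strong near-field motion
fs = liquefaction_fs(N1_60=12, a_max_g=0.4,
                     sigma_v=110.0, sigma_v_eff=75.0, depth_m=6.0)
print(f"Factor of safety against liquefaction: {fs:.2f}")  # < 1 suggests liquefaction
```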
It is also necessary to improve understanding of the mechanism of liquefaction-related large ground displacement and to develop methods of predicting it.
The behavior of piles, caissons, buried structures, and other similar structures in a liquefied soil mass undergoing lateral displacement is poorly understood. It is highly important to foster research into design methods for foundations and buried structures exposed to this situation.
The seismic behavior of quay walls, dikes, embankments, and retaining walls is also poorly understood. Accordingly, there is a great need to foster studies on the development of methods for evaluating the settlement and displacement of ground, and also the dynamic earth pressure caused by an earthquake. Methods are also needed for increasing the seismic stability of ground. This requires relevant field observations, model tests, etc.
(1) Basic policies on aseismic diagnosis
Earthquake-resistance diagnosis of existing civil engineering structures is conducted in two stages: primary diagnosis using approximate methods and secondary diagnosis using detailed methods.
Primary diagnosis should be based on the damage to civil engineering structures caused by the Hyogoken-Nanbu earthquake. After examining ground conditions together with the age, design standards, and outline structural characteristics of each structure, structures requiring aseismic reinforcement and those requiring a detailed aseismic-capacity examination by secondary diagnosis are selected. In primary diagnosis, the following five factors are taken into consideration: 1. the effect on human life when a structure is damaged; 2. the effect on evacuation, rescue, emergency medical services, and activities for preventing a secondary disaster; 3. the effect on the provision of basic requirements for daily life and economic activities in the area; 4. the possibility of substituting the system function by providing another structure; and 5. changes in design conditions after construction.
Secondary diagnosis, which is based on drawings, specifications, and ground conditions, targets structures judged in primary diagnosis to require a detailed examination of aseismic capacity. Secondary diagnosis is used to judge whether a structure has the aseismic capacity required to withstand Level I and Level II earthquake motions, and to select structures for reinforcement. The bottom line in judging the aseismic capacity of a structure is that it must not collapse even when damaged beyond repair. In secondary diagnosis, on-site measurements, testing, and surveys of ground conditions should be conducted, and the capacity of the structure to withstand the earthquake motions should be evaluated through redesign and/or numerical analysis.
(2) Establishing data bases for aseismic diagnosis
For the smooth implementation of primary diagnosis, it is urgent that databases (covering design standards, the age of each structure, and similar basic data) for existing civil engineering structures be established.
If the structure is old and adequate data on it cannot be obtained, primary diagnosis should be done in a strict manner and the site surveys and tests required for secondary diagnosis should be conducted.
(3) Aseismic capacity of an overall structure
In selecting parts of a structure for aseismic reinforcement, it is necessary to thoroughly take into consideration the effects of reinforcement on the aseismic capacity of the overall structure.
(4) Earthquake disaster prevention as a system
In selecting structures for aseismic reinforcement, it is necessary to seek an effective improvement in the earthquake disaster prevention capacity of the overall system that those structures constitute.
(1) Basic policies of aseismic reinforcement
In aseismic reinforcement of an existing civil engineering structure, as with a new structure, both Level I and Level II earthquake motions must be taken into consideration. The in-service period of the structure should be considered the same as that of a new structure.
The target aseismic capacity of a structure for reinforcement should also be the same as that of a new structure. In short, as with a new structure, the importance of the structure and the risk of Level I or Level II earthquake motions are taken into consideration when the target aseismic capacity of the structure is established.
With some existing civil engineering structures, increasing the aseismic capacity to the level of a new structure is problematic because of difficulties with construction methods or because of financial constraints. In such cases, the importance of the structure should be carefully examined, and alternative measures, such as the establishment of a quick restoration system after an earthquake, should be adopted. In addition, issues pertaining to demolition and reconstruction should also be examined.
(2) Determining a priority for reinforcement
Determination of which structures have priority for aseismic reinforcement is based on the importance of the structure, as well as the risk of an earthquake in the area. It is also necessary to examine economic factors and the potential effects of reinforcement on the earthquake disaster prevention capacity of the overall system that the structures constitute.
Clarification of the reasoning behind the process for determining which structures have priority for aseismic reinforcement is required.
(3) Aseismic reinforcement methods
Feasibility, safety, economic factors, and the effects of aseismic reinforcement on the surrounding environment must all be carefully examined when selecting an aseismic reinforcement method. Therefore, new construction methods and new materials appropriate for the structural characteristics of the structure and the environment of the site should be developed and applied.
(4) Evaluating the aseismic capacity of a reinforced structure
The aseismic capacity of a reinforced structure is evaluated with quantitative methods. This requires verification of the validity of the evaluation methods by, if necessary, conducting tests with full-size models, numerical analysis, and earthquake observations of reinforced structures. A thorough verification of evaluation methods for determining aseismic capacity is needed when a new construction method or new materials are used.
It is vital to evaluate not only the aseismic capacity of the reinforced parts of a structure but also that of the overall structure, and to assess its safety against other loads such as winds and floods.
Further, it is necessary to evaluate how the earthquake disaster prevention capacity of the system consisting of reinforced structures is improved.
(5) Maintenance and management, and repair
As with new structures, reinforced structures require thorough periodic inspections. It may be necessary to conduct earthquake observations and various measurements in order to check whether the target aseismic capacity is being maintained.
(1) Development of aseismic diagnosis techniques based on structural characteristics
There are many different types of civil engineering structure, and the aseismic diagnosis method used for a particular structure must be appropriate for its structural characteristics. It is necessary to establish through research and development rational and appropriate aseismic diagnosis methods for each type of civil engineering structure.
(2) Development of aseismic reinforcement techniques
A large number of civil engineering structures require aseismic reinforcement. In many cases, aseismic reinforcement work must be done while a structure is in service, which necessitates strict limitations on work periods and workspaces as well as restrictions related to the surrounding environment, such as vibration and noise. It is therefore urgent to develop aseismic reinforcement techniques that satisfy these conditions based on the characteristics of each type of civil engineering structure.
(3) Construction of databases of design documents
The construction of databases of design documents is essential for conducting appropriate and reasonable aseismic diagnosis and reinforcement, as well as for restoring earthquake-damaged structures. Each organization responsible for a civil engineering structure should put considerable effort into the research and development needed to construct such databases.
(1) Need for a seismic hazard assessment system
In Japan, open spaces formed by streets, roads, and parks are lacking in most urban areas, a result of inadequate effort to plan public facilities. Further, certain areas are densely packed with houses on small lots that do not meet present building and earthquake resistance codes. Such urban communities are less resilient to disasters as well as less comfortable to live in than those in other advanced countries.
Fundamental improvement of the urban environment is one of the most serious issues facing Japan. Since such improvements cannot be completed within just a few years, efforts to attain them must be initiated as early as possible. A "regional seismic hazard assessment system" is one key element to be taken into consideration in such efforts. This is explained below.
Through this process of analysis, evaluation, and publication, people in the community will gain an understanding of the present exposure to hazard. This will lead to a re-evaluation of land prices, which will in turn stimulate spontaneous improvements to the community environment.
In response to demand from the community, local governments are expected to encourage improvements to the environment through systematic assistance, including expert planning and financial support. Governments are also expected to facilitate the coordination of these improvements with normal urban-redevelopment and land-readjustment projects.
(2) Review and revision of urban/regional plans and infrastructure planning guidelines
An important element in urban/regional plans has always been safety in times of disaster. However, plans have not always been well coordinated with urban/regional disaster plans.
The urban infrastructure usually consists of a hierarchy of systems, each with different size and coverage. For example, the street system consists of arterial roads, collector-distributor streets, and local streets. In the case of Japan, however, this infrastructural hierarchy has not been well established, and it is lacking in both quality and quantity. As has been discussed for years, it is necessary to improve and extend Japanese planning standards.
Further, not all the minimum requirements for public facilities (such as evacuation/rescue routes and open spaces useful in case of emergency) have yet been established. To increase the seismic safety of society, an urgent review and revision of planning standards for such facilities is needed. These standards will also be useful in the assessment system described above.
Delays in rescue operations and fire-fighting aggravated the Hyogoken-Nanbu earthquake disaster, and revealed the inadequacy of current emergency management systems in Japan. Measures for disaster mitigation include several that can be implemented both pre- and post-disaster. Among them, the following require urgent consideration:
(1) Integrated use of various disaster information systems: Various disaster information systems are being constructed by both the public and private sector. However, none are intended or designed to be linked to each other. It is desirable to develop a technology for integrating these independent systems, and to carry out repeated drills prior to a disaster so as to master the integration functions.
(2) Preparing disaster management strategies: Disaster management involves serious decision-making issues such as whether evacuation vehicles or rescue vehicles should have priority, and whether use of water-dropping helicopters is appropriate in urban fire-fighting. Certain strategies for emergency management may be quite different from those used in normal situations, and may at first be considered unacceptable to the community. Through in-depth discussions prior to a real disaster, mitigation strategies that have community-wide consensus should be prepared for various disaster situations.
(3) Drill improvement: A large-scale earthquake disaster is likely to require efforts beyond the capacity of public emergency management agencies, so local communities should be asked to organize effective disaster drills that go beyond the conventional focus on evacuation and early fire fighting. These drills should be more comprehensive, encouraging people to think about what they themselves can do in such a disaster. Drill methods should be changed from the prepared-scenario type to an improvisational type aimed at improving adaptability in an emergency.
(4) Cultivation of disaster managers: Since large-scale disasters are rare, the lessons of past disasters tend to be lost without experts such as disaster managers, and disaster preparedness programs tend to lack consistency and continuity. However, varied duties and Japan's tradition of periodic staff transfers tend to inhibit the cultivation of such trained experts. Disaster managers, including high-level decision makers, need to be cultivated to facilitate the early establishment of efficient disaster management systems in Japan.
The earthquake resistance of the infrastructure, schedules for seismic reinforcement of existing structures, and plans for post-disaster reconstruction are closely related to cost and the cost burden. Besides cost-benefit evaluations usually performed before determining the cost burden, a number of other cost-related issues arise, as follows.
(1) As with the Hyogoken-Nanbu earthquake, the cost burden sometimes exceeds the ability of the affected community to pay. Moreover, the probability of such a disaster occurring in any particular community is quite low.
(2) Disaster-related damage extends not only to the economic sphere, but also to human life and even mental health.
(3) Increased investment to secure a safe community may result in a lower budget for new projects. Hence, the trade-off between the two needs to be evaluated from a socio-economic point of view.
A quantitative evaluation of the hazard mitigation achieved by disaster-related investment is important; it must be done together with the assessment mentioned in 4.1. The evaluated socio-economic effects will form the basis of planning standards for disaster preparedness.
Various legislative and financial relief measures were adopted after the Hyogoken-Nanbu earthquake without in-depth discussion. Some of these gave inconsistent relief to the various types of facilities, and left much room for improvement as regards rule standardization. Rules for financing reinforcement of existing facilities, post-disaster recovery, and reconstruction should be established, especially as regards placing an appropriate cost burden on the various regions and generations. These should take into account some of the important factors listed below.
"dump": "CC-MAIN-2018-30",
"url": "http://www.jsce.or.jp/committee/earth/propo-e.html",
"date": "2018-07-23T15:00:42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596542.97/warc/CC-MAIN-20180723145409-20180723165409-00549.warc.gz",
"language": "en",
"language_score": 0.9421940445899963,
"token_count": 6562,
"score": 3.328125,
"int_score": 3
} |
Questions About Immunity
Your immune system protects you from infections. COVID-19 vaccination will help protect you by creating an antibody response. After receiving the vaccine, if you are exposed to COVID-19, your body is ready to fight off the virus and reduce or eliminate illness.
How long will the vaccine protect those who receive it? Pfizer reports that its vaccine is ninety percent effective. Moderna reports that its vaccine is ninety-four percent effective. While the studies haven't indicated how long protection will last, the FDA expects it to be effective for several months and possibly a year. Vaccine experts are continuing to study the virus and vaccine to learn more.
Are you immune to COVID-19 after recovering from it? The extent to which antibodies that develop in response to SARS-CoV-2 infection are protective is still under study. If these antibodies are protective, it’s not known what antibody levels are needed to protect against reinfection. Therefore, even those who previously had COVID-19 can and should receive the COVID-19 vaccine.
What is herd immunity? Herd immunity is a term used to describe when enough people have protection—either from previous infection or vaccination—that it is unlikely a virus or bacteria can spread and cause disease. As a result, everyone within the community is protected even if some people don't have any protection themselves. The percentage of people who need to have protection in order to achieve herd immunity varies by disease, and experts do not yet know what percentage would be needed for COVID-19. Learn more about how herd immunity works.
Healthcare workers are prioritized by the CDC for COVID-19 vaccine because their work places them at risk, but just as importantly, vaccinating healthcare workers protects vulnerable patients. Accepting the vaccine contributes to herd immunity (a high level of immunity in the community) and helps end the pandemic. By taking the COVID-19 vaccine, we can also set an example for others, because high levels of acceptance are so important for everyone's safety.
"dump": "CC-MAIN-2021-17",
"url": "https://yalehealth.yale.edu/yale-covid-19-vaccine-program/questions-about-immunity",
"date": "2021-04-23T16:39:43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039596883.98/warc/CC-MAIN-20210423161713-20210423191713-00101.warc.gz",
"language": "en",
"language_score": 0.9573947191238403,
"token_count": 434,
"score": 3.921875,
"int_score": 4
} |
The Kem C. Gardner Policy Institute coordinates and facilitates focus groups to collect in-depth descriptive information for governments and business. Focus groups can be used to gain greater understanding about why and how people hold certain opinions about a topic, program, or organization. The group dynamic of focus group discussions can help individuals explore the nuances of a topic in a way that differs significantly from the one-on-one nature of survey research and in-depth interviews.
Focus groups can be a successful tool for a variety of projects, such as determining program needs, evaluating program outcomes, assessing customer satisfaction, exploring areas of concern within an organization, brainstorming potential policy reforms, and developing better questionnaires for survey research.
Focus groups are used to better understand:
- The feelings and concerns of employees within an organization
- The experiences and backgrounds of individuals with relatives who have cancer
- The needs, questions, and suggestions of individuals regarding a newly established organization in a community
"dump": "CC-MAIN-2018-05",
"url": "http://gardner.utah.edu/survey-research/focus-groups/",
"date": "2018-01-17T19:57:29",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886964.22/warc/CC-MAIN-20180117193009-20180117213009-00660.warc.gz",
"language": "en",
"language_score": 0.9081405401229858,
"token_count": 192,
"score": 3.0625,
"int_score": 3
} |
The world will be transformed. By 2050, we will be driving electric cars and flying in aircraft running on synthetic fuels produced through solar and wind energy. New energy-efficient technologies, most likely harnessing artificial intelligence, will dominate nearly all human activities from farming to heavy industry. The fossil fuel industry will be in the final stages of a terminal decline. Nuclear fusion and other new energy sources may have become widespread. Perhaps our planet will even be orbited by massive solar arrays capturing cosmic energy from sunlight and generating seemingly endless energy for all our needs.
That is one possible future for humanity. It’s an optimistic view of how radical changes to energy production might help us slow or avoid the worst outcomes of global warming. In a report from 1965, scientists from the US government warned that our ongoing use of fossil fuels would cause global warming with potentially disastrous consequences for Earth’s climate. The report, one of the first government-produced documents to predict a major crisis caused by humanity’s large-scale activities, noted that the likely consequences would include higher global temperatures, the melting of the ice caps and rising sea levels. ‘Through his worldwide industrial civilisation,’ the report concluded, ‘Man is unwittingly conducting a vast geophysical experiment’ – an experiment with a highly uncertain outcome, but clear and important risks for life on Earth.
Since then, we’ve dithered and doubted and argued about what to do, but still have not managed to take serious action to reduce greenhouse gas emissions, which continue to rise. Governments around the planet have promised to phase out emissions in the coming decades and transition to ‘green energy’. But global temperatures may be rising faster than we expected: some climate scientists worry that rapid rises could create new problems and positive feedback loops that may accelerate climate destabilisation and make parts of the world uninhabitable long before a hoped-for transition is possible.
Despite this bleak vision of the future, there are reasons for optimists to hope due to progress on cleaner sources of renewable energy, especially solar power. Around 2010, solar energy generation accounted for less than 1 per cent of the electricity generated by humanity. But experts believe that, by 2027, due to falling costs, better technology and exponential growth in new installations, solar power will become the largest global energy source for producing electricity. If progress on renewables continues, we might find a way to resolve the warming problem linked to greenhouse gas emissions. By 2050, large-scale societal and ecological changes might have helped us avoid the worst consequences of our extensive use of fossil fuels.
It’s a momentous challenge. And it won’t be easy. But this story of transformation only hints at the true depth of the future problems humanity will confront in managing our energy use and its influence over our climate.
As scientists are gradually learning, even if we solve the immediate warming problem linked to the greenhouse effect, there’s another warming problem steadily growing beneath it. Let’s call it the ‘deep warming’ problem. This deeper problem also raises Earth’s surface temperature but, unlike global warming, it has nothing to do with greenhouse gases and our use of fossil fuels. It stems directly from our use of energy in all forms and our tendency to use more energy over time – a problem created by the inevitable waste heat that is generated whenever we use energy to do something. Yes, the world may well be transformed by 2050. Carbon dioxide levels may stabilise or fall thanks to advanced AI-assisted technologies that run on energy harvested from the sun and wind. And the fossil fuel industry may be taking its last breaths. But we will still face a deeper problem. That’s because ‘deep warming’ is not created by the release of greenhouse gases into the atmosphere. It’s a problem built into our relationship with energy itself.
Finding new ways to harness more energy has been a constant theme of human development. The evolution of humanity – from early modes of hunter-gathering to farming and industry – has involved large systematic increases in our per-capita energy use. The British historian and archaeologist Ian Morris estimates, in his book Foragers, Farmers, and Fossil Fuels: How Human Values Evolve (2015), that early human hunter-gatherers, living more than 10,000 years ago, ‘captured’ around 5,000 kcal per person per day by consuming food, burning fuel, making clothing, building shelter, or through other activities. Later, after we turned to farming and enlisted the energies of domesticated animals, we were able to harness as much as 30,000 kcal per day. In the late 17th century, the exploitation of coal and steam power marked another leap: by 1970, the use of fossil fuels allowed humans to consume some 230,000 kcal per person per day. (When we think about humanity writ large as ‘humans’, it’s important to acknowledge that the average person in the wealthiest nations consumes up to 100 times more energy than the average person in the poorest nations.) As the global population has risen and people have invented new energy-dependent technologies, our global energy use has continued to climb.
In many respects, this is great. We can now do more with less effort and achieve things that were unimaginable to the 17th-century inventors of steam engines, let alone to our hominin ancestors. We’ve made powerful mining machines, superfast trains, lasers for use in telecommunications and brain-imaging equipment. But these creations, while helping us, are also subtly heating the planet.
All the energy we humans use – to heat our homes, run our factories, propel our automobiles and aircraft, or to run our electronics – eventually ends up as heat in the environment. In the shorter term, most of the energy we use flows directly into the environment. It gets there through hot exhaust gases, friction between tires and roads, the noises generated by powerful engines, which spread out, dissipate, and eventually end up as heat. However, a small portion of the energy we use gets stored in physical changes, such as in new steel, plastic or concrete. It’s stored in our cities and technologies. In the longer term, as these materials break down, the energy stored inside also finds its way into the environment as heat. This is a direct consequence of the well-tested principles of thermodynamics.
In the early decades of the 21st century, this heat created by simply using energy, known as ‘waste heat’, is not so serious. It’s equivalent to roughly 2 per cent of the planetary heating imbalance caused by greenhouse gases – for now. But, with the passing of time, the problem is likely to get much more serious. That’s because humans have a historical tendency to consistently discover and produce things, creating entirely new technologies and industries in the process: domesticated animals for farming; railways and automobiles; global air travel and shipping; personal computers, the internet and mobile phones. The result of such activities is that we end up using more and more energy, despite improved energy efficiency in nearly every area of technology.
During the past two centuries at least (and likely for much longer), our yearly energy use has doubled roughly every 30 to 50 years. Our energy use seems to be growing exponentially, a trend that shows every sign of continuing. We keep finding new things to do and almost everything we invent requires more and more energy: consider the enormous energy demands of cryptocurrency mining or the accelerating energy requirements of AI.
If this historical trend continues, scientists estimate that in roughly 150-200 years waste heat will pose a problem every bit as serious as the current problem of global warming from greenhouse gases. However, deep heating will be more pernicious, as we won't be able to avoid it by merely shifting from one kind of energy to another. A profound problem will loom before us: can we set strict limits on all the energy we use? Can we rein in the seemingly inexorable expansion of our activities to avoid destroying our own environment?
Deep warming is a problem hiding beneath global warming, but one that will become prominent if and when we manage to solve the more pressing issue of greenhouse gases. It remains just out of sight, which might explain why scientists only became concerned about the ‘waste heat’ problem around 15 years ago.
One of the first people to describe the problem is the Harvard astrophysicist Eric Chaisson, who discussed the issue of waste heat in a paper titled ‘Long-Term Global Heating from Energy Usage’ (2008). He concluded that our technological society may be facing a fundamental limit to growth due to ‘unavoidable global heating … dictated solely by the second law of thermodynamics, a biogeophysical effect often ignored when estimating future planetary warming scenarios’. When I emailed Chaisson to learn more, he told me the history of his thinking on the problem:
It was on a night flight, Paris-Boston [circa] 2006, after a UNESCO meeting on the environment when it dawned on me that the IPCC were overlooking something. While others on the plane slept, I crunched some numbers literally on the back of an envelope … and then hoped I was wrong, that is, hoped that I was incorrect in thinking that the very act of using energy heats the air, however slightly now.
Chaisson drafted the idea up as a paper and sent it to an academic journal. Two anonymous reviewers were eager for it to be published. ‘A third tried his damnedest to kill it,’ Chaisson said, the reviewer claiming the findings were ‘irrelevant and distracting’. After it was finally published, the paper got some traction when it was covered by a journalist and ran as a feature story on the front page of The Boston Globe. The numbers Chaisson crunched, predictions of our mounting waste heat, were even run on a supercomputer at the US National Center for Atmospheric Research, by Mark Flanner, a professor of earth system science. Flanner, Chaisson suspected at the time, was likely ‘out to prove it wrong’. But, ‘after his machine crunched for many hours’, he saw the same results that Chaisson had written on the back of an envelope that night in the plane.
Around the same time, also in 2008, two engineers, Nick Cowern and Chihak Ahn, wrote a research paper entirely independent of Chaisson’s work, but with similar conclusions. This was how I first came across the problem. Cowern and Ahn’s study estimated the total amount of waste heat we’re currently releasing to the environment, and found that it is, right now, quite small. But, like Chaisson, they acknowledged that the problem would eventually become serious unless steps were taken to avoid it.
That’s some of the early history of thinking in this area. But these two papers, and a few other analyses since, point to the same unsettling conclusion: what I am calling ‘deep warming’ will be a big problem for humanity at some point in the not-too-distant future. The precise date is far from certain. It might be 150 years, or 400, or 800, but it’s in the relatively near future, not the distant future of, say, thousands or millions of years. This is our future.
The transformation of energy into heat is among the most ubiquitous processes of physics. As cars drive down roads, trains roar along railways, planes cross the skies and industrial plants turn raw materials into refined products, energy gets turned into heat, which is the scientific word for energy stored in the disorganised motions of molecules at the microscopic level. As a plane flies from Paris to Boston, it burns fuel and thrusts hot gases into the air, generates lots of sound and stirs up contrails. These swirls of air give rise to swirls on smaller scales which in turn make smaller ones until the energy ultimately ends up lost in heat – the air is a little warmer than before, the molecules making it up moving about a little more vigorously. A similar process takes place when energy is used by the tiny electrical currents inside the microchips of computers, silently carrying out computations. Energy used always ends up as heat. Decades ago, research by the IBM physicist Rolf Landauer showed that a computation involving even a single computing bit will release a certain minimum amount of heat to the environment.
How this happens is described by the laws of thermodynamics, which were formulated in the 19th century by scientists including Sadi Carnot in France and Rudolf Clausius in Germany. Two key 'laws' summarise its main principles.
The first law of thermodynamics simply states that the total quantity of energy never changes but is conserved. Energy, in other words, never disappears, but only changes form. The energy initially stored in an aircraft’s fuel, for example, can be changed into the energetic motion of the plane. Turn on an electric heater, and energy initially held in electric currents gets turned into heat, which spreads into the air, walls and fabric of your house. The total energy remains the same, but it markedly changes form.
We’re generating waste heat all the time with everything we do
The second law of thermodynamics, equally important, is more subtle and states that, in natural processes, the transformation of energy always moves from more organised and useful forms to less organised and less useful forms. For an aircraft, the energy initially concentrated in jet fuel ends up dissipated in stirred-up winds, sounds and heat spread over vast areas of the atmosphere in a largely invisible way. It’s the same with the electric heater: the organised useful energy in the electric currents gets dissipated and spread into the low-grade warmth of the walls, then leaks into the outside air. Although the amount of energy remains the same, it gradually turns into less organised, less usable forms. The end point of the energy process produces waste heat. And we’re generating it all the time with everything we do.
Data on world energy consumption shows that, collectively, all humans on Earth are currently using about 170,000 terawatt-hours (TWh) of energy each year, which is a lot of energy in absolute terms – a terawatt-hour is the total energy consumed in one hour by any process using energy at a rate of 1 trillion watts. This huge number isn't surprising, as it represents all the energy being used every day by the billions of cars and homes around the world, as well as by industry, farming, construction, air traffic and so on. But, in the early 21st century, the warming from this energy is still much less than the planetary heating due to greenhouse gases.
Concentrations of greenhouse gases such as CO2 and methane are quite small, and only make a fractional difference to how much of the Sun’s energy gets trapped in the atmosphere, rather than making it back out to space. Even so, this fractional difference has a huge effect because the stream of energy arriving from the Sun to Earth is so large. Current estimates of this greenhouse energy imbalance come to around 0.87 W per square meter, which translates into a total energy figure about 50 times larger than our waste heat. That’s reassuring. But as Cowern and Ahn wrote in their 2008 paper, things aren’t likely to stay this way over time because our energy usage keeps rising. Unless, that is, we can find some radical way to break the trend of using ever more energy.
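The back-of-the-envelope arithmetic behind these figures is easy to reproduce. The sketch below uses only round numbers quoted above (170,000 TWh per year, a 0.87 W/m² greenhouse imbalance, a 30-50-year doubling time) plus Earth's mean radius as an added assumption; the exact ratio between waste heat and greenhouse forcing depends on which energy statistics are counted, so treat the outputs as order-of-magnitude estimates rather than precise predictions.

```python
import math

# Round figures quoted in the text (assumptions, not precise data)
ANNUAL_ENERGY_TWH = 170_000      # global human energy use per year
GREENHOUSE_FLUX = 0.87           # current greenhouse imbalance, W/m^2
DOUBLING_TIMES = (30, 50)        # years per doubling of energy use

# Earth's surface area from its mean radius (~6,371 km)
earth_area_m2 = 4 * math.pi * (6.371e6) ** 2             # ~5.1e14 m^2

# Convert annual energy use to a continuous average power, then to a flux
avg_power_w = ANNUAL_ENERGY_TWH * 1e12 / (365.25 * 24)   # ~1.9e13 W
waste_flux = avg_power_w / earth_area_m2                 # ~0.04 W/m^2

# Doublings needed for waste heat to match today's greenhouse forcing
doublings = math.log2(GREENHOUSE_FLUX / waste_flux)      # ~4.5

print(f"current waste-heat flux: {waste_flux:.3f} W/m^2")
for t in DOUBLING_TIMES:
    print(f"parity with greenhouse forcing in ~{doublings * t:.0f} years "
          f"(doubling every {t} years)")
```

With these inputs the waste-heat flux comes out near 0.04 W/m², and parity arrives after roughly 4.5 doublings – about 135 to 225 years – which brackets the 150-200-year estimate mentioned above.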
One common objection to the idea of the deep warming is to claim that the problem won’t really arise. ‘Don’t worry,’ someone might say, ‘with efficient technology, we’re going to find ways to stop using more energy; though we’ll end up doing more things in the future, we’ll use less energy.’ This may sound plausible at first, because we are indeed getting more efficient at using energy in most areas of technology. Our cars, appliances and laptops are all doing more with less energy. If efficiency keeps improving, perhaps we can learn to run these things with almost no energy at all? Not likely, because there are limits to energy efficiency.
Over the past few decades, the efficiency of heating in homes – including oil and gas furnaces, and boilers used to heat water – has increased from less than 50 per cent to well above 90 per cent of what is theoretically possible. That’s good news, but there’s not much more efficiency to be realised in basic heating. The efficiency of lighting has also vastly improved, with modern LED lighting turning something like 70 per cent of the applied electrical energy into light. We will gain some efficiencies as older lighting gets completely replaced by LEDs, but there’s not a lot of room left for future efficiency improvements. Similar efficiency limits arise in the growing or cooking of food; in the manufacturing of cars, bikes and electronic devices; in transportation, as we’re taken from place to place; in the running of search engines, translation software, GPT-4 or other large-language models.
Even if we made significant improvements in the efficiencies of these technologies, we will only have bought a little time. These changes won’t delay by much the date when deep warming becomes a problem we must reckon with.
As a thought experiment, suppose we could immediately improve the energy efficiency of everything we do by a factor of 10 – a fantastically optimistic proposal. That is, imagine the energy output of humans on Earth has been reduced 10 times, from 170,000 TWh to 17,000 TWh. If our energy use keeps expanding, doubling every 30-50 years or so (as it has for centuries), then a 10-fold increase in waste heat will happen in just over three doubling times, which is about 130 years: 17,000 TWh doubles to 34,000 TWh, which doubles to 68,000 TWh, which doubles to 136,000 TWh, and so on. All those improvements in energy efficiency would quickly evaporate. The date when deep warming hits would recede by 130 years or so, but not much more. Optimising efficiencies is just a temporary reprieve, not a radical change in our human future.
Improvements in energy efficiency can also have an inverse effect on our overall energy use. It’s easy to think that if we make a technology more efficient, we’ll then use less energy through the technology. But economists are deeply aware of a paradoxical effect known as ‘rebound’, whereby improved energy efficiency, by making the use of a technology cheaper, actually leads to more widespread use of that technology – and more energy use too. The classic example, as noted by the British economist William Stanley Jevons in his book The Coal Question (1865), is the invention of the steam engine. This new technology could extract energy from burning coal more efficiently, but it also made possible so many new applications that the use of coal increased. A recent study by economists suggests that, across the economy, such rebound effects might easily swallow at least 50 per cent of any efficiency gains in energy use. Something similar has already happened with LED lights, for which people have found thousands of new uses.
If gains in efficiency won’t buy us lots of time, how about other factors, such as a reduction of the global population? Scientists generally believe that the current human population of more than 8 billion people is well beyond the limits of our finite planet, especially if a large fraction of this population aspires to the resource-intensive lifestyles of wealthy nations. Some estimates suggest that a more sustainable population might be more like 2 billion, which could reduce energy use significantly, potentially by a factor of three or four. However, this isn’t a real solution: again, as with the example of improved energy efficiency, a one-time reduction of our energy consumption by a factor of three will quickly be swallowed up by an inexorable rise in energy use. If Earth’s population were suddenly reduced to 2 billion – about a quarter of the current population – our energy gains would initially be enormous. But those gains would be erased in two doubling times, or roughly 60-100 years, as our energy demands would grow fourfold.
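The arithmetic behind both of these thought experiments – the tenfold efficiency gain and the hypothetical population reduction – is the same: a one-time reduction of energy use by a factor F is erased after log₂(F) doublings. A minimal sketch, assuming the 30-50-year doubling range used throughout:

```python
import math

def years_to_erase(reduction_factor: float, doubling_time: float) -> float:
    """Years of exponential growth needed to cancel a one-time
    reduction of energy use by the given factor."""
    return math.log2(reduction_factor) * doubling_time

cases = [(10, "tenfold efficiency gain"),
         (4, "population cut to a quarter")]
for factor, label in cases:
    low, high = (years_to_erase(factor, t) for t in (30, 50))
    print(f"{label}: erased in ~{low:.0f}-{high:.0f} years")
```

A factor of ten buys roughly 100-166 years and a factor of four roughly 60-100, matching the figures given in the two preceding paragraphs.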
So, why aren’t more people talking about this? The deep warming problem is starting to get more attention. It was recently mentioned on Twitter by the German climate scientist Stefan Rahmstorf, who cautioned that nuclear fusion, despite excitement over recent advances, won’t arrive in time to save us from our waste heat, and might make the problem worse. By providing another cheap source of energy, fusion energy could accelerate both the growth of our energy use and the reckoning of deep warming. A student of Rahmstorf’s, Peter Steiglechner, wrote his master’s thesis on the problem in 2018. Recognition of deep warming and its long-term implications for humanity is spreading. But what can we do about the problem?
Avoiding or delaying deep warming will involve slowing the rise of our waste heat, which means restricting the amount of energy we use and also choosing energy sources that exacerbate the problem as little as possible. Unlike the energy from fossil fuels or nuclear power, which add to our waste energy burden, renewable energy sources intercept energy that is already on its way to Earth, rather than producing additional waste heat. In this sense, the deep warming problem is another reason to pursue renewable energy sources such as solar or wind rather than alternatives such as nuclear fusion, fission or even geothermal power. If we derive energy from any of these sources, we’re unleashing new flows of energy into the Earth system without making a compensating reduction. As a result, all such sources will add to the waste heat problem. However, if renewable sources of energy are deployed correctly, they need not add to our deposition of waste heat in the environment. By using this energy, we produce no more waste heat than would have been created by sunlight in the first place.
Take the example of wind energy. Sunlight first stirs winds into motion by heating parts of the planet unequally, causing vast cells of convection. As wind churns through the atmosphere, blows through trees and over mountains and waves, most of its energy gets turned into heat, ending up in the microscopic motions of molecules. If we harvest some of this wind energy through turbines, it will also be turned into heat in the form of stored energy. But, crucially, no more heat is generated than if there had been no turbines to capture the wind.
The same can hold true for solar energy. In an array of solar cells, if each cell only collects the sunlight falling on it – which would ordinarily have been absorbed by Earth’s surface – then the cells don’t alter how much waste heat gets produced as they generate energy. The light that would have warmed Earth’s surface instead goes into the solar cells, gets used by people for some purpose, and then later ends up as heat. In this way we reduce the amount of heat being absorbed by Earth by precisely the same amount as the energy we are extracting for human use. We are not adding to overall planetary heating. This keeps the waste energy burden unchanged, at least in the relatively near future, even if we go on extracting and using ever larger amounts of energy.
Chaisson summarised the problem quite clearly in 2008:
I’m now of the opinion … that any energy that’s dug up on Earth – including all fossil fuels of course, but also nuclear and ground-sourced geothermal – will inevitably produce waste heat as a byproduct of humankind’s use of energy. The only exception to that is energy arriving from beyond Earth, this is energy here and now and not dug up, namely the many solar energies (plural) caused by the Sun’s rays landing here daily … The need to avoid waste heat is indeed the single, strongest, scientific argument to embrace solar energies of all types.
But not just any method of gathering solar energy will avoid the deep warming problem. Doing so requires careful engineering. For example, covering deserts with solar panels would add to planetary heating because deserts reflect a lot of incident light back out to space, so it is never absorbed by Earth (and therefore doesn’t produce waste heat). Covering deserts in dark panels would absorb a lot more energy than the desert floor and would heat the planet further.
We’ll also face serious problems in the long run if our energy appetite keeps increasing. Futurists dream of technologies deployed in space where huge panels would absorb sunlight that would otherwise have passed by Earth and never entered our atmosphere. Ultimately, they believe, this energy could be beamed down to Earth. Like nuclear energy, such technologies would add an additional energy source to the planet without any compensating removal of heating from the sunlight currently striking our planet’s surface. Any effort to produce more energy than is normally available from sunlight at Earth’s surface will only make our heating problems worse.
Deep warming is simply a consequence of the laws of physics and our inquisitive nature. It seems to be in our nature to constantly learn and develop new things, changing our environment in the process. For thousands of years, we have harvested and exploited ever greater quantities of energy in this pursuit, and we appear poised to continue along this path with the rapidly expanding use of renewable energy sources – and perhaps even more novel sources such as nuclear fusion. But this path cannot proceed indefinitely without consequences.
The logic that more energy equals more warming sets up a profound dilemma for our future. The laws of physics and the habits ingrained in us from our long evolutionary history are steering us toward trouble. We may have a technological fix for greenhouse gas warming – just shift from fossil fuels to cleaner energy sources – but there is no technical trick to get us out of the deep warming problem. That won’t stop some scientists from trying.
Perhaps, believing that humanity is incapable of reducing its energy usage, we’ll adopt a fantastic scheme to cool the planet, such as planetary-scale refrigeration or using artificially engineered tornadoes to transport heat from Earth’s surface to the upper atmosphere where it can be radiated away to space. As far-fetched as such approaches sound, scientists have given some serious thought to these and other equally bizarre ideas, which seem wholly in the realm of science fiction. They’re schemes that will likely make the problem worse not better.
I see several possibilities for how we might ultimately respond. As with greenhouse gas warming, there will probably be an initial period of disbelief, denial and inaction, as we continue with unconstrained technological advance and growing energy use. Our planet will continue warming. Sooner or later, however, such warming will lead to serious disruptions of the Earth environment and its ecosystems. We won’t be able to ignore this for long, and it may provide a natural counterbalance to our energy use, as our technical and social capacity to generate and use ever more energy will be eroded. We may eventually come to some uncomfortable balance in which we just scrabble out a life on a hot, compromised planet because we lack the moral and organisational ability to restrict our energy use enough to maintain a sound environment.
An alternative would require a radical break with our past: using less energy. Finding a way to use less energy would represent a truly fundamental rupture with all of human history, something entirely novel. A rupture of this magnitude won’t come easily. However, if we could learn to view restrictions on our energy use as a non-negotiable element of life on Earth, we may still be able to do many of the things that make us essentially human: learning, discovering, inventing, creating. In this scenario, any helpful new technology that comes into use and begins using lots of energy would require a balancing reduction in energy use elsewhere. In such a way, we might go on with the future being perpetually new, and possibly better.
None of this is easily achieved and will likely mirror our current struggles to come to agreements on greenhouse gas heating. There will be vicious squabbles, arguments and profound polarisation, quite possibly major wars. Humanity will never have faced a challenge of this magnitude, and we won’t face up to it quickly or easily, I expect. But we must. Planetary heating is in our future – the very near future and further out as well. Many people will find this conclusion surprisingly hard to swallow, perhaps because it implies fundamental restrictions on our future here on Earth: we can’t go on forever using more and more energy, and, at the same time, expecting the planet’s climate to remain stable.
The world will likely be transformed by 2050. And, sometime after that, we will need to transform the human story. The narrative arc of humanity must become a tale of continuing innovation and learning, but also one of careful management. It must become a story, in energy terms, of doing less, not more. There is no technology for entirely escaping waste heat, only techniques for limiting how much of it we produce.
This is important to remember as we face up to the extremely urgent challenge of heating linked to fossil-fuel use and greenhouse gases. Global warming is just the beginning of our problems. It's a testing ground to see if we can manage an intelligent and coordinated response. If we can handle this challenge, we might be better prepared, more capable and resilient as a species to tackle an even harder one.
"dump": "CC-MAIN-2023-50",
"url": "https://aeon.co/essays/theres-a-deeper-problem-hiding-beneath-global-warming?utm_source=rss-feed",
"date": "2023-12-11T21:32:36",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679518883.99/warc/CC-MAIN-20231211210408-20231212000408-00393.warc.gz",
"language": "en",
"language_score": 0.9485205411911011,
"token_count": 6226,
"score": 3.734375,
"int_score": 4
} |
Silicon (Si) is a chemical element of the periodic table, located in group 14 and period 3, with atomic number 14. It is a hard, brittle, lustrous, dark-grey metalloid whose name comes from the Latin word "silex" or "silicis", meaning flint or hard stone. It is the second most abundant element in the Earth's crust and a member of the carbon group.
Silicon on periodic table
Silicon is a p-block element, found in the fourteenth column (carbon group) and the third row of the periodic table. It has the atomic number 14 and is denoted by the symbol Si.
Silicon element information
Silicon is found in the third row of the periodic table, directly below carbon.
Origin of name: Latin word "silex" or "silicis" (meaning flint or hard stone)
Atomic number (Z): 14
Atomic mass: 28.0855 u
Group: 14 (carbon group)
Atomic radius: 111 pm
Covalent radius: 111 pm
Van der Waals radius: 210 pm
Melting point: 1414 ℃ (2577 ℉, 1687 K)
Boiling point: 3265 ℃ (5909 ℉, 3538 K)
Electron configuration: [Ne] 3s2 3p2
Electrons per shell: 2, 8, 4
Crystal structure: face-centered diamond-cubic
Phase at room temperature: solid
Density near room temperature: 2.3290 g/cm³
Main isotopes: silicon-28, silicon-29, silicon-30
Oxidation states: -4, +4
Electronegativity (Pauling scale): 1.90
Discovered by: Jöns Jacob Berzelius in 1824
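As a quick illustration of the "electrons per shell" entry, the simple Bohr-model filling rule (at most 2n² electrons in the n-th shell) reproduces silicon's 2, 8, 4 arrangement. Note this naive rule is only adequate for light elements up to argon (Z = 18), so the helper below is a teaching sketch, not a general-purpose tool.

```python
def shells(z: int) -> list[int]:
    """Bohr-model shell filling (2n^2 capacity per shell);
    adequate for light elements such as silicon, not heavy ones."""
    result, n = [], 1
    while z > 0:
        cap = 2 * n * n             # capacity of the n-th shell
        result.append(min(z, cap))  # fill as much as remains
        z -= min(z, cap)
        n += 1
    return result

print(shells(14))   # [2, 8, 4] — matches the table above
```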
History of silicon
The discovery and naming of silicon are credited to several chemists, including Antoine Lavoisier, Sir Humphry Davy, and Jöns Jacob Berzelius. Lavoisier first suspected that silica might be an oxide of a fundamental chemical element in 1787, but it was not until 1817 that Thomas Thomson gave it the name “silicon,” adding the suffix “-on” to indicate that it was a nonmetal. Berzelius is typically credited with the discovery of the element because he purified and characterized it as a new element in 1824.
Silicon in its crystalline form was not isolated until 1854, when Henri Etienne Sainte-Claire Deville electrolyzed a mixture of sodium chloride and aluminum chloride containing approximately 10% silicon, producing a slightly impure allotrope of silicon. More cost-effective methods have since been developed to isolate several allotropic forms, including the most recent discovery of silicene in 2010.
In the early 20th century, the chemistry and industrial use of siloxanes and silicone polymers, elastomers, and resins were developed, leading to the widespread use of silicon in a range of products, from sealants and adhesives to medical implants and electronics. The solid-state physics of doped semiconductors and the crystal chemistry of silicides were also mapped in the late 20th century, paving the way for the development of modern electronic devices such as computers and smartphones. Today, silicon remains one of the most important elements in the world, with a wide range of applications across multiple industries.
Silicon is the second most abundant element in the Earth’s crust, after oxygen, and is found mainly in the form of silicon dioxide (SiO2), which is commonly known as silica. Silicon also occurs in various other minerals, such as feldspars, micas, and clays. It is also found in some types of rocks, such as granite, gneiss, and sandstone.
Silicon can also be found in living organisms, especially in plants and animals. It is an essential element for many organisms, including humans. Silicon plays a role in the formation of bones, teeth, and connective tissues, and is also important for the growth and development of plants.
Silicon is typically produced by reducing silica (SiO2) with carbon in a furnace. The process involves heating a mixture of silica and carbon-based reducing agents, such as coal, coke, or wood chips, in an electric arc furnace at temperatures of around 2000 ℃. The silicon produced is then purified through several refining steps, including acid leaching, filtration, and distillation.
Another common method of producing silicon is through the thermal decomposition of silane (SiH4) gas. Silane is first produced by reacting metallurgical grade silicon with hydrogen at high temperatures. The resulting silane gas is then decomposed at high temperatures to produce pure silicon.
There are also several other methods of producing silicon, including the use of the chemical vapor deposition (CVD) process and the fluidized bed reactor process. The CVD process involves the reaction of a silicon-containing gas, such as silane or silicon tetrachloride, with a substrate surface at high temperatures, while the fluidized bed reactor process involves the reduction of silicon dioxide with a reducing gas, such as hydrogen, in a fluidized bed reactor.
Properties of silicon
Silicon is a hard, brittle crystalline solid with a blue-gray metallic luster.
It has a high melting and boiling point.
It is a poor conductor of electricity and heat.
Silicon is a semiconductor with an electrical conductivity between that of a conductor and an insulator.
It reacts with halogens (fluorine, chlorine, bromine, iodine) to form silicon tetrahalides.
It does not react with water, but reacts with steam to form silicon dioxide and hydrogen gas.
It is not affected by acids except for hydrofluoric acid.
Silicon has a diamond cubic crystal structure.
Each silicon atom is tetrahedrally coordinated with four neighboring silicon atoms.
Silicon has three stable isotopes: 28Si, 29Si, and 30Si.
28Si is the most common isotope of silicon, making up about 92.2% of the natural abundance.
Silicon has several allotropes, including amorphous, crystalline, and black (or metallic) silicon.
Silicon is transparent to infrared radiation.
It has a high refractive index and can be used in lenses and prisms.
Silicon is a hard and brittle material.
It has a high Young’s modulus and is used in the construction of microelectromechanical systems (MEMS).
Uses of silicon
Silicon is the most widely used material in the production of semiconductors. Its unique electronic properties, including its ability to conduct electricity under certain conditions, make it a crucial component in the manufacture of integrated circuits, transistors, and other electronic devices.
Silicon is also widely used in the production of solar cells. It is the primary material used in the manufacture of photovoltaic cells, which convert sunlight into electricity.
Glass and ceramics
Silicon dioxide (SiO2), also known as silica, is a key component in the production of glass and ceramics. Silicon is also used to make synthetic quartz crystals, which are used in watches and other electronic devices.
Silicon-based materials are used in a wide range of construction applications. For example, silicon is used to make high-strength, lightweight concrete, and is also used in the production of sealants and adhesives.
Silicon is biocompatible, meaning it does not cause an adverse reaction when implanted in the human body. This property makes it an ideal material for medical implants such as pacemakers, artificial joints, and other medical devices.
Silicon-based lubricants are used in a variety of applications, including automotive, aerospace, and industrial machinery. These lubricants provide superior performance under extreme conditions and are often used in high-temperature or high-pressure environments.
Silicones, which are derived from silicon, are used in a wide range of cosmetic products. They are often used as emollients, which help to moisturize and soften the skin, and as thickeners, which give cosmetic products their smooth, creamy texture.
Interesting facts about silicon
Silicon is the second most abundant element in the Earth’s crust, making up about 28% of its mass.
Silicon has a high melting point of 1414 ℃ and a boiling point of 3265 ℃.
Silicon is a semiconductor, which means it can conduct electricity under certain conditions and is used extensively in electronic devices.
Silicon has a diamond-like crystal structure and is brittle in nature.
Silicon is used in the production of glass, ceramics, and cement.
Silicon has a unique ability to form silicon-oxygen bonds, which makes it a key element in the formation of silicates, the most common minerals on Earth.
Silicon has isotopes that are used in various medical applications, including diagnosing and treating cancer.
Silicon is also used in the production of solar cells, as it can convert light into electricity.
The element silicon was first isolated in 1824 by Jöns Jacob Berzelius, a Swedish chemist.
The name “silicon” comes from the Latin silex (genitive silicis), which means “flint” or “hard stone.”
"dump": "CC-MAIN-2023-23",
"url": "https://learnool.com/silicon/",
"date": "2023-06-05T03:55:22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650620.66/warc/CC-MAIN-20230605021141-20230605051141-00580.warc.gz",
"language": "en",
"language_score": 0.8453370928764343,
"token_count": 2107,
"score": 3.9375,
"int_score": 4
} |
The Body and Reverence (Lvl 3 Lesson Book 2)
This book teaches that reverence is the proper response to God’s imprint in a person, object, moment, or experience.
- Lesson Book
- 48 Pages
- 4 Lessons
Creation awakens wonder and awe in humans. Wonder draws us in, while awe holds us back. Together, wonder and awe form reverence. If we respond to our bodies and the bodies of others with reverence, we enter more deeply into friendships and love. Reverence and love are the antidotes to distraction or a misdirected desire to take control. Reverence is especially needed at Mass where we meet Jesus bodily in the Eucharist.
The Theology of the Body is a positive, beautiful approach to the truth of the body and its meaning. In a culture where the body is considered “an enemy to freedom,” Saint John Paul’s Theology of the Body is necessary more than ever. The Body Matters lesson books series teaches the truths about the body in an accessible, age-appropriate way. It is a great resource for parents, Faith Formation programs and Catholic schools to help children see the sacredness of the body. Here you can find Saint John Paul’s teachings on the body rendered in language accessible to children, with beautiful illustrations, practical examples and solid Catholic theology.
Special pricing for book sets and educator guides available for schools and faith formation programs. Please call us at 972-395-5593, or toll free at 888-855-4791.
"dump": "CC-MAIN-2023-23",
"url": "https://tobet.org/product/the-body-and-reverence/",
"date": "2023-05-28T08:58:08",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643663.27/warc/CC-MAIN-20230528083025-20230528113025-00710.warc.gz",
"language": "en",
"language_score": 0.9035493731498718,
"token_count": 327,
"score": 3.046875,
"int_score": 3
} |
Wisdom teeth are your third molars.
Early removal of wisdom teeth, around the age of 16 or 17, can help you to avoid future problems. According to the American Association of Oral and Maxillofacial Surgeons, approximately 50 million Americans will need to have their wisdom teeth removed before the age of 25.
The average mouth has thirty-two teeth, sixteen on top and sixteen on the bottom. Each tooth has a special name and use:
- The four 1st molars come in around age six and are called "six year molars".
- The four 2nd molars come in around age 12 and are called the "12 year molars".
- The four 3rd molars come in around the age of 17 (age range of 15-25), when most individuals become adults; thus they are called "wiser" or "wisdom" teeth.
The American Association of Oral and Maxillofacial Surgeons recommends evaluation of wisdom teeth by age 25. If we have enough room in the jaw, the eruption of these teeth will be just a normal process of growing up. However, a lot of people only have enough room for twenty-eight teeth, so there is not enough room for these teeth to erupt properly and they become impacted.
Wisdom teeth are considered "impacted", or unable to erupt, when they have no place to go or grow. They may grow in sideways (which can destroy your second molar), only partially come through the gum, causing a bacteria trap resulting in recurrent infections, or remain trapped beneath the gum and bone, forming fluid-filled sacs (cysts) or tumors that destroy the jaw or teeth surrounding this area.
Wisdom teeth can be considered not functional or useful if they:
- have gum disease (current data indicates that 25% of patients with third molars have considerable periodontal disease around this area**);
- move other teeth out of alignment with your biting.
Early removal of wisdom teeth, around the age of 16 or 17, can help you to avoid future problems. At a younger age:
- tooth roots are not fully developed;
- the surrounding bone is softer;
- there is less chance of damaging nearby nerves or other structures;
- there is less surgical risk;
- healing is generally faster.
Wisdom teeth should be removed to:
- Reduce the chance of unexplained pain.
- Accommodate a prosthetic appliance.
- Avoid cavities in wisdom teeth and the adjacent teeth.
- Avoid periodontal disease.
- Avoid biting interference.
- Avoid disruption of natural alignment causing teeth to shift.
- Avoid bone shrinkage.
- Avoid cyst formation (a sac filled with infected fluid around the crown of the tooth, like a water balloon).
They should also be removed:
- When they cannot erupt into an acceptable position.
- When the roots may not be fully developed, to decrease the surgical risk involved with the procedure.
- A 20-year study revealed that out of 865 patients with broken jaws, 65% fractured their lower jaw in the area of their un-erupted wisdom teeth.
- 40% of adults that never had their wisdom teeth removed as a teen develop infection, decay or gum disease by age 45.
- Removal helps you avoid cheek biting.
- 25% of adults over age 40 with wisdom teeth need to have them extracted, and the risk of surgical complications in these individuals increases by 30% from what it is in adolescents***.
We recommend our patients see Dr. Black to have their wisdom teeth removed. He is well qualified and highly experienced to determine the age, need and procedures needed to remove them. This is usually an outpatient procedure done right in his office. The type and length of surgery will depend upon how developed your wisdom teeth are. If they are removed, you will be able to keep the rest of your mouth and your other teeth healthy. If wisdom teeth have erupted, the key to preserving them is maintaining good oral health.
Tooth Extractions Affect Eating Disorders
Data reveal that dental procedures, specifically third molar surgery, can significantly alter the course of eating disorders, causing exacerbation or relapse. No patient indicated that dental therapy was the primary cause of these multifactorial psychonutritional disorders. A history of eating disorder should alert the practitioner to the risks of performing third molar surgery without a medical or psychotherapy consultation unless there is documentation of remission. Delay of surgical intervention is recommended if third molars are asymptomatic. If surgery is necessary, the surgeon and other members of the psychotherapy team should establish clear guidelines regarding behavior and postoperative nutrition and should monitor the patient's nutritional status.
© 2001 American Association of Oral and Maxillofacial Surgeons
*** Source: Pennsylvania Institute of Oral Surgery, American Association of Oral and Maxillofacial Surgeons, Feb. 2002
** Source: H. Blakey et al., J Oral Maxillofac Surg 60:1227-1233, 2002
"dump": "CC-MAIN-2016-18",
"url": "http://www.dentalgentlecare.com/wisdom_teeth.htm",
"date": "2016-05-02T19:20:37",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117405.91/warc/CC-MAIN-20160428161517-00189-ip-10-239-7-51.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8930810689926147,
"token_count": 1068,
"score": 3.40625,
"int_score": 3
} |
By Helge M. Markusson // The Fram Centre
Satellite data and models show global warming could be 25% higher were it not for the carbon trapping and cooling effect of a greening Earth during the past 40 years.
A new study reports continued climate-altering carbon emissions and intensive land use have inadvertently greened half of the Earth’s vegetated lands. Green leaves convert sunlight to sugars, thus providing food, fiber, and fuel, while replacing carbon dioxide (CO2) in the air with water. The removal of heat-trapping CO2 and the wetting of air cool the Earth’s surface. Global greening since the early 1980s may have thus reduced global warming, possibly by as much as 0.25 °C, reports the study “Characteristics, drivers and feedbacks of global greening”, published in the inaugural issue of the journal Nature Reviews Earth and Environment. Two of the authors, Dr. Jarle W. Bjerke and Dr. Hans Tømmervik, work at the Norwegian Institute for Nature Research at the Fram Centre in Tromsø, Norway.
Highly credible evidence
This comprehensive study is based on a review of over 250 published articles and new results from multiple satellites, model studies and field observations to detail the geography, causes, and consequences of global greening. “This phenomenal greening, together with global warming, sea-level rise and sea-ice decline, represents highly credible evidence of anthropogenic climate change,” said lead authors Shilong Piao and Xuhui Wang of the Sino-French Institute for Earth System Science in the College of Urban and Environmental Sciences at Peking University, PRC.
Greening of the Arctic
Near-daily observations since the early 1980s from NASA and NOAA satellites reveal vast expanses of the Earth’s vegetated lands from the Arctic to the temperate latitudes exhibiting vigorous greening tendencies, as previously reported by Prof. Ranga Myneni and his Ph.D. students, Taejin Park and Chi Chen, of Boston University, USA. Notably, the NASA MODIS sensors observed pronounced greening during the 21st century in the most populous and developing countries, China and India. Even regions far, far removed from human reach have not escaped global warming and greening. “Svalbard in the high arctic, for example, has seen a 30% increase in greenness concurrent with about 4 degrees increase in mean summer temperature between 1986 and 2015,” said co-author Dr. Rama Nemani of NASA’s Ames Research Center, USA.
The reasons for global greening vary – intensive use of land for farming, large-scale planting of trees, a warmer and wetter northerly clime, re-wilding of abandoned lands, recovery from past disturbances – but greening is chiefly due to CO2 fertilization.
“It is ironic that the very same carbon emissions responsible for harmful changes to climate are also fertilizing plant growth, which in turn is somewhat moderating global warming” said Dr. Jarle W. Bjerke of the Norwegian Institute for Nature Research.
We have to stop deforestation
Carbon emissions from fossil fuel use and tropical deforestation added the equivalent of 160 ppm of CO2 to the atmosphere during the past 40 years, about 40 ppm of which was passively absorbed by the oceans and another 50 ppm, actively, by plants. The 70 ppm remaining in the atmosphere, together with other greenhouse gases, is responsible for the observed 1 °C warming since the early 1980s.
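The carbon budget quoted here balances exactly, which is easiest to see written as a single sum:

$$\underbrace{160}_{\text{emitted}} = \underbrace{40}_{\text{absorbed by oceans}} + \underbrace{50}_{\text{absorbed by plants}} + \underbrace{70}_{\text{remaining in air}} \quad \text{(ppm CO}_2\text{)}$$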
“Plants are actively defending against the dangers of carbon pollution by not only sequestering carbon on land but also by wetting the atmosphere through transpiration of groundwater and evaporation of precipitation intercepted by their bodies,” said co-author Dr. Philippe Ciais, associate director of the Laboratory of Climate and Environmental Sciences, Gif-sur-Yvette, France.
He added, “stopping deforestation and sustainable, ecologically sensible afforestation could be one of the simplest and cost-effective, though not sufficient, defenses against climate change.”
Dr. Hans Tømmervik of the Norwegian Institute for Nature Research offered a cautionary note for high northern regions: “in the cool regions of the World, afforestation programs may, however, instigate local warming by reducing the reflectivity of solar radiation back to the atmosphere, and also contribute to increased release of carbon stored in soils; therefore low-stature boreal-arctic vegetation should be kept intact and soil layers should not be disturbed”.
Note to the readers: This article is published after the printed version of Fram Forum was produced. | <urn:uuid:4a469dae-b1b7-478c-ab9f-6ae26c69c606> | {
"dump": "CC-MAIN-2020-40",
"url": "https://framsenteret.no/forum/2020/carbon-emissions-have-made-the-world-a-greener-place-which-has-a-cooling-effect-but-its-not-enough/",
"date": "2020-09-27T14:51:13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400279782.77/warc/CC-MAIN-20200927121105-20200927151105-00300.warc.gz",
"language": "en",
"language_score": 0.9355936646461487,
"token_count": 970,
"score": 3.75,
"int_score": 4
} |
The seventh book in the Girls Who Could series, The Girl Who Could Rock the Moon inspires girls and boys of all ages to create an exciting new world through the exploration of STEM: science, technology, engineering and math. Mary G. Ross was a gifted mathematician and became the first female Native American engineer in the United States at a time when women in STEM were rare. She was brave, she was bold and she helped take us to the moon. Her equations solved in-flight problems for rockets and jets, and she wrote a traveler’s manual to the planets. Much of her work remains classified to this day, but her inspiring story is yours to enjoy. Does your little one like technology? Do they like to ask questions? Encourage them to see math as a game they’ll want to play with this bright, entertaining read.
The Girls Who Could is a fun, colorful series of stories about real women who have made a difference in the world through inspired action. By giving young girls and boys examples of women who are doing amazing things, children grow up with a template of achievement upon which to grow and expand their own dreams and goals. Simple drawings of children their own age and fun, rhyming prose helps kids connect easily with the message in each story. | <urn:uuid:663c2c53-ec1d-4907-b495-4457d5d16b2d> | {
"dump": "CC-MAIN-2020-10",
"url": "http://earthlodgebooks.com/the-girl-who-could-rock-the-moon-mary-g-ross-and-the-magic-of-stem/",
"date": "2020-02-17T03:20:53",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141653.66/warc/CC-MAIN-20200217030027-20200217060027-00490.warc.gz",
"language": "en",
"language_score": 0.9765374064445496,
"token_count": 258,
"score": 3.609375,
"int_score": 4
} |
Money Activities Trading and Bartering for equal value!
Merchandise has equal monetary value.
Students will have a trading experience with The Sequoia Family.
The Indian boy or girl will only trade their item for things of equal value!
This product is about a monetary value trading experience for things that the Sequoia Family and the Puritan Family need or use.
Each worksheet has one Indian item available for trade for one, two or three items available from the Puritan Family in each trading experience on 28 different no prep worksheets.
Students may need to add two or three items' prices together to find the value of the trade item.
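For instance, with made-up prices chosen purely for illustration (the actual worksheet prices are not given here), a student might match one $100 trade item by adding three prices:

$$\$25 + \$35 + \$40 = \$100$$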
Students will circle, dab, or color the items to show the ones used to trade with the Indian boy or girl.
All worksheet totals on trades are valued at one hundred dollars or less.
These worksheets give students differentiated practice, adding up to three items' prices to get the total value of the trade item.
"dump": "CC-MAIN-2016-50",
"url": "https://www.teacherspayteachers.com/Product/Money-Activities-Barter-and-Trade-Equal-Value-No-Prep-Worksheets-2893101",
"date": "2016-12-04T10:45:06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541317.69/warc/CC-MAIN-20161202170901-00345-ip-10-31-129-80.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8739787936210632,
"token_count": 202,
"score": 3.359375,
"int_score": 3
} |
Mapping Project Uses Data to Guide WV Conservation Efforts
Monday, December 20, 2021
Conservation experts in the state are expanding a forest mapping tool to assess the impact of development on thousands of acres of public lands.
Rick Webb, a board member with the Allegheny Blue Ridge Alliance and the West Virginia Highlands Conservancy, explained that there are dozens of projects underway or under consideration within the three National Forests in the Central Appalachian Highlands region.
He said most projects involve clear-cutting and road-building, which increase the vulnerability of the forests' ecosystem and watershed.
"We're concerned that any new logging done now, a hundred years after the big cut, be done in a way to preserve these forests," said Webb, "to retain their function, to supply clean, cool water."
The Highlands contain the headwaters of major river systems in the eastern U.S., including the Potomac, James and Cheat rivers.
Webb added that the steep mountain slopes and soil types make the area among the most landslide-prone in the country, which affects water quality and is worthy of special conservation attention.
Dan Schaffer is a CSI Geospatial Consultant with the Allegheny Blue Ridge Alliance who helped create the mapping tool, based on Geographic Information System technology.
He said he believes the public should be involved in the review process for development projects that could potentially affect the region's diversity of plants and animals.
"It's murky, it's often driven as much by interest as science," said Schaffer. "And for the average person, they're really taken out of that process. We're trying to give them a seat at the table again."
He added the tool offers accessible information on topography and geography, water quality and soil erodibility, along with locations and boundaries of proposed projects for the Monongahela National Forest in West Virginia, and the George Washington and Jefferson National Forests in Virginia and West Virginia.
"dump": "CC-MAIN-2022-33",
"url": "https://www.publicnewsservice.org/2021-12-20/environment/mapping-project-uses-data-to-guide-wv-conservation-efforts/a76988-1",
"date": "2022-08-15T00:39:41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00331.warc.gz",
"language": "en",
"language_score": 0.9417731761932373,
"token_count": 649,
"score": 3.09375,
"int_score": 3
} |
When electromagnetic waves are bounced back and forth between two reflectors, a standing wave is formed. From the wavelength of the standing wave, the frequency of the waves can be determined.
- Convenient all-in-one set includes control unit, transmitter and receiver as horn antennae, microwave probe, microwave benches, grating, slit plates, prism, and reflection/absorption plates
- With the same set, all aspects of microwave physics can be studied quantitatively: polarization, reflection, transmission, refraction, propagation, diffraction, interference, inverse square law, standing waves, conservation of energy in reflection and transmission
- Very detailed experiment guides for all experiments
The wavelength of a standing wave is measured and the frequency determined. By extrapolation, the oscillating state at the reflector is determined.
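The relations behind this measurement are the standard standing-wave formulas: adjacent nodes sit half a wavelength apart, so a measured node spacing d gives

$$\lambda = 2d, \qquad f = \frac{c}{\lambda} = \frac{c}{2d}$$

As a purely illustrative example (not a specification of this particular apparatus), a node spacing of d = 16 mm would give λ = 32 mm and f ≈ (3.0 × 10^8 m/s)/(0.032 m) ≈ 9.4 GHz, a typical frequency for teaching microwave sets.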
What you can learn about
- Electromagnetic waves
- Standing waves
- Distance law | <urn:uuid:562c6b78-a746-4793-92d2-5b1ad972c725> | {
"dump": "CC-MAIN-2019-30",
"url": "https://www.phywe.com/en/standing-waves-in-the-range-of-microwaves.html",
"date": "2019-07-17T01:13:35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00527.warc.gz",
"language": "en",
"language_score": 0.8725234270095825,
"token_count": 186,
"score": 4.15625,
"int_score": 4
} |
North Korean Overview
For decades North Korea has been one of the world's most secretive societies, and one of the few remaining countries still under communist rule.
North Korea emerged in 1948 amid the chaos following the end of World War II, its history dominated by its "Great Leader", Kim Il-sung. After the Korean War, Kim Il-sung introduced the personal philosophy of Juche, or self-reliance, which became a guiding light for North Korea's development.
Kim Il-sung's son, Kim Jong-il, is now head of state. Decades of this rigid state-controlled system have led to stagnation and a leadership dependent on the cult of personality.
Famine in North Korea is estimated to have killed some 2 to 3 million of the nation's 24 million people since 1995, because of acute food shortages caused by natural disasters and economic mismanagement. Another 300,000 North Koreans have fled to China to live illegally, risking their lives to flee the mass starvation and brutal oppression of Kim Jong Il's Stalinist regime.
The totalitarian state also stands accused of systematic human rights abuses. Reports of torture, public executions, slave labor, and forced abortions and infanticides in prison camps have emerged. Human rights groups estimate that there are up to 200,000 political prisoners in North Korea.
Diplomatic efforts have so far failed to rein in North Korea's nuclear ambitions and US President George W Bush has named it as part of an "axis of evil".
"dump": "CC-MAIN-2018-39",
"url": "http://familycare-foundation.org/north-korean-overview.html",
"date": "2018-09-20T17:18:02",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156524.34/warc/CC-MAIN-20180920155933-20180920180333-00350.warc.gz",
"language": "en",
"language_score": 0.9444971680641174,
"token_count": 314,
"score": 3.5625,
"int_score": 4
} |
Converting a Surface into a NURBS Surface
You can convert a surface, solid, or mesh into a NURBS surface.
When you first create a surface, you have a choice of creating the surface as a procedural surface or a NURBS surface. You can control this by using the Surface Modeling Mode button in the Create panel on the Surface ribbon. When this button is not selected, surfaces are created as procedural surfaces. When you create a procedural surface, if you also select the Surface Associativity button, if you subsequently modify the underlying curves, the surface updates to reflect those changes.
If the Surface Modeling Mode button is selected, however, as indicated by the blue background, when you create a surface, the resulting surface is created as a NURBS surface. When you create a NURBS surface, the resulting surface does not maintain associativity between the surface and curves from which it was created, so you cannot modify the surface by manipulating the underlying curves. But you can modify the surface by manipulating its control vertices.
You can also convert existing procedural surfaces as well as solids and meshes into NURBS surfaces. Once converted into a NURBS surface, you can modify the surface by manipulating its control vertices.
For example, this surface was originally created as a procedural surface. To convert it into a NURBS surface, on the Surface ribbon, in the Control Vertices panel, click the Convert to NURBS tool.
The program prompts you to select objects to convert. You can use any convenient object selection method. For example, click to select the surface. When you finish selecting the surfaces you want to convert, either press ENTER or right-click. The surfaces are immediately converted into NURBS surfaces. When you move the cursor over the surface, you can see that the surface is now a NURBS surface.
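For keyboard-driven workflows, the ribbon tool has a command-line counterpart, CONVTONURBS (the exact prompt wording below is approximate and may vary between AutoCAD releases):

    Command: CONVTONURBS
    Select objects: (click the surface or solid to convert)
    Select objects: (press ENTER to finish)

Once converted, a command such as CVSHOW displays the control vertices of the new NURBS surface so you can begin editing them.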
To convert a mesh into a NURBS surface, first convert the mesh into a solid or surface and then convert the surface into a NURBS surface. | <urn:uuid:6bddfd81-0829-4c1e-b1b2-4fbaae67a621> | {
"dump": "CC-MAIN-2022-21",
"url": "https://www.tutocad.com/autocad/videos/converting-a-surface-into-a-nurbs-surface/",
"date": "2022-05-21T10:41:28",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00232.warc.gz",
"language": "en",
"language_score": 0.9112111926078796,
"token_count": 416,
"score": 3.515625,
"int_score": 4
} |
Patients who are diagnosed with cancer often receive a tremendous amount of information and it’s common for them to have questions, says gynecologic oncologist Dr. Kathleen Yang with Willamette Valley Cancer Institute.
“Patients want to know, ‘Is this cancer serious? Am I OK? Am I going to survive this?’ When I begin explaining a patient’s condition to them, I usually start with two things. First, I tell them what type of cancer they have. And second, what organ the cancer arises from.”
To answer a patient’s questions about the severity of their cancer, Dr. Yang and other involved physicians need to know more about an individual’s specific disease, including the stage and grade.
Cancer staging is a process by which oncology specialists figure out how advanced the cancer is, says pathologist Dr. Denis McCarthy with Pathology Consultants PC. “Is it localized? Has it been caught very early or has it spread widely?”
Doctors need to know the amount of cancer and where it’s located to choose the best treatment options, which may include surgery, chemotherapy or radiation, or a combination of treatments. Doctors also consider the cancer’s stage to predict the course it will likely take.
“We’ll look at the size of the tumor and what structures it invades through. For example, you can have a tumor located just at the surface of the colon, in the mucosa, that’s considered early cancer. But if it is invading into the muscular wall of the colon or invading through the wall and into the fat surrounding the colon, that’s more serious. That’s all part of the tumor stage,” says Dr. McCarthy.
In general, cancer has four stages. For most cancers, the stage is a Roman numeral from I to IV. Stage I cancers are the least advanced and often result in a good prognosis. Stage IV is the highest and means the cancer is more advanced. Sometimes stages are subdivided, using letters such as A and B.
The term “grade” refers to the nature of the cancer. It describes a tumor based on how abnormal the cancer cells and tissue look under a microscope and how quickly the cancer cells are likely to grow and spread.
“The grade of a tumor is basically, how ugly does this tumor look? The closer it looks to normal tissues, the lower the grade,” says Dr. McCarthy. “Uglier tumors tend to be more aggressive, although that is not always the case.”
Working together on a patient’s behalf
Cancer patients will likely see multiple specialists from diagnosis through treatment. To help streamline that process, physicians from about a dozen specialty clinics in the Eugene-Springfield community work together through the Oregon Cancer Alliance, coordinating care and discussing specific cases as a group in what’s known as tumor boards.
“Cancer is a really complex and difficult problem to solve in many cases,” says Dr. Kristian Ferry, a surgeon with Avanté Surgical. “And success really comes from a multipronged approach of treatment of the cancer.”
While in-person tumor board gatherings have transitioned to virtual discussions, due to the current health crisis, the collaboration remains invaluable.
Oregon Medical Group radiologist Dr. Michael Milstein says, “There’s so much involved in diagnosis and treatment — one person can’t do it all. So, it’s really nice to have the expertise of other people to help you provide the best care for the patient.”
“At the very least, everyone takes a look at a patient’s information again. Pathologists look at the case again under the microscope, radiologists look at radiology again. It’s a second look and second looks are always good,” Dr. McCarthy says.
The type of treatment you choose to receive helps determine your prognosis, the likely outcome or course of your cancer and your chances for recovery or recurrence. It can be hard to understand what prognosis means and also hard to talk about, even for doctors.
“Prognosis is an educated guess based on statistical information,” says Dr. Yang. “It does not predict what will happen, so I always try to explain that upfront to the patient.”
You can ask your doctor about survival statistics or you may find statistics confusing and frightening, or too impersonal to be of value to you. It is up to you to decide how much information you want. If you decide you want to know more, the oncologist who knows the most about your situation is in the best position to discuss your prognosis and explain what the statistics may mean.
The Oregon Cancer Alliance encourages patients to learn as much as they can, or choose to, about their condition and to ask questions, so that they feel empowered to make the treatment decisions that are right for them. | <urn:uuid:fca1bade-a2cc-41dc-8237-7f6b79684d75> | {
"dump": "CC-MAIN-2023-50",
"url": "https://www.oregoncanceralliance.com/understanding-your-cancer-diagnosis/",
"date": "2023-11-30T11:47:12",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00748.warc.gz",
"language": "en",
"language_score": 0.9560744762420654,
"token_count": 1058,
"score": 3.015625,
"int_score": 3
} |
Explaining difficult secondary school physics topics in a simple way.
Gives you an understanding of Newton's law of gravity, the difference between mass and weight, and the different formulas connecting them
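The formulas in question are presumably the standard ones, Newton's law of universal gravitation together with the weight relation:

$$F = \frac{G\,m_1 m_2}{r^2}, \qquad W = mg$$

where G ≈ 6.67 × 10^-11 N m^2/kg^2 and g ≈ 9.8 m/s^2 near the Earth's surface; mass is the same everywhere, while weight changes with g.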
Understanding of Heat, Specific Heat Capacity etc.
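At this level, the central specific-heat relation is presumably

$$Q = mc\,\Delta T$$

where Q is the heat transferred, m the mass, c the specific heat capacity and ΔT the temperature change.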
Force, Work, Energy and Power
Differentiate between work, energy and power and calculate them
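The usual defining relations for these quantities are:

$$W = Fd, \qquad E_k = \tfrac{1}{2}mv^2, \qquad P = \frac{W}{t}$$

(work done by a constant force F over a distance d, kinetic energy of a mass m moving at speed v, and power as work done per unit time).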
Gas Laws (Pressure, Temperature and Volume)
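These are presumably the classical gas laws, which combine into

$$\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}, \qquad PV = nRT$$

with temperature in kelvin; Boyle's law, Charles's law and the pressure law each follow by holding one of the three variables constant.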
In the coming weeks, we will be discussing / teaching more of the necessary topics...
Structure of Matter and Kinetic Theory
The different states of matter and how changing the temperature, pressure or volume drives the changes between them.
An understanding of waves and their behavior in different media
A set of mock tests to check your knowledge on specified topics
Learn anything, anywhere. | <urn:uuid:dcbad19f-5754-4232-8733-402ea7d9d303> | {
"dump": "CC-MAIN-2019-39",
"url": "https://crowdclassroom.com/course-details/physics-the-simple-way",
"date": "2019-09-17T19:24:26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573105.1/warc/CC-MAIN-20190917181046-20190917203046-00543.warc.gz",
"language": "en",
"language_score": 0.8484433889389038,
"token_count": 152,
"score": 3.03125,
"int_score": 3
} |
Prekindergarten is a developmentally appropriate early childhood education program for three- and four-year-old children who may benefit from additional supports, such as speech, language and social development programming.
The program focuses on the development of the whole child: physical, social, emotional, spiritual and intellectual. In prekindergarten, children experience active, experiential learning through play and a comprehensive, integrated program within a prepared environment. Students, family and teachers work together to foster the child's development.
Prekindergarten helps children to:
- Develop language and communication skills
- Develop problem-solving skills
- Learn to co-operate with others
- Make new friendships
- Prepare for the school environment.
Schools Offering Prekindergarten
Note: Space is limited to 16 students. Priority is given to students who reside in the school catchment area (neighbourhood), who are of the appropriate age and who would most benefit from enhanced programming.
Note: Busing is provided only one way for prekindergarten students. If students attend morning class, they will be bused to school. If they attend afternoon class, they will be bused home. | <urn:uuid:e7640191-5c93-489e-a4c3-f2950ee679fa> | {
"dump": "CC-MAIN-2023-40",
"url": "https://www.spsd.sk.ca/Schools/earlylearningprograms/prekindergarten/Pages/default.aspx",
"date": "2023-09-25T02:45:48",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00735.warc.gz",
"language": "en",
"language_score": 0.9272480010986328,
"token_count": 271,
"score": 3.328125,
"int_score": 3
} |
Good learners continually add new skills and strategies to their "toolbox" of existing skills and strategies. At Engaging Minds, we believe it's important that students leave each session with a new or enhanced study skill or strategy to add to their ever-expanding toolbox and apply to their school work in the week ahead. This can range from a note-taking technique for an upcoming science exam, to a new organizational or time-management skill to manage school assignments, to ideas on how to start a research report, to a framework for better reading comprehension. These successes will build one upon another and over time will create a strong foundation of skills and strategies from which the student can draw whenever necessary. These successes also work to engage students in the learning process and help develop intrinsic motivation. | <urn:uuid:1a98f476-8ea3-4980-9e20-d0ff4984de66> | {
"dump": "CC-MAIN-2017-34",
"url": "http://www.engagingmindsonline.com/how-we-do-it/measuring-success",
"date": "2017-08-16T17:14:45",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102309.55/warc/CC-MAIN-20170816170516-20170816190516-00091.warc.gz",
"language": "en",
"language_score": 0.9359131455421448,
"token_count": 158,
"score": 3.453125,
"int_score": 3
} |
Date: September 22, 2006
Creator: Sullivan, Mark P.
Description: The Central American nation of Panama has made notable political and economic progress since the 1989 U.S. military intervention that ousted General Manuel Noriega from power. Under the current administration of President Martin Torrijos, the most significant challenges have included dealing with the funding deficits of the country's social security fund; developing plans for the expansion of the Panama Canal; and combating unemployment and poverty. The United States has close relations with Panama. The current bilateral relationship is characterized by extensive cooperation on counternarcotics efforts, assistance to help Panama assure the security of the Canal and its border with Colombia, and negotiations for a bilateral free trade agreement.
Contributing Partner: UNT Libraries Government Documents Department | <urn:uuid:d44bf3b9-4245-4d8a-8769-6a5998b174e8> | {
"dump": "CC-MAIN-2014-15",
"url": "http://digital.library.unt.edu/explore/partners/UNTGD/browse/?fq=untl_decade%3A2000-2009&fq=str_year%3A2006&fq=str_location_country%3APanama",
"date": "2014-04-20T21:56:11",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00127-ip-10-147-4-33.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9069516658782959,
"token_count": 158,
"score": 3.015625,
"int_score": 3
} |
What is Congestion in Computer Network?
Congestion is a state occurring in the network layer when the load on the network (the number of packets sent into the network) is greater than the capacity of the network (the number of packets the network can handle). It occurs when queues build up in routers and switches.
Effects of Congestion
- As delay increases, performance decreases.
- If delay increases, retransmission occurs, making the situation worse.
Congestion control refers to the techniques through which we try to avoid traffic congestion, which leads to long queues, buffer overflow and packet loss, and to ensure that the user gets the negotiated quality of service.
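As a rough illustration of this definition, the short Python sketch below (a toy model with arbitrary numbers, not a real network stack) offers a router more packets per tick than it can service; once the finite buffer fills, every further arrival is dropped:

    # Toy congestion model: load (3 packets/tick) exceeds capacity (2 packets/tick).
    BUFFER_SIZE = 10        # maximum number of packets the router can queue
    ARRIVALS_PER_TICK = 3   # load offered to the network
    SERVICED_PER_TICK = 2   # capacity of the router

    queue = []
    dropped = 0
    for tick in range(100):
        for _ in range(ARRIVALS_PER_TICK):        # packets arrive
            if len(queue) < BUFFER_SIZE:
                queue.append(tick)
            else:
                dropped += 1                      # buffer overflow: packet lost
        for _ in range(min(SERVICED_PER_TICK, len(queue))):
            queue.pop(0)                          # router forwards a packet

    print(f"final queue length: {len(queue)}, packets dropped: {dropped}")

After roughly ten ticks the queue is permanently full and one packet is lost every tick, which is exactly the buffer-overflow loss described above.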
"dump": "CC-MAIN-2023-23",
"url": "https://www.techarge.in/computer-network-congestion/",
"date": "2023-06-01T19:11:05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648000.54/warc/CC-MAIN-20230601175345-20230601205345-00127.warc.gz",
"language": "en",
"language_score": 0.9297367930412292,
"token_count": 143,
"score": 3.859375,
"int_score": 4
} |
Several years ago, a seasoned plumber wrote to the U.S. Bureau of Standards promoting a new procedure for cleaning pipes.
The bureau replied: “The efficiency of the recommended solution is completely undisputed.
However, there is an inherent incompatibility between the aforementioned solution and the basic chemical structures of the commonly used materials in current household and commercial pipeworks.”
The plumber wrote back saying, “Thanks, I really liked it, too.”
Within a few days, the Bureau responded with another letter: “Don’t use hydrochloric acid! It eats holes in pipes!”
Wouldn’t it have been so much easier – and less expensive – to put it simply the first time?
The word communication comes from the Latin communico, meaning share.
We share ideas, thoughts, information and concerns.
Communication can start friendships or make enemies.
Communication needs to be clear and understandable.
Communication requires both effective sending and receiving.
And if we don’t do it effectively, we have wasted our time.
Research psychologists tell us that the average one-year-old child has a three-word vocabulary.
At age two, most children have a working knowledge of 272 words.
A year later, that number more than triples.
At age six, the average child has command of 2,562 words.
As adults, our word accumulation continues to grow but the effective use of them does not necessarily follow.
We can speak up to 18,000 words each day, but that doesn’t mean those messages are clear or correctly received.
In fact, words can often obscure our messages instead of clarifying them.
No one can succeed in business, or in life, for that matter, without developing good communication skills.
The most basic yet crucial leadership skill is communication.
It’s important to continue to evaluate your performance in these fundamental areas:
Speaking. Good verbal skills are essential. You have to be able to explain your requests and instructions, your ideas, and your strategies to people inside and outside your organization. Look for opportunities to hone your speaking skills at conferences, in meetings and among friends.
Listening. Pay attention to the people around you. Repeat and paraphrase what they say to make sure you understand – and to show that you take their opinions seriously.
Writing. The paper trail you leave tells people a lot about how clearly you think and express yourself. Don’t send even the simplest email without rereading it critically to be sure it says just what you want.
Leading meetings. You should encourage other people to share their ideas without letting discussions meander aimlessly. Sharpen your ability to keep meetings on track and elicit productive comments. Remember that every meeting should begin with a solid agenda and conclude with a commitment for action.
Resolving conflict. Conflict can be subtle, but you still must defuse it if you want things to get done. You’ll use a lot of the skills already discussed to encourage people to open up and clear the air about their disagreements.
Persuasion. The right words can stimulate agreements, offer alternate points of view, provoke thoughtful consideration and bring people around to your way of thinking. This is an especially critical skill for sales people, which is all of us in one capacity or another.
Perhaps the most helpful advice came from Peter Drucker, the late management guru, who said, “The most important thing in communication is to hear what isn’t being said.”
Beware of misinterpreting simple messages because of your perception of the sender’s meaning or intent.
Here’s an eye-opening fact: the 500 most common words in the English language have more than 14,000 definitions.
That explains a lot of why verbal interactions often create confusion and misunderstanding.
Two people meet at an art exhibition. “What is your line of work?” asked the woman.
“I’m an artist,” came the reply.
“I’ve never met a real live artist before,” said the woman. “This is so exciting! I’ve always wanted my portrait painted. Could you do that?”
“That’s my specialty!” the artist said.
“Wonderful!” she said. “I just have one request. I want the painting done in the nude.”
The artist hesitated for a minute and then said, “I’ll have to get back to you.”
A few days later the artist called the potential customer to discuss the plan. “I’m willing to do the painting as you requested,” the artist said, “but I have one stipulation. I want to leave my socks on. I need somewhere to put my paint brushes.”
Mackay’s Moral: It is wiser to choose what you say than say what you choose. | <urn:uuid:97aa9cbb-6cc2-4fba-9d11-dd0bbed34e30> | {
"dump": "CC-MAIN-2021-17",
"url": "http://i2p.com.au/communication-need-not-be-complicated/",
"date": "2021-04-15T02:16:19",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00524.warc.gz",
"language": "en",
"language_score": 0.9510602951049805,
"token_count": 1050,
"score": 3.015625,
"int_score": 3
} |
Popular Science Monthly/Volume 64/February 1904/The Geographical Distribution of Meteorites
THE GEOGRAPHICAL DISTRIBUTION OF METEORITES.
FIELD COLUMBIAN MUSEUM.
SPEAKING broadly, we know as yet of no fundamental reason why meteorite falls should be any more numerous upon one part of the earth's surface than upon another.
Compared with the vast area of space in which meteorites wander, our earth is but a point, which draws into itself from time to time one of these masses. Moreover, it is a rotating and wabbling point, ever presenting new surfaces to the portions of space in which it is traveling. The marksman who displays his skill by shooting glass balls thrown into the air would have the difficulty of his task enormously increased if he should endeavor to strike successively the same point upon the ball, especially if it had in addition to its forward motion one of rapid rotation about a wabbling axis. It is true that there is some prospect of our being able after much study and comparison of data to locate a few meteorite swarms with sufficient accuracy to warrant a conclusion as to what point upon the earth stones from them will strike, but this possibility seems at present quite remote. At present we can only presume that a gentle rain of meteorites has fallen regularly and impartially upon the earth since the morning stars first sang together.
The latest and best calculations, which are by Professor Berwerth, of Vienna, have shown that the number of meteorites actually falling upon the earth at the present time each year, not including of course shooting stars or meteors, is about nine hundred. Two or three of these bodies fall, then, somewhere upon the earth every twenty-four hours. But about three fourths of the earth's surface is covered with water, and the missiles impinging upon this area are lost. Upon the remaining one fourth, however, 225 falls should take place, accompanied by phenomena such as to make the occurrence noteworthy, A large part of the land is, however, unpopulated and our figure of 225 may, therefore, be cut in half in order to take account of this factor. Again, falls taking place in the night would, in many cases, not be observed, and as a last concession we may halve our figure on this account. It would finally seem then that about 55 meteorite falls capable of record might be expected to take place each year, and in a century the total should be 5,500. As a matter of fact, the total number of recorded meteorite falls, including some from as far back as the fifteenth century, is only about 350.
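Berwerth's estimate, as reproduced in this paragraph, can be followed as plain arithmetic:

$$900 \times \tfrac{1}{4} \approx 225, \qquad 225 \div 2 \approx 112, \qquad 112 \div 2 \approx 56, \qquad 56 \times 100 \approx 5{,}500$$

(falls striking land; falls in populated regions; falls occurring where they can be seen; expected recorded falls per century, matching the figures of about 55 a year and 5,500 a century given above).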
The first conclusion one is likely to draw from results so contradictory is that the original premise is entirely at fault. Yet within the small area of France 50 well-authenticated meteorite falls have taken place within the last one hundred years. We have no reason to suppose France especially favored of the gods in regard to the number of meteorites which it receives and, as it covers only about one one-thousandth part of the earth's surface, we shall find by reversing the calculations made above that our original figure of 900 a year is fully substantiated. The difficulty will be somewhat explained by a glance at the accompanying map. Tracing upon this the locations of known meteorite falls, we see at once that they are largely confined to the civilized nations, or, with the exception of the Semites of Africa and Arabia, to regions inhabited by the Caucasian race. Of a total of 634 known meteorites, 256 are located in Europe and 177 in the United States. In other words, more than two thirds of the whole number known belong to countries which occupy but about one eighth of the land surface.
We reach then the rather curious conclusion that the ability to observe and record meteorite falls is a mark of civilization, and that the relative civilization of regions equally populated may be judged by the numbers of meteorites known from each. The superiority of civilized peoples in this regard comes probably not so much from their greater ability to observe the fall of a meteorite as from their better facilities for recording such an occurrence and for preserving the stone which has fallen. To an unorganized community, the fall of a meteorite is an isolated occurrence, impressive enough at the time, but so infrequent that in the absence of records or means of communication with other communities, it is lost sight of. Civilized communities with their means of records and museums are able to correlate such occurrences, and in time accumulate important knowledge regarding them. So upon the accompanying map there are depicted not only the places where meteorites have fallen, but the isolation of China, the bleakness of Canada, the impenetrability of South America, the hollowness of Australia and the darkness of Africa. Meteorites known from uncivilized countries should for the most part be credited to travelers from civilized nations.
It would be quite superficial, however, to suppose that the distribution of Caucasian peoples is the only important factor affecting the location of known meteorite falls. There are evidences that other factors, the nature of which can hardly be even suggested as yet, affect the place of fall of meteorites. Thus, there appears upon the accompanying map a tendency of these bodies to flock toward mountainous regions. This is indicated by the large numbers of them occurring in India near the Himalayas, in Europe in the vicinity of the Alps, in the United States about the southern Appalachians, and in the Americas up and down the great western mountain range. It is possible that investigation will show that greater gravitational force is exerted at these points, and that thus the number of meteorites drawn in is there increased, or, again, mountains may present actual mechanical obstacles which stop and accumulate meteorites. Whether either of these hypotheses has any foundation in fact, however, is not known as yet. There are again remarkable differences in the kinds of meteorites found in the two hemispheres. Thus, taking falls and finds together, of the 256 meteorites known from the western hemisphere, 182 are irons and only 74 stones; while from the eastern hemisphere, of 378 known, 299 are stones and only 79 are irons. Professor Berwerth has sought to account for the excess of irons in the new world by the suggestion that the dry air of the desert areas which abound in this hemisphere has preserved meteorites fallen in long distant periods, while those of a similar age in the other hemisphere have been exposed to a moist climate and have for the most part been decomposed. It is true that many of the iron meteorites known from the western hemisphere occur upon the Mexican and Chilean deserts, but quite as many come from the southern Appalachians, where a comparatively moist climate prevails. There are also numerous desert areas in the old world perhaps as fully explored as those of the new, so that on the whole the above explanation seems inadequate.
Other remarkable groupings of meteorites with regard to their geographical distribution may be noted when areas smaller than hemispheres are compared. Thus of a total of nine meteorites belonging to the peculiar class called howardites, five have fallen in Russia. Of the nine meteorites known belonging to the still more remarkable class of carbonaceous meteorites, three have fallen in France and two in Russia.
Again small areas of equal extent and equally well populated vary curiously in their number of meteorite falls. Within the state of Illinois, for instance, no meteorite is known ever to have fallen, while in the state of Iowa, which has about the same area, but a smaller population, four falls have been noted, and from the state of Kansas, which has a larger area than Illinois, but a smaller and less uniformly distributed population, twelve meteorites are known.
It is usual to dismiss inquiries regarding the meaning of such groupings with the remark that they are mere coincidences. But it is the mission of science to investigate coincidences, and however long the task may be of determining the laws which bring about the particular occurrences here referred to, there can be no doubt that they are the result of law and of law which will some day be discerned by the human mind. | <urn:uuid:62378c9c-0a4a-460f-9f56-4c2dfb666050> | {
"dump": "CC-MAIN-2020-50",
"url": "https://en.wikisource.org/wiki/Popular_Science_Monthly/Volume_64/February_1904/The_Geographical_Distribution_of_Meteorites",
"date": "2020-12-05T00:31:07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141745780.85/warc/CC-MAIN-20201204223450-20201205013450-00646.warc.gz",
"language": "en",
"language_score": 0.9662606716156006,
"token_count": 1673,
"score": 3.796875,
"int_score": 4
} |
A recent study has confirmed a genetic link between gut disorders and Alzheimer’s disease. There have been many previous studies that have linked the two, but this is one of the most complete ones to date.
Signs of Gut Disorders
There are a number of different gut disorders that have been linked to Alzheimer’s disease. These include:
Irritable bowel syndrome: This disorder is characterized by abdominal pain, bloating, and diarrhea or constipation. IBS has been linked to an increased risk of Alzheimer’s disease.
Inflammatory bowel disease: This chronic inflammatory condition can affect the intestines, causing symptoms like abdominal pain, diarrhea, and weight loss. IBD has also been linked to an increased risk of Alzheimer’s disease.
Celiac disease: This autoimmune disorder causes damage to the small intestine when gluten is consumed. Celiac disease has been linked to an increased risk of Alzheimer’s disease.
If you are experiencing any of these gut disorders, it is important to talk to your doctor about your risks for AD.
What is Alzheimer’s Disease?
Alzheimer’s disease is a degenerative brain disorder that leads to memory loss and cognitive decline. It is the most common form of dementia, accounting for 60-80% of all cases. Alzheimer’s disease is characterized by the buildup of amyloid plaques and neurofibrillary tangles in the brain. These plaques and tangles cause damage to neurons and lead to cell death.
There is no known cure for this disease, but there are treatments available that can help slow its progression. Early diagnosis and treatment are important, as they can improve quality of life and extend life expectancy.
The exact cause of Alzheimer’s is unknown, but several risk factors have been identified. Age is the greatest risk factor, as most cases occur in people over the age of 65. Family history also plays a role, as those with a first-degree relative (parent or sibling) with Alzheimer’s are more likely to develop the condition themselves. Other risk factors include head injury, hypertension, diabetes, and obesity.
What Role Does the Gut Microbiome Play in Health and Disease?
The human gut microbiome is composed of trillions of bacteria that play a crucial role in many aspects of our health, from digesting our food to regulating our immune system. Disruptions to the gut microbiome have been linked to a wide variety of diseases, including AD.
Alzheimer’s disease is a degenerative brain disorder that leads to memory loss and cognitive decline. While the exact cause of Alzheimer’s is still unknown, research suggests that the gut microbiome may play a role in its development. Studies have shown that individuals with Alzheimer’s tend to have different types of bacteria in their gut than healthy individuals. People with AD commonly have higher levels of inflammation throughout the body, and the gut microbiome is known to contribute to inflammation as well.
While more research is needed to fully understand the link between the gut microbiome and Alzheimer’s disease, the current evidence suggests that maintaining a healthy gut flora is important for overall health and may help prevent or delay the onset of Alzheimer’s disease.
How Does a GUT Disorder Lead to a Higher Risk of Developing Alzheimer’s Disease?
There are many different types of gut disorders, each with their own unique set of symptoms. However, all gut disorders have one thing in common: they can lead to an increased risk of developing Alzheimer’s disease.
Alzheimer’s disease is a degenerative brain disorder that leads to dementia. Dementia is a broad term used to describe the symptoms of cognitive decline, such as memory loss and difficulty reasoning. While there is no cure for AD, it is possible to manage the symptoms and slow the progression of the disease.
Gut disorders are often associated with inflammation, which has been linked to an increased risk of developing Alzheimer’s disease. Inflammation occurs when the body’s immune system responds to an injury or infection. When inflammation persists, it can damage healthy cells and tissues, including those in the brain. This damage can lead to cognitive decline and dementia.
There are many different gut disorders that can lead to an increased risk of developing AD. Some of the most common include inflammatory bowel diseases (such as Crohn’s disease and ulcerative colitis), celiac disease, and irritable bowel syndrome (IBS).
The Link Between Dietary Fiber and AD
A growing body of evidence suggests that there is a link between dietary fiber and Alzheimer’s disease (AD). Dietary fiber is a type of carbohydrate that cannot be digested by the human body. Instead, it passes through the digestive system relatively intact and helps to bulk up the stool.
There are two types of dietary fiber: soluble and insoluble. Soluble fiber dissolves in water and forms a gel-like substance, while insoluble fiber does not dissolve in water and remains largely unchanged as it passes through the digestive system. Both types of fiber are important for maintaining a healthy gut.
Studies have shown that people who eat a diet high in dietary fiber tend to have a lower risk of developing AD. One theory is that dietary fiber helps to reduce inflammation in the brain, which is thought to play a role in the development of AD. Another theory is that dietary fiber promotes the growth of healthy bacteria in the gut, which may help to protect the brain from damage.
There is still much research to be done in this area, but the evidence so far suggests that eating a diet rich in dietary fiber may help to prevent or delay the onset of AD. So if you’re concerned about your risk of developing AD, make sure to include plenty of high-fiber foods in your diet.
There is a growing body of evidence linking gut disorders to Alzheimer’s disease. This is concerning because gut disorders are common and often go undiagnosed. If you have a gut disorder, you should be aware of the potential link to Alzheimer’s and talk to your doctor about ways to reduce your risk. | <urn:uuid:c6f1e293-9f52-4999-a8f5-055060411656> | {
"dump": "CC-MAIN-2022-49",
"url": "https://peoriabg.com/gut-disorders-linked-to-alzheimers-disease/",
"date": "2022-12-05T22:15:02",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00852.warc.gz",
"language": "en",
"language_score": 0.9540141224861145,
"token_count": 1264,
"score": 3.140625,
"int_score": 3
} |
Discover more from African History Extra
a brief note on Madagascar's position in African history
plus, early industrialization in the Merina kingdom.
The island of Madagascar has long languished on the periphery of African historiography. The reluctance of some Africanists to look beyond the east African coast stems partly from the perception of Madagascar as insular and more 'culturally' south-Asian than African, despite such terms being modern constructs with little historical basis in Madagascar's society. Recent research on the island's history has bridged the chasm between the island and the mainland, revealing their shared political, economic and genetic history that defies simplistic constructs of colonial ethnography.
The long chain of islands extending outwards from the east African coast through the Comoros archipelago to northwestern Madagascar comprised a series of stepping stones that formed a dynamic zone of interaction between the African mainland and Madagascar. It is on these stepping stones that African settlers continuously travelled to Madagascar, establishing settlements along the northern and western coasts of the island and in parts of the interior, where they were joined by south-Asian settlers from the eastern coast to create what became the modern Malagasy society.
The north-western coast of Madagascar was part of the 'Swahili world', with its characteristic city-states, regional maritime trade, and extensive interaction with the hinterland. From these interactions emerged an economic and political alliance which drew the Malagasy and Swahili worlds closer: warring Swahili and Comorian elites recruited Malagasy allies to conduct long-distance naval attacks, Malagasy elites were integrated in Swahili society, and the movement of free and servile Malagasy into the east African coast was mirrored by a similar albeit smaller movement of both free and servile east Africans onto the island.
The evolution of states on the island and their complex interactions with their east African neighbors and the later colonial empires closely resembles that of the kingdoms on the mainland. At the onset of European imperial expansion on the east African coast, the largest power on the island was the kingdom of Merina, which controlled nearly two-thirds of the island during the reigns of King Radama (r. 1810-28) and Queen Ranavalona (r. 1828-61). Often characterized as a profoundly sage monarch, King Radama recognized the unique threats and opportunities of the European presence at his doorstep, and like Afonso of Kongo, he invited foreign innovations on his own terms and directed them to his own advantage. After the relationship between Merina and its European neighbors soured, Radama and his successors created local industries to reduce the kingdom's reliance on imported technology, and like Tewodros of Ethiopia, Radama retained foreign artisans in order to establish an armaments industry.
<Next week's substack article will explore the history of the Merina kingdom from the 16th century to the late 19th century.>
The early industry of Merina is the subject of my latest Patreon post in which I explore the kingdom's economic history during the early 19th century when the Merina state, foreign capital and local labour, converged to create one of the most remarkable examples of proto-industrialization in Africa.
read more about it here:
Thanks for reading African History Extra! Subscribe for free to receive new posts and support my work. | <urn:uuid:33a3f6aa-9b14-44b2-9ab2-db580d37041a> | {
"dump": "CC-MAIN-2023-50",
"url": "https://www.africanhistoryextra.com/p/a-brief-note-on-madagascars-position",
"date": "2023-11-30T07:36:26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100172.28/warc/CC-MAIN-20231130062948-20231130092948-00311.warc.gz",
"language": "en",
"language_score": 0.9507473707199097,
"token_count": 686,
"score": 3.734375,
"int_score": 4
} |
Atrial fibrillation, or AFib, occurs when rapid, disorganized electrical signals cause the heart’s two upper chambers, the atria, to contract very quickly and irregularly, or fibrillate. Atrial fibrillation causes blood to gather in the atria because it is not pumped completely into the ventricles, the lower chambers of the heart.
What is atrial fibrillation?
Atrial fibrillation, also known as A-fib or AF, is the most common condition associated with an abnormal rate or rhythm of the heartbeat, or arrhythmia. More than 2.3 million people in the United States are affected by atrial fibrillation, and more than 160,000 new cases are diagnosed every year. Untreated, atrial fibrillation can raise the risk of stroke more than five-fold and has also been shown to double the risk of death.
The UF Health Cardiovascular Center in Jacksonville is uniquely qualified to diagnose and treat atrial fibrillation with access to the latest research and technologies that help control the condition as quickly as possible, so patients can get back to a full, healthy life.
Atrial fibrillation: Types
There are three main types of atrial fibrillation:
- Paroxysmal atrial fibrillation - alternates back and forth between normal and abnormal rhythms
- Permanent atrial fibrillation - abnormal rhythms present all of the time
- Persistent atrial fibrillation - abnormal rhythms last longer than one week or require a small delivery of electrical energy or medications to reset the heart back to a normal rhythm (sinus)
Atrial fibrillation: Risk factors
There are a number of risk factors associated with atrial fibrillation. Patients with one or more coexisting conditions, such as high blood pressure, diabetes, obesity, sleep apnea, heart failure or other underlying heart disease, have a much higher chance of developing atrial fibrillation than the general population.
Atrial fibrillation: Symptoms
Some people who have atrial fibrillation don’t experience any symptoms. Those who do have symptoms may experience one or more of the following:
- Chest pain
- Heart palpitations
- Irregular heartbeat
- Lightheadedness or dizziness
- Shortness of breath
- Trouble exercising
- Weakness or fatigue
Some patients who experience atrial fibrillation over a long period of time may become used to feeling fatigued and not notice symptoms.
Atrial fibrillation: Diagnosis
When you come to the UF Health Cardiovascular Center, our heart team experts will discuss your symptoms and make an assessment. If our experts suspect that you may have atrial fibrillation, we may suggest one or more of the following tests:
- A blood test can help rule out a thyroid problem or other substances in the blood that could be causing the atrial fibrillation.
- A chest X-ray can help determine if a patient has another condition in the lungs that could be causing symptoms.
- An echocardiogram provides video images of the heart in motion so a doctor can determine if a patient has underlying structural heart disease.
- An electrocardiogram (ECG) is the primary method for diagnosing atrial fibrillation. It measures the activity of the heart.
- A Holter monitor assesses the heart’s activity over a longer period of time (usually at least 24 hours) to give doctors a fuller picture of the heart’s rhythms.
Once our heart team experts have diagnosed your condition, they will work with you to develop an individualized treatment plan.
Atrial fibrillation: Treatment
If you have atrial fibrillation, symptoms may come and go over time, or may be persistent. A cardiologist will discuss the symptoms with you to determine proper treatment based on your unique needs. Treatment will depend on:
- How long you have had atrial fibrillation
- Risk factors that may be present for stroke
- Severity of symptoms
- The underlying cause of the atrial fibrillation
In general, the goals of treatment of this condition are to reset the rhythm of the heart, control the heart rate and prevent blood clots. Our heart team experts tailor a specific therapy on an individual basis, offering many advanced and state-of-the-art treatment options.
The majority of patients with atrial fibrillation need some type of anticoagulant (“blood thinner”) to lower their risk of stroke. Patients who are older and have coexisting cardiac or vascular conditions are at a higher risk. Our heart team experts will be able to accurately determine the level of your risk and prescribe an appropriate therapy for you.
Additional state-of-the-art therapies include:
- Direct-current cardioversion (DCCV) uses electric shock to momentarily stop — and then restart — the heart’s activity. This shock is delivered via paddles or patches on the chest while patients are sedated.
- Drug-induced cardioversion uses medications known as antiarrhythmics to help correct a heartbeat. These medications may be delivered orally (by mouth) or intravenously (by vein).
- Convergent ablation, also known as a hybrid procedure, begins with a cardiothoracic surgeon making a small incision in the chest to deliver heat (radiofrequency energy) followed by a cardiologist performing a standard radiofrequency catheter ablation to alter heart tissue, stopping the arrhythmia. This procedure is best used for patients who have persistent atrial fibrillation.
- Cryothermal balloon ablation uses a cold temperature balloon energy to alter heart tissue, thus stopping the arrhythmia.
- Minimally invasive surgical ablation, also known as a mini-maze procedure, uses small incisions in the chest to reach the heart with an ablation device to alter the heart tissue.
- Left atrial appendage closure introduces a closure device through a large vein in the leg and delivers it to the heart to close the left atrial appendage, which can help reduce the risk of stroke and eliminate the need for long-term blood thinning anticoagulant medications.
- Open surgical (maze) procedure that requires opening the chest cavity to reach the heart and making a number of incisions on the left and right atria to form scar tissue. The scar tissue does not conduct electricity, which interrupts the abnormal rhythms.
- Pacemaker (PPM) implantation and AV nodal ablation places a small device in the chest or abdomen to help control heart rhythms with low-energy electrical pulses after a catheter is passed into the blood vessels and radiofrequency energy is delivered to the AV node, stopping abnormal impulses from transferring from the top of the heart to the lower chambers.
- Radiofrequency catheter ablation inserts a catheter in the blood vessels to reach the heart and uses a high-frequency radio pulse to generate heat, stopping the arrhythmia.
Why choose UF Health Jacksonville?
The UF Health Cardiovascular Center includes internationally recognized heart experts who are leaders in cardiac care, research and education. These accomplished physicians have access to the most sophisticated equipment available and are able to provide our patients with state-of-the-art diagnostic, therapeutic and rehabilitative cardiac services. Working together with cardiothoracic surgeons, our team offers a variety of medications and treatment options that can increase quality of life and minimize risk of stroke.
To better meet your needs, the Center also offers our Comprehensive Atrial Fibrillation Program, which individualizes treatment for each patient based on the type of atrial fibrillation. Other comprehensive heart programs, include a coronary interventional program, nuclear program, electrophysiology program, noninvasive program and peripheral interventional program.
UF Health Jacksonville is renowned for treating patients with complex diseases and being on the forefront of advancing the science of interventional cardiology. Many leading-edge interventional therapies offered in Northeast Florida are only available at the UF Health Cardiovascular Center – Jacksonville. | <urn:uuid:ffa81806-4f2f-4422-9757-3cd825fa4258> | {
"dump": "CC-MAIN-2023-40",
"url": "https://ufhealthjax.org/conditions-and-treatments/atrial-fibrillation-afib",
"date": "2023-10-03T11:36:57",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511075.63/warc/CC-MAIN-20231003092549-20231003122549-00862.warc.gz",
"language": "en",
"language_score": 0.903568685054779,
"token_count": 1660,
"score": 3.6875,
"int_score": 4
} |
Many classes of 18-30 nt small non-coding RNAs (sRNAs) can be characterized based on their functions in gene regulation and epigenetic control in plants, animals and fungi [1, 2].
Identification of the complete set of miRNAs and other small regulatory RNAs in organisms is essential with regard to our understanding of genome organization, genome biology, and evolution. There are three important classes of endogenous small RNAs in plants, animals and fungi: micro RNAs (miRNAs), short interfering RNAs (siRNAs) and piwi-interacting RNAs (piRNAs). In plants, there are no known piRNAs.
MicroRNAs (miRNAs) are small 18-24 nucleotide regulatory RNAs that play very important roles in post-transcriptional gene regulation by directing degradation of mRNAs or facilitating repression of targeted gene translation [4, 5]. While siRNAs are processed from longer double-stranded RNA molecules and represent both strands of the RNA, miRNAs originate from hairpin precursors formed from one RNA strand [6, 7]. The hairpin precursors (pre-miRNA) are typically around ~60-70 bp in animals, but somewhat larger, ~90-140 bp, in plants. In plants, the miRNA gene is first transcribed into a pri-miRNA by RNA polymerase II. The pri-miRNAs are cleaved into miRNA precursors (pre-miRNA), which form a characteristic hairpin structure, catalyzed by a Dicer-like enzyme (DCL1) [7, 8]. The pre-miRNA is further cleaved into a miRNA duplex (miRNA:miRNA*), a short double-stranded RNA (dsRNA). The dsRNA is then exported to the cytoplasm by exportin-5. With the help of AGO1, the single-stranded mature miRNA forms an RNA-protein complex, named the RNA-induced silencing complex (RISC), which negatively regulates gene expression by inhibiting gene translation or degrading mRNAs through perfect or near-perfect complementarity to target mRNAs [10, 11].
Although some soybean miRNAs were previously identified, the number was small and, therefore, the identification of all soybean miRNAs is far from complete. The aim of this study is to expand the collection of miRNAs expressed in soybean by using a deep sequencing approach with the Illumina Solexa platform. Towards this, we generated Solexa cDNA sequencing data for root, nodule and flower tissues, since they are all soybean organs relevant to various studies in legume biology and due to their impact on soybean yield. One of the legume-specific traits is the symbiosis existing between the legume root and soil bacteria, leading to the nodule. We think the small RNA content of soybean nodules needs to be established, since research in other legume species showed a role for small RNA in nodule development [13, 14]. Root tissue is another important organ to analyze due to its role in nutrient and water absorption, which is clearly important to soybean yield. Finally, we selected flower for its direct impact on soybean seed yield. We constructed small RNA libraries prepared from these soybean tissues, and each library was sequenced individually, generating a total of over one million sequences per library. We developed a bioinformatics pipeline using in-house developed scripts and other publicly available RNA structure prediction tools to differentiate the authentic mature miRNA sequences from other small RNAs and short RNA fragments represented in the sequencing data. We also conducted a detailed analysis of predicted miRNA target genes and correlated the miRNA expression data to that of the corresponding target genes using Solexa cDNA sequencing data.
"dump": "CC-MAIN-2016-07",
"url": "http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-S1-S14",
"date": "2016-02-10T01:03:57",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158481.37/warc/CC-MAIN-20160205193918-00114-ip-10-236-182-209.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9348394870758057,
"token_count": 785,
"score": 3.265625,
"int_score": 3
} |
Relative dating includes methods that rely on the analysis of comparative data or the context (e.g., geological, regional, cultural) in which the object one wishes to date is found.
Correlation of fossil inclusions is a principle of stratigraphy: that strata may be correlated based on the sequence and uniqueness of their floral and faunal content.
SYNONYMS OR RELATED TERMS: chronology CATEGORY: technique DEFINITION: The process by which an archaeologist determines dates for objects, deposits, buildings, etc., in an attempt to situate a given phenomenon in time.
Palynology (pollen analysis): indestructible pollen grains, preserved in bogs and lake sediments, have allowed pollen experts to construct detailed sequences of past vegetation and climate, and can yield environmental evidence as far back as 3 million years ago.
Dendrochronology: based on the observation that the annual growth rings of a few tree species vary in width according to differences in seasonal growing conditions; it is also a successful means of calibrating or correcting radiocarbon dates.
An artifact dates with its deposit and can be no later (no more recent) than the deposit itself; this allows one to date a field site by dating an artifact because of association. Nitrogen, fluorine, uranium and collagen content are gradually altered by processes of chemical decay; such measures are very variable, depending on the site's chemical content as well.
"dump": "CC-MAIN-2020-34",
"url": "https://pushkin-history.ru/relative-and-chronometric-dating-1177.html",
"date": "2020-08-11T03:17:03",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738727.76/warc/CC-MAIN-20200811025355-20200811055355-00181.warc.gz",
"language": "en",
"language_score": 0.924058198928833,
"token_count": 267,
"score": 3.28125,
"int_score": 3
} |
The Achievement Gap in Reading: Complex Causes, Persistent Issues, Possible Solutions (Paperback / softback)
In this volume prominent scholars, experts in their respective fields and highly skilled in the research they conduct, address educational and reading research from varied perspectives and address what it will take to close the achievement gap, with specific attention to reading.
The achievement gap is redefined as a level at which all groups can compete economically in our society and have the literacy tools and habits needed for a good life.
Bringing valuable theoretical frameworks and in-depth analytical approaches to interpretation of data, the contributors examine factors that contribute to student achievement inside the school but which are also heavily influenced by out-of-school factors, such as poverty and economics, ethnicity and culture, family and community stratifications, and approaches to measurement of achievement.
These out-of-school factors present possibilities for new policies and practice.
The overarching theme is that achievement gaps in reading are complex and that multiple perspectives are necessary to address the problem.
The breadth and depth of perspectives and content in this volume and its conceptualization of the achievement gap are a significant contribution to the field.
- Format: Paperback / softback
- Pages: 232 pages, 6 Tables, black and white; 4 Illustrations, black and white
- Publisher: Taylor & Francis Ltd
- Publication Date: 11/04/2017
- Category: Moral & social purpose of education
- ISBN: 9781138018792 | <urn:uuid:e0026e79-4516-4944-81ec-e739c7f0bba6> | {
"dump": "CC-MAIN-2019-04",
"url": "https://www.hive.co.uk/Product/Rosalind-The-University-of-Texas-San-Antonio-USA-Horowitz/The-Achievement-Gap-in-Reading--Complex-Causes-Persistent/19910700",
"date": "2019-01-19T01:51:26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583661083.46/warc/CC-MAIN-20190119014031-20190119040031-00385.warc.gz",
"language": "en",
"language_score": 0.9135617017745972,
"token_count": 300,
"score": 3.09375,
"int_score": 3
} |
Year 3 have been looking at SDG 14 (Life below water) in their art lessons. We learnt about the effect of overfishing and plastic pollution on our oceans.
We designed our fish, thinking about the colours and types of plastic we would need to collect.
We worked in groups to cut and layer the plastic on top of our designs. We looked for different textures like bubble wrap and netting for scales. To make our fish stronger, we added small pieces of harder plastic from milk bottles, plastic bottles and food packaging. When our designs were full with recycled plastic, we melted it together using an iron. | <urn:uuid:c8bd94d2-5d10-44fa-825a-3c5e4a1932c9> | {
"dump": "CC-MAIN-2020-45",
"url": "https://www.rathfern.lewisham.sch.uk/2019/11/plastic-fish/",
"date": "2020-10-19T21:37:01",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107866404.1/warc/CC-MAIN-20201019203523-20201019233523-00714.warc.gz",
"language": "en",
"language_score": 0.954646110534668,
"token_count": 125,
"score": 3.703125,
"int_score": 4
} |
Summary: "The Yellow Wallpaper," which takes place in the late 1800s, is set on an isolated country estate where the narrator and her husband, John, a physician, are living for the summer. The narrator has recently given birth and has come down with a "nervous condition" for which she has been prescribed rest.
Literary Analysis of The Yellow Wallpaper: “The Yellow Wallpaper”, a woman’s societal position. “The Yellow Wallpaper”, written by Charlotte Perkins Gilman, is a first-person narration of madness experienced by an unnamed woman in the Victorian era.
The Yellow Wallpaper study guide contains a biography of Charlotte Perkins Gilman, literature essays, a complete e-text, quiz questions, major themes, characters, and a full summary and analysis. The Yellow Wallpaper by Charlotte Perkins Gilman is one example of a feminist social criticism from the late 1800’s. In this short story, the female protagonist is prohibited from doing what she wants to do and instead is forced by her husband to rest alone in a room to cure her of her postnatal depression, thus ironically becoming more ill. Chapter summaries are available for every chapter, including a chapter summary chart, to help you understand the book.
The Yellow Wallpaper Analysis. In “The Yellow Wallpaper,” which was first published in 1892, Gilman extends the rigid gender roles of the time period to create an uncanny horror story. Before starting The Yellow Wallpaper summary, let’s discuss the author, Charlotte Perkins Gilman, who was born on July 3, 1860 and died on August 17, 1935. The Yellow Wallpaper is written as a series of diary entries from the perspective of a woman who is suffering from post-partum depression. The narrator begins by describing the large, ornate home that she and her husband, John, have rented for the summer. John is an extremely practical man, a physician, and their move into the country is partially motivated by his desire to give his suffering wife complete rest.
Literary Fiction, Gothic or Horror Fiction. When "The Yellow Wallpaper" first came out, the public didn’t quite understand the message. The piece was treated as a horror story, kind of like the 19th Century equivalent to The Exorcist. Nowadays, however, we understand "The Yellow Wallpaper" as an early feminist work.
A Critical Analysis of 'The Yellow Wallpaper' by Charlotte Perkins Gilman: Charlotte Perkins Gilman was a famous social worker and a leading author of women’s issues.
Key facts:
- Title: “The Yellow Wallpaper”
- Author: Charlotte Perkins Gilman
- Type of work: Short story
- Genre: Gothic horror tale; character study; socio-political allegory
- Language: English
- Time and place written: 1892, California
- Date of first publication: May, 1892
- Publisher: The New England Magazine
- Narrator: A mentally troubled young woman, possibly named Jane
- Point of view: The main character’s
We’ll go over The Yellow Wallpaper summary, themes and symbols, The Yellow Wallpaper analysis, and some important information about the author. "The Yellow Wallpaper" Summary: "The Yellow Wallpaper" details the deterioration of a woman's mental health while she is on a "rest cure" on a rented summer country estate with her family.
Like Kate Chopin's "The Story of an Hour," Charlotte Perkins Gilman's "The Yellow Wallpaper" is a mainstay of feminist literary study.First published in 1892, the story takes the form of secret journal entries written by a woman who is supposed to be recovering from what her husband, a physician, calls a nervous condition. The Yellow Wallpaper Summary “The Yellow Wallpaper” is a short story by Charlotte Perkins Gilman that describes the narrator’s depression following the birth of her child. | <urn:uuid:6b289d8b-b164-4cd3-986c-94d2109f7fd6> | {
"dump": "CC-MAIN-2020-45",
"url": "https://jennieforehand.com/the-yellow-wallpaper-summary-analysis/",
"date": "2020-10-19T23:31:47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00302.warc.gz",
"language": "en",
"language_score": 0.9219840168952942,
"token_count": 1160,
"score": 3.25,
"int_score": 3
} |
Run Faster By Focusing on an Object in the Distance
Researchers have found a way to make running and walking seem less long and tiresome. People who narrow their attention and focus on a specific object in the distance can motivate themselves to push on.
Those of us who've made fitness part of our New Year's resolution may find our focus waning. The winter ritual of hopping on a treadmill and just running is boring; even having a go around the block can be daunting exercise, regardless of the cold. But Olga Khazan of The Atlantic writes that researchers have found a way to make the task seem less long and tiresome, claiming people who narrow their attention and focus on a specific object in the distance (keeping your eye on the prize, as the old adage goes) can motivate themselves to push on.
One of the study's co-authors, Emily Balcetis an Assistant Professor of Psychology at New York University, explained in a press release:
“People are less interested in exercise if physical activity seems daunting, which can happen when distances to be walked appear quite long. These findings indicate that narrowly focusing visual attention on a specific target, like a building a few blocks ahead, rather than looking around your surroundings, makes that distance appear shorter, helps you walk faster, and also makes exercising seem easier.”
The findings, published in the journal Motivation and Emotion, were based on two studies. The first involved 66 adult participants that were taken to a New York City park in the summer and asked to walk. From the starting line, an open cooler with cold beverages stood just 12 feet away. The participants were split into two groups. One was asked to focus on the cooler as they walked, while the other was told to walk naturally.
Researchers then asked participants to estimate the distance between the cooler and the starting line. Those who were asked to focus on the cooler perceived the distance as shorter than the other group.
In the second experiment, researchers took 73 participants to a gymnasium and timed them as they walked 20 feet while wearing ankle weights, adding 15 percent to their body weight. Similar to the first experiment, one group was told to focus on a point in the distance (a cone), while the other group was told to look around at their surroundings as well as the cone.
Participants in the focused group perceived the cone to be 28 percent closer and walked 23 percent faster than the other group. What's more, the focused group found the exercise less physically exhausting than the other group.
The researchers weren't sure what caused the focused participants' skewed perception and faster speeds. However, they offered a suggestion:
"When people see goals as within reach, it may mobilize action, producing bursts of energy that result in quicker walking times and an experience of ease."
Read more at The Atlantic
Photo Credit: Tuncay/Flickr
"dump": "CC-MAIN-2019-04",
"url": "https://bigthink.com/ideafeed/run-faster-by-focusing-on-an-object-in-the-distance",
"date": "2019-01-23T11:13:51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584331733.89/warc/CC-MAIN-20190123105843-20190123131843-00396.warc.gz",
"language": "en",
"language_score": 0.9694090485572815,
"token_count": 916,
"score": 3.25,
"int_score": 3
} |
Division of labor to maximize fitness is a phenomenon seen in several insects of the order Hymenoptera, including bees, ants and wasps. This extraordinary system entails a single queen responsible solely for reproduction, with the rest responsible for all other tasks: building the nest, protecting against predators, and gathering food. The queen ensures authority over reproduction by releasing pheromones that render the worker females sterile, a fascinating fact that has been known for a long time.
The true identity and nature of these pheromones, however, remained largely elusive until the recent publication of a new study in Science. Through their work, Oystaeyen et al. (2014) have identified for the first time a conserved class of chemicals that are used to induce sterility in worker populations of bumblebees, desert ants and wasps. The study started with a simple question: is a conserved compound responsible for worker sterility in multiple eusocial species? The suspect was long-chain hydrocarbons, shown previously to act as pheromones in one species of ants.
To answer this question, Oystaeyen et al. used gas chromatography mass spectrometry to identify various chemicals on the cuticles of queens and workers and used deductive reasoning to identify chemicals unique to queens. Synthetic versions of these were used to test the chemicals' effects on worker ovary development in the absence of the queen. In addition, previous data from 64 other eusocial insects was synthesized. The conclusion: in three species (the bumblebee, the desert ant and the common wasp), saturated hydrocarbons are indeed responsible for worker sterility.
This finding sheds light on the evolution of eusocial species. It turns out these hydrocarbons are also used to attract mates in some species, and thus it is proposed that their primary objective was in fact to attract a mate and not to suppress worker sexual development. It will be interesting to look out for studies that identify more of these chemicals used for sexual selection in these species.
Oystaeyen et al. (2014) Conserved class of queen pheromones stops social insect workers from reproducing. Science. 343 (6168): 287-290. | <urn:uuid:04f4acd2-a67c-47e1-9635-ca2de703f030> | {
"dump": "CC-MAIN-2017-51",
"url": "https://sapeckagrawal.wordpress.com/2014/02/05/scent-of-a-hymenopteron/",
"date": "2017-12-16T18:39:13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588420.68/warc/CC-MAIN-20171216181940-20171216203940-00465.warc.gz",
"language": "en",
"language_score": 0.9351644515991211,
"token_count": 455,
"score": 3.265625,
"int_score": 3
} |
By Laurie Jedamus
Collections Management and Preservation is working on a fascinating project for the John R. Borchert Map Library: We’re cleaning and preparing for display a collection of 36 copper plates that were used to print United States Geological Survey (USGS) maps. The plates in the collection will be on display in an exhibit at Elmer L. Andersen Library from February 13 through May 27.
Each plate is made of solid copper, measures 17 inches by 21 inches, and weighs 14 pounds! All are covered with a variety of substances that include decades-old ink, grease pencil, and general corrosion. We’re removing all these substances to keep them from further damaging the plates, and also to make the plates more user-friendly for an upcoming exhibit.
The USGS owned thousands of these plates, and decommissioned them decades ago when newer technologies came in. When they recently decided not to continue storing them, they made them available to public and nonprofit institutions, including the Borchert Map Library. Plates that remained were then auctioned off to the general public — you can occasionally still find one available online, but most have found homes.
Conservation challenges for copper plates
Our plates were used from the 1880s to the 1950s to print maps of different areas in Minnesota. Lines were engraved into each plate, with separate plates for topographic features, water features, and man-made features for a particular area of Minnesota. Images from the three different types of plates were used for each map, using a different color of ink for each type of information.
Since our expertise is primarily with paper-based materials, it was challenging to find the best way to clean the plates. Our background research included books about the intaglio printing process and conversations with local fine arts printers. They were useful for general background and for advice on the best way to store the plates, but of limited help with advice on cleaning off decades-old ink, since printing plates are usually meticulously cleaned immediately after each use.
Our main resource was conservators. Tom Braun, the objects curator at the Minnesota History Center, has been amazingly helpful. He’s given us advice throughout the project, and has suggested cleaning agents, techniques, and tools to use and avoid. We also posted on Conservation OnLine’s Conservation DistList, a weekly online publication for conservators and people in related fields, which provided some very interesting and helpful suggestions. We even got advice from a conservator at the Rijksmuseum in Amsterdam, who had been working on a similar project involving copper plates used for etchings.
We learned that many common cleaning processes and materials should be avoided because of their potential for damaging the plates. We ruled out using mild acids (like lemon juice), as well as abrasive scrubbers that clean quickly but can also scratch (copper is surprisingly soft, about 2.5 on the Mohs scale, about the same as gold.). Other substances like ammonia that clean well, but have unwanted side effects like changing the color of the copper to an odd pinkish color, were also ruled out.
Our basic process, arrived at after advice from the conservators supplemented by considerable experimentation, relies on multiple cleaning agents ranging from Dawn dish soap to Vaseline, materials including homemade scrubbing blocks made from bookboard scraps and paper towels (surprisingly, they work better than more expensive options), and especially, a lot of elbow grease.
As the plates are cleaned, we are housing them in custom–made enclosures. When it comes time to exhibit them, since the very fine engraved lines are difficult to see once the plates are cleaned, the plates will have very finely powdered charcoal buffed into the lines — an easily reversible way to make them more visible. | <urn:uuid:42822500-5eaf-4863-95f5-8ee12f94948f> | {
"dump": "CC-MAIN-2018-17",
"url": "https://www.continuum.umn.edu/2016/12/restoring-usgs-copper-plates/",
"date": "2018-04-21T02:02:16",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944851.23/warc/CC-MAIN-20180421012725-20180421032725-00301.warc.gz",
"language": "en",
"language_score": 0.9635588526725769,
"token_count": 768,
"score": 3.21875,
"int_score": 3
} |
This article studies how social distancing impacts the spread of the coronavirus and, in turn, the number of hospital beds needed. The study is based on varying the basic reproduction number Ro and simulating its impact on the spread of the virus using a simple epidemic model called SIR.
Please note that this article is primarily to illustrate an example of using machine learning algorithms for prediction with the limited data that is publicly available and the opinions of this article should not be interpreted as professional advice.
In this article, I will briefly explain what the basic reproduction number is and give a brief overview of the SIR model without going into the mathematics behind it. I will showcase the results of the modelling I did in the R programming language to simulate the impact of social distancing using various Ro values.
Exponential disease spread
In this example shown below, 8 people have come into contact with one infected person. Two of them have been infected. Those 2 infect more people and this process continues exponentially.
The obvious solution is to reduce the number of people the infected person contacts. One important metric that measures the number of people an infected person can infect is the reproduction number, which is discussed in the next section.
Basic reproduction number (Ro)
The basic reproduction number Ro estimates the speed at which a disease is capable of spreading in a population. It is the average number of people an infected person infects. This is an important number to understand, so let's discuss it a little bit more below.
This number Ro is pronounced “R naught.” It’s a mathematical term that indicates how contagious an infectious disease is. Ro tells you the average number of people who will catch a disease from one contagious person. If a disease has an Ro of 6, an infected person will transmit the disease to an average of 6 other people, as long as no one has been vaccinated against it or is already immune to it in their community. Swine flu, or the H1N1 virus, from 2009 had an Ro value of ~1.5; the impact was limited because of vaccines and antiviral drugs. In the case of the coronavirus, the Ro value is estimated to be between 1.5 and 3.5, with 1.5 to 2.5 applying when good social distancing is practiced.
Our goal is to use these Ro values to simulate the impact of social distancing. A very good social distancing program can be thought of as having an Ro value close to 1. An Ro of 2 means an infected person will transmit the disease to 2 other people. If the Ro value is greater than one, the infection rate is greater than the recovery rate, and thus the infection will grow throughout the population. A social distancing program that leaves Ro above 2 will have trouble containing the spread and can potentially overwhelm the population.
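To make this concrete, here is a short Python sketch (a hypothetical illustration, not code from the study, which was done in R) that simply compounds infections over transmission generations; the single seed case and the ten generations are arbitrary assumptions.

# Hypothetical illustration: cumulative cases after a number of
# transmission "generations" for different Ro values, starting from a
# single case in a fully susceptible population with no interventions.
def cumulative_cases(r0, generations, seed=1):
    total, current = seed, seed
    for _ in range(generations):
        current *= r0      # each current case infects Ro new people
        total += current
    return round(total)

for r0 in (1.25, 1.5, 2.0, 3.5):
    print(f"Ro = {r0}: ~{cumulative_cases(r0, 10):,} cases after 10 generations")

Even the gap between Ro = 1.25 and Ro = 1.5 compounds into a severalfold difference in cases, which is why the simulations below treat social distancing as a change in Ro.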
Predictive model used for the simulation
Epidemic models are compartment-based models that divide the population into separate groups in order to track how the disease spreads from one group to another. One of the simplest compartmental models is called SIR, which divides the population into 3 groups: Susceptible, Infected and Recovered. One can read more about the mathematics behind this model here.
The two important parameters for this study are
1. Transmission rate
Each infected person can contact few or many people a day, depending on whether social distancing is practiced. An infected person meets some people and infects a fraction of them. Say one infected person meets 6 people a day and has a 15% probability of infecting each of them. That means the transmission rate (β) is 6 * 0.15 = 0.9 persons per day.
2. Rate of recovery (Ɣ)
The rate of recovery is the fraction of infected people who recover per unit of time. In this case the infection lasts about 5 days, so gamma is 1/5: each day, one-fifth of the currently infected population recovers.
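As a rough sketch of how such a simulation can be set up (the article's model was written in R and its exact code is not shown, so the daily time step and the particular Ro scenario below are assumptions; the population, initial case count and 5-day infectious period follow the article):

# Minimal discrete-time SIR sketch. Population, initial cases and the
# ~5-day infectious period follow the article; the daily Euler step
# and the chosen Ro scenario are assumptions for illustration.
N     = 1_000_000    # county/state population
I0    = 5            # confirmed cases around March 10th
gamma = 1 / 5        # recovery rate: ~1/5 of infected recover per day
Ro    = 1.5          # one of the scenarios considered in the article
beta  = Ro * gamma   # transmission rate, since Ro = beta / gamma

S, I, R = N - I0, I0, 0
infected_by_day = []
for day in range(365):
    new_infections = beta * S * I / N   # susceptible -> infected
    new_recoveries = gamma * I          # infected -> recovered
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    infected_by_day.append(I)

peak = max(infected_by_day)
print(f"Peak of ~{peak:,.0f} concurrent infections on day {infected_by_day.index(peak)}")

With these illustrative parameters, the sketch peaks at roughly the same scale as the Ro = 1.5 scenario reported below.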
How many additional hospital beds are needed in a state/county to tackle the surge
I used a model built in the R programming language to do this study. I initially tried to do this simulation using data from Fairfax County in Virginia, but since not all the data is publicly available, I used some of the stats from Virginia and the USA to fill in the numbers needed for the study. Let's say this county/state has a population of 1 million and announced 5 cases around March 10th. Please note that the model treats all individuals as the same, when in reality older populations with more chronic conditions seem to have higher risk.
A brief summary of the key general stats as of 04/25/20. These are some of the publicly available stats we have used in our calculations. Sources for these stats are from WHO and CDC.
Input parameters used for the simulation
The next section discusses the predictive model output built using the R programming language with the above parameters.
Impact of social distancing or lack of it on peak cases
The impact of social distancing is measured using Ro values; good social distancing essentially means the Ro value is very low. The following graph shows the impact of various Ro values ranging from 1.25 to 2.0. The y-axis shows the number of people (%) and the x-axis shows the number of days. The lower the Ro value, the lower the number of people infected and the further out the peak, giving the county ample time to prepare for it. If we start relaxing social distancing, you will see behavior that is closer to Ro = 2.0 (or possibly above). This is why social distancing, testing, tracing and quarantine are extremely important.
For Ro = 1.25, assuming we continue to practice good social distancing and other quarantine methods, you can see 21,500 cases around 197 days. This comes to around 7 months from March 10th, which would be the end of September / early October. On the other hand, for an Ro of 1.5, the peak is 63,000 cases at 112 days: the peak increases and the time to peak decreases, cutting short the time to prepare.
Predicting hospital bed capacity for different Ro values
The following table summarizes the results of the modelling for the 3 different Ro values. Peak value indicates the max number of cases one would find and Days is days since the simulation started (March 10, 2020).
For this simulation we used a bed availability of 2.1 beds per 1,000 people (from the stats above) and hospitalization rates of 10% and 16% (from the stats above) of the people who tested positive. Taking those into consideration, the needed beds are calculated below.
For 16% hospitalization rate
For Ro = 1.25, the SIR model built using R predicts peak cases will happen after 197 days, with the approximate peak date of Sept 23 with hospital beds needed at 3,440 for 16% hospitalization. Assuming 2.1 beds per 1000 people, this county would experience a shortage of 1,340.
Similarly, for Ro = 1.5, the model predicts peak cases will happen after 112 days, with an approximate peak date of June 30th and 10,080 hospital beds needed. Assuming 2.1 beds per 1,000 people, this county would experience a shortage of 7,980 beds.
For 10% hospitalization rate
If we instead assume a more conservative estimate of 10% of the people who tested positive needing hospitalization, the corresponding hospital bed capacities needed are as follows.
For Ro = 1.25, the SIR model built using R predicts essentially no shortage (roughly 2,150 beds needed against the ~2,100 available). For Ro = 1.5, it predicts a shortage of 4,200 beds.
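The bed arithmetic behind both tables can be reproduced in a few lines; the sketch below assumes the peak-case figures from the simulations above and that hospitalizations peak at the same time as cases.

# Reproducing the bed-shortage arithmetic. The peak-case numbers are
# the simulation outputs quoted above; 2.1 beds per 1,000 people and
# the 10%/16% hospitalization rates come from the cited stats.
population     = 1_000_000
available_beds = population / 1000 * 2.1          # ~2,100 beds

peak_cases = {1.25: 21_500, 1.5: 63_000}          # Ro -> predicted peak
for hosp_rate in (0.10, 0.16):
    for r0, peak in peak_cases.items():
        needed   = peak * hosp_rate
        shortage = max(0.0, needed - available_beds)
        print(f"Ro = {r0}, {hosp_rate:.0%} hospitalized: "
              f"{needed:,.0f} beds needed, shortage of {shortage:,.0f}")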
Using a SIR model, we predicted the extent of the spread for a county with a population of 1 million that had 5 confirmed cases as of March 10th, under various reproduction number (Ro) values. Key conclusions include:
- The lower the Ro value (i.e., the stricter the social distancing), the lower the peak number of cases and the later the peak arrives, buying the county time to prepare.
- At Ro = 1.25, the peak of roughly 21,500 cases arrives around day 197, and a bed shortage appears only at the higher (16%) hospitalization rate.
- At Ro = 1.5 or above, the peak is both larger and earlier, producing shortages of several thousand beds under either hospitalization rate.
"dump": "CC-MAIN-2021-49",
"url": "https://hackernoon.com/using-reproduction-number-ro-to-study-the-impact-of-social-distancing-on-hospital-beds-required-b440324p",
"date": "2021-11-26T23:34:43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358074.14/warc/CC-MAIN-20211126224056-20211127014056-00343.warc.gz",
"language": "en",
"language_score": 0.9484021067619324,
"token_count": 1608,
"score": 3.21875,
"int_score": 3
} |
Yemen, one of the Arab world's poorest countries, has been devastated by a civil war. Aid agencies have warned that millions of people face hunger, as the country experiences some of the worst violence since the conflict began eight years ago. Air strikes continue in the country, and the UN has condemned 'double tap' attacks which come in rapid succession and target rescuers coming to the aid of the initial victims. Please subscribe here: bit.ly/1rbfUog #Yemen #BBCNews
Self-knowledge is a term used in psychology to describe the information that an individual draws upon when finding an answer to the question "What am I like?".
While seeking to develop the answer to this question, self-knowledge requires ongoing self-awareness and self-consciousness (which is not to be confused with consciousness). Young infants and chimpanzees display some of the traits of self-awareness and agency/contingency, yet they are not considered as also having self-consciousness. At some greater level of cognition, however, a self-conscious component emerges in addition to an increased self-awareness component, and then it becomes possible to ask "What am I like?", and to answer with self-knowledge, though self-knowledge has limits, as introspection has been said to be limited and complex.
Self-knowledge is a component of the self or, more accurately, the self-concept. It is the knowledge of oneself and one's properties and the desire to seek such knowledge that guide the development of the self-concept, even if that concept is flawed. Self-knowledge informs us of our mental representations of ourselves, which contain attributes that we uniquely pair with ourselves, and theories on whether these attributes are stable or dynamic, to the best that we can evaluate ourselves.
The self-concept is thought to have three primary aspects:
- The cognitive self
- The affective self
- The executive self
The affective and executive selves are also known as the felt and active selves respectively, as they refer to the emotional and behavioral components of the self-concept. Self-knowledge is linked to the cognitive self in that its motives guide our search to gain greater clarity and assurance that our own self-concept is an accurate representation of our true self; for this reason the cognitive self is also referred to as the known self. The cognitive self is made up of everything we know (or think we know) about ourselves. This includes physiological properties such as hair color, race, and height, and psychological properties like beliefs, values, and dislikes, to name but a few.
It resembles the story of Syria. I remember Syrians joining the militia opposition against the government, and people hated al-Assad; then Assad bombed more, causing people to backtrack 😔💔
An important message for all Muslims in the world to share: youtube.com/shorts/LtZHdL43YOY
British-backed terrorist state of Saudi Arabia. The reporting, and the lack of it, is stomach-churning
God loves you so much that He sent His Holy Son Jesus from heaven to earth, to be born of a virgin, to grow up and die on a cross for our sins, and to be put into a tomb 3 days and rise from the dead the third day, and He (Jesus) went back up to heaven. We must receive Sinless Jesus sincerely to be God's child(John 1:12).After we get saved by grace through faith in Christ, if we truly love the Lord Jesus Christ, then we will obey Jesus(John 14:15). Mark 1:15 "And saying, the time is fulfilled, and the kingdom of God is at hand: Repent ye, and believe the gospel." Jesus said in John 14:15 "If you love Me, keep My commandments. "There's a real hell. It says in Revelation 21:8 "But for the cowardly, & unbelieving, and abominable, and murderers, and immoral persons and sorcerers and idolaters and all liars, their part will be in the lake that burns with fire and brimstone..." Please sincerely receive Holy Jesus and put your true faith and trust in Him today and please repent. Will you have a Real encounter with Holy Lord Jesus and stay in a Genuine relationship with Him daily please?
I love this girl's mouth with the first sentence
It is sad.
It's sad Yemen gets very little notice, while Ukraine gets the world's attention, even NATO's attention. America and NATO nations are sick. They will ignore the thousands slaughtered and focus on Ukraine? FUK that. FUK Ukraine.
This war will never end until and unless one side eradicates the other completely, or both sides come together and start accepting each other as friends and fellow citizens; only then will there be peace. But from the looks of this war, it seems far from being over anytime soon. God bless you all, and I pray that Yemen finds peace very soon.
Locusts, rivers running with water like blood, plagues, droughts, floods, strange omens, earthquakes... are signs of the end of the world
The Antichrist is the Pope
The end of the world is coming at 3 p.m.
The UK and US are supporting genocide in Yemen
Cover the cholera outbreak: 2.5 million affected and dying in Yemen
Yemenis are brave 🇾🇪❤️
The boy whose father was killed said he would fight back. So what is he going to do, kill someone else's father? It's a chain and it's hard to break. Retaliation is the worst thing when it comes to these types of wars. They never end
The West doesn't give a shit about Yemen because they don't have blue eyes
Yemen are denied supremacy! Another example of Iraq whom are left behind. Rather retaliate of being a peasant. Bow to a Kingdom. Yemen are not even close to being a real Arabian Knights instead rinse a Kings feet. Useless to be a rebel instead maximize a mind to build a better civilization like a real Kingdom. You all look like a peasant without a crown. Being a ruin stays a ruin including the people in it include all the rest of the countries who follow that status. Being the first civilization who was first on this planet nothing has changed yet amongst all of you. How come? I raised my hands cos I know. God created Adam and Eve that included the people who are amongst just people like animal who are suppose to take care of its land rather kill your brothers instead to maximize its civilization to be beautiful wonderful Powerful is i.👑🦁🌹🖕🏻🖕🏿🖕🏻
the UK is selling the weapons being used in this conflict. It is also providing military expertise.
So Russia is preventing investigation of war crimes here, too? Another example that the current regime needs to go! | <urn:uuid:37173899-3b54-420c-93ef-0d6838d3fb68> | {
"dump": "CC-MAIN-2022-33",
"url": "http://lenta.ru.com/air-strikes-target-rescuers-in-yemen-bbc-news-xl-xf0gDYGTEE8QOxMmn4t-vi.html",
"date": "2022-08-14T18:35:36",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00178.warc.gz",
"language": "en",
"language_score": 0.9508311748504639,
"token_count": 1532,
"score": 3.296875,
"int_score": 3
} |
A SQL synonym is an alias for a table or a schema object in a database. Synonyms are used to protect client applications from changes made to the name or location of an object.
Synonyms permit applications to function irrespective of which user owns the table and which database holds the table or object.
The CREATE SYNONYM statement is used to create a synonym for a table, view, package, procedure, or other object.
Suppose there is a table Customer in the efashion schema, located on Server1. To access it from Server2, a client application would have to use the name Server1.efashion.Customer. If we change the location of the Customer table, the client application would have to be modified to reflect the change.
To address these we can create a synonym of Customer table Cust_Table on Server2 for the table on Server1. So now client application has to use the single-part name Cust_Table to reference this table. Now, if the location of this table changes, you will have to modify the synonym to point to the new location of the table.
As there is no ALTER SYNONYM statement, you have to drop the synonym Cust_Table and then re-create the synonym with the same name and point the synonym to the new location of Customer table.
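A minimal sketch of that drop-and-recreate sequence (NewServer is a placeholder for the table's new location, not a name from the original example):
DROP SYNONYM Cust_Table;
CREATE SYNONYM Cust_Table FOR NewServer.efashion.Customer;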
Public synonyms are owned by the PUBLIC schema in a database and can be referenced by all users in the database. They are created by the application owner for tables and other objects, such as procedures and packages, so that all users of the application can see those objects.
CREATE PUBLIC SYNONYM Cust_table for efashion.Customer;
To create a public synonym, you have to use the keyword PUBLIC, as shown above.
Private Synonyms are used in a database schema to hide the true name of a table, procedure, view or any other database object.
Private synonyms can be referenced only by the schema that owns the table or object.
CREATE SYNONYM Cust_table FOR efashion.Customer;
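Once the synonym exists, client code can reference it like any table; a hypothetical query:
SELECT * FROM Cust_table;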
Drop a Synonym
Synonyms can be dropped using the DROP SYNONYM command. If you are dropping a public synonym, you have to include the keyword PUBLIC in the drop statement.
DROP PUBLIC SYNONYM Cust_table;
DROP SYNONYM Cust_table; | <urn:uuid:68e6196e-7ca6-46f7-84f6-90c2574cfd23> | {
"dump": "CC-MAIN-2019-04",
"url": "http://scriptndump.com/sap-hana-sql-synonym/",
"date": "2019-01-23T09:14:51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584328678.85/warc/CC-MAIN-20190123085337-20190123111337-00051.warc.gz",
"language": "en",
"language_score": 0.8928062319755554,
"token_count": 476,
"score": 3.171875,
"int_score": 3
} |
Introduction: Unheated Movable Seed Starting Greenhouse
For people wanting to grow their own food crops in temperate countries, a seed-starting greenhouse gives vegetables a head start in the cold months and thus extends the cropping season and increases harvests.
- To allow the seed-starting greenhouse to be used ultra-efficiently. It is designed as a movable greenhouse that germinates the seeds of one food-crop plot at a time, staying in place just long enough for the seedlings to gain some size. When the seedlings can resist the outside environment, the greenhouse is quickly moved to another plot. Ideally the crops on the plots are all of different species (or cultivars), so that the greenhouse does not need to germinate all the seeds/sowbeds at once. If too many plots still need to be germinated at the same time, try using several of these (low-cost) greenhouses, and/or apply other options (e.g., accepting a slight decrease in the possible harvest).
- To reduce hassle: the greenhouse is compact and can be taken along.
- To decrease costs: given the materials used, it may cost nothing at all (if all parts can be salvaged from the home), or at least very little (parts can be gathered cheaply from stores, scrapyards, etc.).
Step 1: Materials
- 1 large, sturdy piece of glass, sized to the box (at least more or less)
- 1 sturdy wooden box
- 2 large woodscrews
- 4 rubber O-rings (sized to the thickness of the large screws)
- 6 small woodscrews
- 1 small piece of wood, sized to the thickness of the glass. In our set-up we used a paint stirring stick for this; these are usually about the right thickness
- 8 additional woodscrews and 4 rotatable wheels (optional, for reusing the box bottoms)
Step 2: Box Bottom Removal
Remove the box's bottom; a regular screwdriver will do for softwood, but an electric one may be needed for hardwood boxes (as used in the example). Also, we were a bit lazy and have weak forearms. :-) The box bottoms may simply be discarded (burnt for heat, composted, ...) or may be reused for moving potted plants (sturdy bottoms are then required). How this is done is shown in Step 5.
Step 3: Fitting of Screws and Wood-pieces
Fit one side of the box with the 2 large screws (outfitted with the rubber rings). The glass will rest on these rubbers. Only one side of the box needs to be outfitted with rubbers, as the glass will be slightly tilted to one side and will thus rest on only one side of the box.
Also mount the pieces of wood (cut the single large piece into 4 smaller pieces, so that 2 pieces hold the glass in position and 2 secure it). The pieces are fitted to the box using the 6 small woodscrews. The glass slides in between the first 2 wooden pieces (this provides the most protection); the glass-securing pieces are only meant for when the greenhouse is moved. Make sure the last 2 pieces can be turned (the screws must not block the turning!).
Step 4: Mounting of the Glass + Tilting Towards Sun
Place the glass on the bottomless box, making sure the small pieces hold it firmly in place. Tilt the box somewhat toward the prevailing sun side (the south in northern temperate countries). In our set-up we don't cut the sides of the box; instead we create a slope of dirt in the soil to tilt the greenhouse. However, if you want to, you may cut the sides so that the box is always tilted, even when placed on flat ground. Depending on where the box is placed (shade or full sun), the type of glass used, the types of seed to be grown, the watering frequency, the soil type, and other factors, more or less tilt may be applied. Generally, a little tilt is more than plenty (a small tilt also decreases the chance of the glass accidentally falling off).
Step 5: Reuse of Box Bottoms
Depending on the box used, the 2 bottoms may be reused for moving potted plants. For this, the 4 optional small rotatable wheels are attached to the bottoms using the 8 additional woodscrews.
Step 6: Enjoy
- watching the seeds germinate within your greenhouse during the cold months
- the all round amazing benefits of pot moving using wheeled platforms ! | <urn:uuid:743ef6c6-3b7b-40b6-a3ec-61941a1de005> | {
"dump": "CC-MAIN-2017-30",
"url": "http://www.instructables.com/id/Unheated-movable-seed-starting-greenhouse/",
"date": "2017-07-27T05:18:22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427429.9/warc/CC-MAIN-20170727042127-20170727062127-00524.warc.gz",
"language": "en",
"language_score": 0.9189573526382446,
"token_count": 1005,
"score": 3.09375,
"int_score": 3
} |
The geological wealth of Psiloritis, its intense geomorphology, and the variety of its rocks are responsible for an incredible variety of animals that survive in the microclimatic conditions of each area. Hundreds of birds find places for nesting, hunting, resting after exhausting migrations, or hiding and mating. One of the few remaining populations of the Cretan wildcat lives here; according to researchers, it is regarded as a ghost of an animal.
In the heart of Psiloritis, hundreds of tiny animals (beetles, snails, centipedes, isopods) have lived for thousands of years and continue to evolve silently (and blindly).
The isolation of the island makes this rocky massif one of the most important "hot spots" of biodiversity and endemism in Greece, and in the last five years this has led to its integration into the NATURA network.
Psiloritis is home to the "kokkalas," or lammergeier, one of the biggest and most spectacular raptors in Europe. The Cretan population of this raptor is probably the last viable one in the Balkans, since the use of poisons and development has driven the bird to extinction on the Greek mainland. Crete seems to be its last shelter.
The carrion vultures of Psiloritis leave visitors to the mountains speechless with their enormous mass flights. They form large colonies and nest on steep rock ledges and "lofts" that always face the winds, which they use to ascend to higher altitudes in search of dead animals.
Most of the nests on Mount Idi (Psiloritis) are located near the villages of Amari and Pano Riza, while the birds use the smooth northern slopes of Mylopotamos for hunting food.
A population of raptors completes the ornithological tour of Psiloritis: war eagles, lannerets, Bonelli's eagles, haggards, and common kestrels.
In the area of Psiloritis we are likely to come across all three Cretan amphibians (the green toad, the Cretan tree frog, and the Cretan water frog) and all the Cretan reptiles (snippets, lizards, and the island's four types of snakes). A unique and sad absence from Idi is the Cretan wild goat, a species that was eliminated from the mountain due to the prevalence of poachers during the previous century.
Many caves and precipices of the area host large colonies of protected chiroptera (bats): the caves of Erfoi in lowland Mylopotamos (hosting several hundred individuals), the cave of Kamilaris at Tylisos (with at least four species in large populations), the Chonos cave of Sarchos at Krousonas (five species), the cave of Kamares, and many others.
Among the invertebrates, snails, isopods, and several families of ground-living beetles include endemic forms found exclusively in the mountain area of Idi (Psiloritis).
Seventeen snail species of Mylopotamos are Cretan endemics. Recent studies of biodiversity "hot spots" based on the invertebrate fauna of southern Greece rank the mountainous area of Psiloritis as the second most important site in southern Greece. | <urn:uuid:d419f437-a638-4085-9938-81efa3b98f77> | {
"dump": "CC-MAIN-2020-05",
"url": "https://www.psiloritisgeopark.gr/en/the-park/biodiversity/fauna/",
"date": "2020-01-19T16:39:49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00001.warc.gz",
"language": "en",
"language_score": 0.9403901100158691,
"token_count": 738,
"score": 3.53125,
"int_score": 4
} |
Monoprotic acid: One acidic proton (HCl)
HCl(aq) → H+(aq) + Cl-(aq)
Diprotic acid: Two acidic protons (H2SO4)
H2SO4(aq) → H+(aq) + HSO4-(aq)
HSO4-(aq) ⇌ H+(aq) + SO4^2-(aq)
Oxyacids: Acidic proton is attached to an oxygen atom (H2SO4)
Organic acids: Acids with a carbon-atom backbone; they contain the carboxyl group (-COOH). Examples: CH3-COOH (acetic acid), C6H5-COOH (benzoic acid).
A substance is said to be amphoteric if it can behave either as an acid or as a base. Water is amphoteric (it can behave either as an acid or a base).
H2O + H2O ⇌ H3O+ + OH-
acid 1 base 2 acid 2 base 1
Kw = [H3O+][OH-] = [H+][OH-] = 1.0 x 10^-14 at 25°C
Where, Kw is the ion-product constant or dissociation constant for water.
[H+] = [OH-] = 1.0 x 10^-7 M at 25°C in pure water.
Figure 14.7 Two Water Molecules React to Form H3O+ and OH-
The pH scale provides a convenient way to represent solution acidity. The pH is a log scale based on 10.
In water, pH typically ranges from 0 to 14; the pH decreases as [H+] increases.
Kw = 1.00 x 10^-14 = [H+][OH-]
pKw = -log Kw = 14.00 = pH + pOH
As pH rises, pOH falls (sum = 14.00).
pOH = -log [OH-]
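A quick worked example (added here for illustration; the numbers are not from the original notes): for a solution with [H+] = 1.0 x 10^-3 M,
pH = -log(1.0 x 10^-3) = 3.00
pOH = 14.00 - 3.00 = 11.00
[OH-] = 1.0 x 10^-11 M, which checks out since (1.0 x 10^-3)(1.0 x 10^-11) = 1.0 x 10^-14 = Kw.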
Figure 14.8 The pH Scale and pH Values of Some Common Substances
Calculate the pH of 1.0 M HCl.
Since HCl is a strong acid, the major species in solution are H+, Cl-, and H2O.
To calculate the pH we focus on the major species that can furnish H+. The acid completely dissociates in water, producing H+; water also furnishes H+ by autoionization through the equilibrium
H2O(l) ⇌ H+(aq) + OH-(aq)
In pure water at 25°C, [H+] is 1.0 x 10^-7 M, and in an acidic solution water's contribution is even smaller. So the amount of H+ contributed by water is negligible compared with the 1.0 M H+ from the dissociation of HCl.
pH = -log [H+] = -log (1.0) = 0
List major species in solution.
Choose species that can produce H+ and write reactions.
Based on K values, decide on dominant equilibrium.
Write equilibrium expression for dominant equilibrium.
List initial concentrations in dominant equilibrium. | <urn:uuid:f4d44d3d-cd7c-4b62-be49-cc6589f7c652> | {
"dump": "CC-MAIN-2020-24",
"url": "http://www.sliderbase.com/spitem-1099-2.html",
"date": "2020-05-31T23:36:29",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413786.46/warc/CC-MAIN-20200531213917-20200601003917-00570.warc.gz",
"language": "en",
"language_score": 0.8401764035224915,
"token_count": 633,
"score": 3.53125,
"int_score": 4
} |
Understanding Common Cellphone Terminology
CARP RECOMMENDED PARTNER SPONSORED CONTENT
Here’s a quick guide to understand the most common cellphone terms.
Band – For cell phones, a band refers to a specific range of radio frequencies.
Geo-tagging – Thanks to the GPS on your device, a precise location can be associated with what you do. For example, when taking a photo, geo-tagging allows your device to organize your pictures by geographic location, making it easier to manage and find your photos.
LTE – LTE, or "Long Term Evolution," is a wireless network standard used by cell phones, considered the global gold standard in wireless network technology.
MB/GB – stands for megabytes and gigabytes. When it comes to mobile data, it refers to the amount of data used or included in your plan. For internal or external memory, it refers to how much storage your device can hold.
OS – short for Operating System, the software your device runs. The most common ones are iOS, used on iPhones and iPads, and Android, used on Google, Moto or Samsung devices.
Airplane mode – (sometimes known as Flight mode) is a setting available on smartphones and other portable devices. When activated, this mode suspends the device's radio-frequency signal transmission, including Bluetooth, Wi-Fi, and voice and data services.
Unlocked/locked – refers to whether a cellphone can be used with more than one carrier. Locked means you can only use your device with a SIM card from the carrier you bought it from, whereas unlocked means you can use it with any carrier. Since December 2017, devices sold on the Canadian market must be unlocked. If your device is older, you may have to unlock it to use it with other carriers.
Smartphone – an advanced type of cellphone that can do much more than calling and messaging. A smartphone can also take better photos, connect to the internet much like your computer, and give you access to a wide variety of apps.
App – "app" is short for application, a software program you can download onto your smartphone. Different apps offer various benefits such as video calling, music streaming, GPS navigation, workout routines and much more. Some apps are free, while others cost money. Smartphones have an app store to help you find and install new apps. Some apps require an internet connection through Wi-Fi or data.
Wi-Fi – a technology that provides wireless network access to the internet, supplied by an Internet Service Provider (ISP) through an internet modem. Smartphones can use it without a physical wired connection, but Wi-Fi is only available within or around the buildings where the network is located.
Data – also called mobile internet, is provided by your cellphone service provider when you subscribe to a smartphone plan or a data plan. Data is measured in MB (megabytes) and GB (gigabytes); plans can include anything from 100 MB or 500 MB to 1 GB or more (1024 MB = 1 GB). Data is accessible anywhere you have a cellphone network connection.
Cellphone plan – a package that offers a combination of features, provided by telecom companies such as Zoomer Wireless, Fido and Rogers. A plan may, for example, consist of minutes, texts and data for a set price per month. Plans can be either prepaid or postpaid.
Prepaid plan – you pay for your phone service upfront and only pay for what you use.
Postpaid plan – you receive a bill at the end of the month and may include overages for usage beyond your plan limitations.
Porting in – if you are activating a new cellphone plan with another provider, you can keep your old cellphone number on your new plan. This process is called "porting" or transferring your phone number.
Overage – overage refers to using more than the allowance included in your cellphone plan. Going over your plan results in fees typically called overage charges. These charges vary depending on the plan you are on. For example, if your wireless plan includes 500 MB of data and at the end of the month your usage reached 650 MB, you exceeded your data allowance and will get charged for the 150 MB overage you incurred.
Roaming – Roaming refers to the ability to use your mobile device outside of your home cellphone provider’s network. For example, when travelling to the U.S., your Canadian cellphone will still be able to receive and dial voice calls, send and receive messages and access data while “roaming”. Roaming involves fees that are usually called “roaming charges”.
If your wireless plan only includes usage in Canada, these roaming charges apply every time you use your cellphone when travelling outside of the country.
Device Subsidy – some cellphone providers offer a subsidy on cellphones. These phones are either free upfront or sold at a discount in exchange for agreeing to stay on a contract with the carrier.
Contract – for cellphones, a commitment to keep your cellphone plan for a specific duration. Contract agreements are typically 2 years, during which you pay for the plan you signed up for. Cancelling your plan before the contract ends leads to early cancellation fees.
Early cancellation fee – the cost of the remaining balance of the phone you purchased. For example, if you are cancelling 14 months after the purchase of a device worth $250, you would have to pay an early cancellation fee of $104.20.
Here’s the computation: Phone cost ($250) divided by contract agreement (24 months) = $10.42 multiplied by months remaining in the contract (10 months) = $104.20 early cancellation fee.
DATA USAGE GUIDE
|Usage|Data Per Hour|
|Web Browsing|Approx. 60 MB|
|Video Calling|Approx. 85 MB|
|Podcasts|Approx. 60 MB|
| |Approx. 80 MB|
|Music Streaming|Up to 150 MB|
|YouTube|Approx. 300 MB|
Zoomer Wireless provides affordable cellphones with voice, text and data plans, as well as home phone and tablet to Zoomers across Canada.
Our friendly and Canadian customer service team is just a free phone call away should you have any questions. If you want to inquire about our cellphones or plans, Zoomer Wireless has a large selection of devices and plans for all your wireless needs. Call our dedicated live agents today at 1.888.655.1252 or visit www.zoomerwireless.ca. | <urn:uuid:5c56bdf2-6e94-4e76-bca7-79b17e274ee1> | {
"dump": "CC-MAIN-2021-43",
"url": "https://www.everythingzoomer.com/featured/sponsored-content/2021/10/04/understanding-cell-phone-terminology/",
"date": "2021-10-18T01:03:05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585186.33/warc/CC-MAIN-20211018000838-20211018030838-00169.warc.gz",
"language": "en",
"language_score": 0.9288362264633179,
"token_count": 1390,
"score": 3.078125,
"int_score": 3
} |
What is Standards-Based Grading, Why is It So Hard, and How to Tackle It!
Standards-based grading’s advent came around in 2012, two years after the Common Core State Standards initiative. Since then, many states have adopted or adapted new standards based on the original Common Core State Standards.
Even if your state isn’t a Common Core state, you may be interested in finding out just how similar your standards are to Common Core. (Do a quick Google search of your ELA standards and the Common Core ELA standards, I bet you’ll find a few similarities.)
With so many states adopting new standards about a decade ago, new methods of delivering student progress and feedback were all a buzz.
In came standards-based grading. At first, very few schools jumped on board to give their report cards a total overhaul. After all, parents received traditional letter grades when they were in school, and changing report cards would undoubtedly cause some hiccups.
What is Standards-Based Grading?
Standards-based grading dismantles the traditional report card and turns the report card into a more robust document that represents where students are performing with much more specificity.
For example, a student no longer receives just one letter grade per subject.
Instead, a student may receive several scores under the reading category. This is because teachers assess and grade each individual standard.
Letter grades, which are traditionally based on percentages, take a weighted and averaged measure of ALL student progress throughout the grading period. Thus, the student receives one letter grade per subject.
In addition, traditional grading commonly takes into account schoolwork assignments, homework assignments, quizzes, and tests. All of those grades are added to the grade book, and a culminating grade is spit out.
Standards-based grading commonly uses a rubric to measure student proficiency. Here’s an example of what an elementary standards-based grading rubric can look like.
A school will usually use a four-point system to score a student’s proficiency with each individual standard or skill.
Because most states have close to 20 reading standards, it's unlikely that each standard will be given a grade during each grading period.
Here’s an example of what one grading period’s ELA report card might look like:
Note that this student is showing that he or she is meeting grade-level expectations for comparing and contrasting points of view, but is not yet meeting grade-level expectations for theme or making connections. That said, this student is displaying that he or she exceeds grade-level expectations for describing a character, setting, or event in depth by using details from the text.
This multi-faceted approach is a key characteristic of standards-based grading and gives stakeholders and parents a broader look at how a student is doing.
Criticism of the Traditional Grading System
- Traditional grading commonly uses homework as a grade.
- Using homework as a grade can unfairly measure a student’s amount of support at home. Grades that are dependent on at-home support have been harshly criticized. It is commonly said that it is inequitable for students whose parents work and do not have time to dedicate to working with their children on schoolwork.
- Even if weighted, a student’s grade may reflect a “passing” grade, even if they’ve scored poorly on most performance-based assessments.
- If a student completes all of their homework but scores low on all assessments, they still may receive a C reflecting that they are performing at an average, expected level at that time. This can lead to students not receiving the support and intervention that is needed.
- Traditional grading does not reflect a students’ progress on the various skills and strategies covered throughout the grading period.
- For example, a student might excel at identifying the main idea and details of text but may struggle significantly with inferencing. This isn’t reflected in a traditional grade.
The Challenges of Standards-Based Grading
If your school has adopted standards-based grading, you know all too well the challenges that the first few years can bring.
- Parents lack education and understanding of standards-based grading. This learning gap can cause many calls, emails, and meetings that can be frustrating and upsetting for parents, administrators, and teachers alike.
- Parents don’t know (or understand) the difference between formative or summative assessment scores.
- Standards-based grades can be (wrongly) “translated” into traditional grading. Parents will commonly look at the highest score possible and equate it with an “A+.” The next highest score is commonly looked at as a “B.” This simply isn’t true, especially when students are receiving multiple scores within one subject like ELA.
- For many perfectionist students, they can be crushed by receiving a score that reflects that they’re meeting grade-level expectations. Commonly, the highest score achievable notes that the student is consistently performing above grade-level expectations. And when that perfectionist student sees that they’re just meeting grade-level expectations, you can expect some fallout. This can be a very harsh reality for your high achievers. 🙁
- When beginning standards-based grading, teachers usually don’t have enough assessments to measure and track student progress on various standards taught throughout that grading period.
- Many schools try to tackle too many standards within one grading period. It is almost impossible to effectively give a score for all 20 standards each grading period.
- There is a lot of grading. This comes with the territory of standards-based grading. Teachers are expected to provide formative assessment scores or feedback to students and summative assessment scores on multiple different standards.
- Teachers do not have enough resources to assess their students across multiple different standards. Most reading curriculums do not come with five different standards-based assessments per standard. Usually, unit tests contain assessment items for multiple different standards. (Not to mention, they’re usually not coded, and teachers have to figure out which question matches which standard on their own.)
Does that last bullet point resonate with you? You’re not alone!
We surveyed HUNDREDS of teachers on The Teacher Next Door Instagram, and the results were astounding! Take a look!
Almost 2/3 of teachers have NOTHING provided to them to help collect student data! Sad, but not surprising.
MOST teachers said that tracking student progress would make differentiating easier!
Make Standards-Based Grading EASIER
After seeing so many teachers struggling with the challenges of standards-based grading, I knew I wanted to help!
I developed assessments for EVERY ELA standard for grades 3, 4, and 5!
My Standards-Based Reading Assessments contain three assessments per standard, meaning that you can formatively assess, summatively assess, and reassess students as needed.
Finally, no more scouring the internet for standards-aligned assessments and certainly no more sitting at the computer for hours creating your own assessments. 🙌
Take a look at your grade level below!
Here’s why you NEED these assessments in your teacher-life:
- Consistently and quickly assess students to drive your reading instruction
- Easily form flexible and fluid small groups based on data
- Short assessments and paired passages that won’t make students cringe
- Add standards-based grades to your grade book with ease
- Regularly communicate student progress with parents and stakeholders
- Print & digital versions included for in-person, hybrid, and virtual flexibility
- Print & go or digitally assign & go – no more hours spent looking for or creating assessments
- Every standard includes 3 assessments so you can formatively assess, summatively assess, and reassess with ease
- 60 assessments total for $29.00 – that’s just 48 cents per assessment
- Develop a routine assessment system that works all year long!
⭐ Still not sure? Try out an assessment for FREE! ⭐
Interested in reading more? Check these posts out! | <urn:uuid:09e16350-6bcd-4c35-bd75-d7297b62cfae> | {
"dump": "CC-MAIN-2022-05",
"url": "https://the-teacher-next-door.com/what-is-standards-based-grading-why-is-it-so-hard-and-how-to-tackle-it/",
"date": "2022-01-23T06:13:17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304134.13/warc/CC-MAIN-20220123045449-20220123075449-00208.warc.gz",
"language": "en",
"language_score": 0.9494003653526306,
"token_count": 1691,
"score": 3.78125,
"int_score": 4
} |
To Your Health
August, 2019 (Vol. 13, Issue 08)
Obesity Is Bad for the Brain
By Editorial Staff
When we think about obesity (and we think about it a lot these days because it impacts so many people), we often think about the physical or psychological consequences. Yes, obesity is a driver of type 2 diabetes, cardiovascular disease, musculoskeletal health issues and even some cancers. And on the psychological side, it can diminish self-worth, leading to anxiety and depression. But what about the impact on the brain itself?
Research implicates obesity in diminishing brain health, specifically brain "thinning" that essentially accelerates the aging process by a decade or more and could contribute to cognitive decline. In the study, people in their 60s with higher body mass index (BMI) and waist circumference were more likely to have a thinner brain cortex six years after initial measurements.
While the cortex does thin naturally with age, the rate of thinning during the study period was significantly greater than normal, which researchers attributed to obesity factors (BMI, waist circumference) after accounting for other potential variables such as high blood pressure, alcohol consumption and smoking. The cortex is "gray matter" in the brain and is responsible for memory, speech, decision-making and sensory perception, among other functions. Study findings appear in the research journal Neurology. | <urn:uuid:0ab43da9-99ce-4389-99fb-184a53d805b0> | {
"dump": "CC-MAIN-2019-39",
"url": "https://www.toyourhealth.com/mpacms/tyh/article.php?id=2673",
"date": "2019-09-19T08:43:58",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573465.18/warc/CC-MAIN-20190919081032-20190919103032-00238.warc.gz",
"language": "en",
"language_score": 0.9460881352424622,
"token_count": 282,
"score": 3.1875,
"int_score": 3
} |
Brain sprouting happens during a critical period, and hearing loss during this period has a devastating effect on a child's development. Early identification of hearing loss is crucial to minimize this impact. Technological improvements have brought effective identification procedures; however, the challenge lies in executing efficient programs, especially in developing countries.
Keywords: Hearing Loss; Early Identification; Universal Hearing Screening; ABR & OAE.
Corresponding Author: Heramba Ganapathy Selvarajan* | <urn:uuid:ff1ce37c-759e-40f9-8c7d-2b6da162d85e> | {
"dump": "CC-MAIN-2020-29",
"url": "https://rfppl.co.in/view_abstract.php?jid=70&art_id=3454",
"date": "2020-07-08T23:13:43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897707.23/warc/CC-MAIN-20200708211828-20200709001828-00144.warc.gz",
"language": "en",
"language_score": 0.8960868120193481,
"token_count": 102,
"score": 3.109375,
"int_score": 3
} |
DUST from the Sahara desert and salt in ocean spray could breach the tough new standards for particulate pollution now being considered in the US, warn some environmental scientists.
The US Environmental Protection Agency has been forced to consider the new standards after a legal action launched by the American Lung Association (ALA). The association argued that the EPA had failed in its duty to review air quality standards every five years, as it should under the Clean Air Act. The courts decided in favour of the association, and ordered the EPA to publish its proposed changes by the end of November. After a period of public consultation, the new standards will come into force in June 1997.
The most controversial feature of the proposed changes is a new limit on emissions of PM2.5, particulate matter that is less than 2.5 micrometres in diameter, about a twentieth the diameter of a ...
| <urn:uuid:da112f6d-b1fa-4230-bef7-b7e558331bf6> | {
"dump": "CC-MAIN-2015-22",
"url": "http://www.newscientist.com/article/mg15220561.300-mother-nature-could-break-us-clean-air-law.html",
"date": "2015-05-22T13:41:59",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925201.39/warc/CC-MAIN-20150521113205-00061-ip-10-180-206-219.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9548128247261047,
"token_count": 205,
"score": 3.28125,
"int_score": 3
} |
Before the 19th century, trying to conceal a hearing device was virtually impossible. Most of the hearing instruments of the day were large and unwieldy devices that were awkward to use and definitely not easily portable. In the years following this era, inventors tried to create smaller and more efficient hearing devices that were easier to conceal. The models invented during this period were the beginnings of the popular hearing instruments that we know today.
The Acoustic Headband
If you talk about headbands today, you probably conjure up stylish bands made of all types of fabrics, metals, and plastics. In the early 19th century, though, hearing care, not fashion, was the purpose behind headbands. The first concealed hearing device, created by F.C. Rein, was called the "acoustic headband." These hearing instruments were masked behind a hat or hairstyle and were designed in various shapes, ranging from barrels to convoluted shells and fluted funnels. What was unique about these devices was their attention to detail; many were painted or adorned with lace, silk, or ribbons. Both men and women could benefit from this type of hearing aid without drawing excessive attention to themselves while out in public.
The Acoustic Fan
Women of the 1800s enjoyed carrying beautiful fans, especially in the warmer spring and summer months. Fans were not only a fashion statement but also a way to keep a woman from fainting due to the tightness of her corset and stays. These delicate fans had another purpose too: they served as hearing devices. These hearing instruments provided hearing care for anyone suffering from mild to moderate hearing loss who did not want the cumbersome look of a curving horn protruding from the ear.
For people who suffered only a small amount of hearing loss, the bone conduction fan was the ideal hearing device of the time. It used an odd-shaped design that transmitted sound as vibrations through the bones of the head and the teeth. The other type of hearing device – the air conduction fan – was placed behind the ear and made it easier for sound to be directed straight into the inner ear canal. Both of these hearing instruments were early models of what was to come in the following years.
The Acoustic Chair
This hearing device used an ordinary piece of furniture for hearing care. A tube would be placed discreetly in the back of a chair so the user could rest it comfortably in his or her ear. Used mainly in the early 19th century, some of these hearing instruments upheld the standard of concealment, while others incorporated large trumpets into the design of the chair. The overall objective of this device was to allow individuals with hearing problems to sit and listen to conversations without feeling conspicuous about their deafness.
Hearing instruments have come a long way in the past two centuries. Beginning with early inventions, such as the acoustic headband, people who suffered from hearing damage could live more normal lives than they could before. With the proper hearing device, hearing loss sufferers could go out in public and hear some of the same things that everybody else could hear. | <urn:uuid:be7d0cb5-b086-4014-a86a-1e94327c6491> | {
"dump": "CC-MAIN-2022-27",
"url": "https://unconfidentialcook.com/the-invisible-hearing-device/",
"date": "2022-06-28T21:42:09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00385.warc.gz",
"language": "en",
"language_score": 0.9854748845100403,
"token_count": 665,
"score": 3.28125,
"int_score": 3
} |