The twenty-first century begins with a number of significant bicentenary events that shaped the history of Tasmania and forever changed the Aboriginal landscape: Baudin in 2002, Bowen in 2003, and Collins in 2004.

FREDA GRAY (OAM) is a historian and president of the First Settlers 1804 Association, a member of the Manuta Tunapee Puggaluggalia Historical and Cultural Association, and a white cousin to a number of Lia Pootah Aboriginal people. Freda wrote the following article about the early settlement of Risdon Cove for the Centenary of Federation book written by Kaye McPherson (BSc Hons), "From the Dreamtime to Now", for the Centenary of Federation in 2001. In this year, the bicentenary of Bowen, it must be included on the web site hosting the events that the Lia Pootah people are holding for this bicentenary, and the forthcoming events to be held next year for the bicentenary of the arrival of Collins and the beginning of Tasmania as we know it today.

Freda Morehouse Gray

Looking at special times in history gives communities the opportunity to look back; Federation was one such event. The research undertaken on behalf of the Risdon Vale Neighbourhood House during the Centenary of Federation has given many opportunities, not only for the written history of the area to be collected, but also the memoirs of local residents. Oral history is a very important aspect of any area: the human touch. I sincerely thank members of the Neighbourhood House for their invitation to contribute to this publication.

For so long now, almost two hundred years, Tasmanians have been convinced that Lt. John Bowen returned to Sydney after the closing of the Risdon Settlement, and that the area remained undeveloped. It was a totally unsatisfactory place for a settlement, we were taught for most of this century, a view reinforced as we drive past and watch the tide ebb and flow in the now shallow Cove, inhabited only by a beautiful selection of waterfowl.
An attempt was made to draw our attention to the area by filling in the marshlands and building not only the 'pyramids' but also examples of the early cottages on the top of the rise, not far from the ruins of 'Restdown'. Excursions by school groups became a regular part of 'history lessons'. Were members of the general community any better informed? One would question this. They continued to pass by on an ever busier road, especially after the collapse of the Tasman Bridge. The old bridge, near the settlement site, was protected from the ever increasing traffic, to the great relief of those who were aware of the historical importance of the structure, while most of the general public would have been unaware of its age: little different from the bridges at Richmond and Ross. The old Inn continues to stand 'sentinel' at the approach to the site, having served many purposes over the years, well past the first century in age and fast approaching the second. A second Inn, or Tavern, now stands close to the first.

Yes, we knew there was conflict between the Aboriginal Community and the Europeans who had come and taken over their island. Yes, we knew 'muskets and cannons' were used against the 'attacking' men, women and children, with 'spears and stones', at Risdon Cove. How many were 'attacking' the gardener, and the soldiers who came to his rescue, had never been established, we were told. How many Aboriginal men, women and children lost their lives on that day has never been established either, to say nothing of those who were wounded and carried scars for the rest of their lives. The Tasmanian Aborigine was 'extinct', we were firmly convinced, though most knew there were folk of Aboriginal descent on some of the Bass Strait Islands. To the utter surprise of most Tasmanians, the Government decided to 'return' Risdon Cove to the Aboriginal Community, in respect of the 'Massacre'.
Most Tasmanians of European descent are reluctant to pass beyond the gate at Risdon Cove these days, though a notice on the gate assures all are welcome. Research undertaken for publications such as "Risdon Cove - From the Dreamtime to Now" begins to uncover not only the history of our Island State but, very importantly, that of particular areas: in this case not only Risdon Cove but Risdon Vale as well, an area few Tasmanians would have considered had any history before the building of Housing Department homes almost forty years ago.

It was not the only area little was known about. Tasmanian history generally was not considered 'important', or rather, we were convinced, was 'not relevant' to the community in general. It had been a convict settlement, we all knew, but of course no respectable Tasmanian family had a convict in their past. Hobart's history was hidden. It was 1959 when our family found our first convict. My uncle never spoke to me again. I never had the opportunity to tell him that the Aboriginal brother-in-law he lived with was a descendant of Dolly Dalrymple. Since my retirement in 1986, I have had time to follow my family back, and have found the most remarkable stories, not only of our family but of the communities in which they lived.

On the 20th February, 1974 a dinner was held to acknowledge the five hundred and eight persons who left England with Lt. Governor Collins to establish the settlement of Sullivan Cove. As a result of that Dinner, the Hobart Town (1804) First Settlers Association was formed, with more than five hundred members over the twenty-seven years, and two hundred active members today. Not only is the history of Hobart's '1804' families being 'uncovered' in great detail, but so are many other aspects of life in 'Early Tasmania'. There is also great interest in those who lived in or visited 'Van Diemen's Land' before the arrival of Lt. John Bowen and Lt. Colonel David Collins.
The bicentenary of the visit of Nicolas Baudin is fast approaching, and there is great interest from the local communities in the areas he described and the records he kept. As the Bicentenary of the settlement at Risdon Cove approaches, it is hoped that more of this area will be known, especially of those who were with Lt. John Bowen, both bond and free, and those who followed.

Both New South Wales and Tasmania were settled as 'penal stations'. For almost fifty years, to the day, convict transports sailed to Hobart Town, bringing over 74,000 prisoners to the shores of this island state: a total of 63,000 men, including boys as young as nine years, and 11,000 women, with girls as young as nine. Not all were British, not all were white, and certainly not all were 'Church of England'. Like any community today, they were a great mixture. Some did well; others did not survive long enough even to arrive. Those who became 'settlers' had mixed success, while there were those who just 'bolted for the bush'.

On the 20th February, 1954, Queen Elizabeth unveiled a Monument to the 'First Settlers of Hobart Town' in Hunter Street, Sullivan Cove. One wonders how many Tasmanian families knew they had an Ancestor with Lt. Gov. David Collins on the 20th February, 1804. By 1974, many families did know, and so attended the Dinner. My father's family was represented, that of William Richardson. It was another fourteen years before we knew that my mother also had ancestors who came with Collins: James Lord and the Nichols family. Her families were also well represented at the Dinner. It is hoped, on the 20th February, 2004, to have the names recorded on, or near to, the Monument, so acknowledging those who were at 'The Derwent' in that first year of European settlement, including those who were at Risdon Cove with Lt. John Bowen.
Not only the names of those who had authority, in one way or another, but ALL the names, including, as my father would have called them, "just ordinary" people. My father had spent almost all his 91 years in and around Hobart, a carrier for more than fifty of those years. His convict heritage did not worry him at all. He knew his grandfather, he said, the son of the convict. He knew my mother's family: "They were just ordinary too."

"Just ordinary", we may well call them, but for some there needs to be a much better description. It is becoming very difficult to identify several members of Bowen's party, 'the servants', no doubt: convicts sent to attend members of Bowen's party and to see to the menial tasks, almost 200 years ago. Especially the 'three female convicts'. Bowen listed '10 female convicts' in his first return, corrected to '3 female convicts' a few weeks later; it would appear the 'free women' had been included.

However, there was one young woman with the party whose story has been well told, and we can be certain the story of her family will continue for many years to come. The story of Martha Hayes is no doubt the best recorded of those from Bowen's settlement who remained after his departure, for yes, some did remain. As a teenager, young Martha had taken the eye of Lieutenant John Bowen on the voyage out to Port Jackson on the transport HMS Glatton. Martha was with her mother, Mary Hayes, who had been convicted of a crime and transported on the Glatton. Her father eventually gained permission to join the family, sailing on the Ocean as a 'free settler'. Although there are few personal details for those in Bowen's party, it would seem that Martha came down to Risdon Cove with Lt. Bowen in September, 1803, and was eventually joined by her parents on the return of Bowen to the settlement in March, 1804.
There are many entries for the family of Henry and Mary Hayes over the years in the diary of the Reverend Robert Knopwood, including the three daughters of Martha Hayes: two to the young Lieutenant Bowen, and a third to Andrew Whitehead, one of the Calcutta convicts who arrived in Hobart Town in February, 1804, whom Martha married in 1811. John Bowen made certain Martha and the two girls were well provided for, on a well established farm on the western side of the river, not far from the present Zinc Works, before leaving the settlement in August, 1804. There are many descendants from the two surviving daughters, and no doubt their family story will be told in detail before the Bicentenary of Risdon Cove.

To find the names of convicts who had been assigned to settlers is very difficult; to find the name of a female convict, almost impossible. One could imagine that Martha Hayes would have had the services of both a male and a female 'servant' after the departure of Lt. Bowen. As Bowen prepared for his departure from Risdon Cove, there are two names on the victualling list for transfer to the party of Lt. Governor Collins for April, 1804, though they were not victualled by Collins till the 22nd May, 1804: Mary Lawler and John Jackson. Were these the 'servants' of Martha Hayes? We may never know, as few if any papers remain for early Hobart Town.

However, there was one convict with Lt. Bowen for whom there are many reports, official as well as newspaper: Denis McCarty, or as sometimes written, 'McCarthy', an Irish rebel. He is included in the Australian Dictionary of Biography, Vol. 2, among some of Australia's leading men and women. A convicted farmer from Wexford in Ireland, he arrived in Sydney in 1800 on the Friendship, one of the many Irish Rebels transported at that time. Meehan, the Surveyor, was another, who also spent time at Risdon Cove surveying, as well as in Hobart Town.
'Transportation' certainly did not cure Denis McCarty of his 'rebellious' tendencies, and for 'disobedience' he was included in Lt. John Bowen's party to Risdon Cove, in September, 1803. What part McCarty played in the Risdon Cove settlement does not seem to have been recorded, but he must have been of some 'use' to Bowen, and considered worth leaving with Collins, for he was victualled from the 2nd June, 1804, on Collins' return for 1804. He certainly became one of the best known characters in early Van Diemen's Land, especially in the New Norfolk area, where he was one of the earliest settlers. He became Constable of New Norfolk in April, 1808. In December, 1811, he married Mary Wainwright, who had been born on Norfolk Island, daughter of the First Fleeter Hester Wainwright. No doubt he is best known for the building of the road from Hobart to New Norfolk; a large boulder with a plaque stands as a monument to the Irish exile on the bank of the river.

Farming and road building were not his only pursuits; seafaring must also be added to this list. He became owner and master of the Geordy, a comparatively small vessel, but one he took not only to the south of the island but also as far as Port Jackson. He took the brig Sophia to Macquarie Harbour, and went to Kangaroo Island on the Henrietta Packet, with Captain Feen as master. From Knopwood's diary and other reports, it seems that Denis McCarty was at Port Davey before the very well known Captain James Kelly in the whale boat Elizabeth, on behalf of the 'trader and ex-surgeon' Thomas Birch. The little vessel was owned by James Gordon Esq., another who played an important part in early Vandemonian history, including that of Risdon Cove. The story of Denis McCarty will be told many times over the next few years, as the bicentenary of Risdon Cove is acknowledged, through to the naming of New Norfolk in 2008. From the Rev. Knopwood's diary, we know that Captain James Kelly was at 'The Derwent' on the day of the 'massacre', for the Rev.
Knopwood recorded that both men heard the firing of the guns.

As we are reminded in this publication, Risdon Cove did not cease to exist after Lieutenant Bowen left; another community began to develop. William L'Anson is among those named. Lt. Colonel David Collins had taken over the responsibility of Risdon Cove after the departure of Lt. John Bowen, granting areas to settlers for farming: the original intention for the area, we are reminded. William L'Anson was granted some of the land, which he sold to T.W. Birch. By 1812, we learn, the area had been purchased by Colonel Andrew Geils, the new Commandant, who had 'trouble with his neighbour', George Guest.

'Families are like a seamless web,' the historian Lloyd Robson once said; 'they keep on going round and round.' The history of Risdon Cove certainly confirms this. William L'Anson, or I'Anson, was the senior surgeon with Lt. Governor Collins. Like so many of the Officers, his time in Van Diemen's Land was short: he died in 1812, his land going to Thomas Birch after his death, it would appear. Little is known of the Senior Surgeon. No family is recorded for him, but one could hope some time was allowed for him to go to his 'holiday house'. For it seems, as with New Norfolk, the more affluent members of Hobart's community escaped from Hobart Town to a more restful part of the island, 'Restdown' well describing Risdon Cove, very different from the penal 'Camp' at Sullivan Cove. The Royal Hobart Hospital still stands on the original hospital site, almost two hundred years later. What changes there have been since the first three surgeons arrived.

Lloyd Robson, in his 'History of Tasmania', describes Thomas William Birch as a 'merchant and trader' particularly involved with 'wheat'. Immediately the connection is made, for that was the important crop John Bowen was to plant as soon as he arrived. The wheat was doing well, we would imagine.
No doubt Birch's father-in-law, George Guest, was growing wheat on his 300 acres, granted to him on his arrival from Norfolk Island. The farmers were becoming established at Risdon, in conjunction with the holiday resort: 'Restdown', a week-end retreat for Sarah Guest Birch and her growing family. Thomas Birch and Sarah Guest were married in Hobart in September, 1808, only the twenty-third marriage for the Rev. Knopwood. The new house in Macquarie Street would have been well under way, if not already completed, when 'Restdown' became the property of Major Geils. Eventually George Guest's property became part of the estate, and James Gordon Esq., as 'agent', became responsible for 'Restdown'. Another army officer, James Gordon became a Magistrate, and there are many reports of the various cases he presided at, including the robbery at 'Restdown'.

Like William L'Anson, Major Andrew Geils, as 'interim officer in charge at the Derwent' after the death of Governor Collins, no doubt used 'Restdown' as a retreat from official duties, spending his working time at 'Government House', on the site of the present Town Hall. With the brick extensions, it would seem that 'Restdown' had become a very comfortable house indeed. Perhaps it was also very essential to the well-being of the 'interim officer in charge', for Governor Macquarie had left a long list of instructions and 'matters to be seen to' on his departure. James Meehan was again at 'The Derwent', this time to survey the rapidly growing Hobart Town. Governor Macquarie had also stayed a night at New Norfolk, or Elizabeth Town as he called it, with Denis McCarty. It is hard for us to imagine such things. With a convicted 'Irish Rebel'! Then again, Macquarie had great respect for Meehan as a surveyor. The earliest years at the Derwent must have been a fascinating social scene.

Risdon Cove was anything but deserted, and the story is an ongoing tale for a century and a half. Major Geils had returned to Sydney, and the property was to be 'let'.
There may have been a great deal of trouble with the 'tenants', but it seems the property was producing well, not only wheat, with orchards becoming well established. Kent is not a name listed in the early musters, unless it was Thomas Kenton, an ex-convict, who was the early tenant. By 1820 Alfred Thrupp and his wife Sarah were living at 'Restdown'. As would be expected in the still small community of Van Diemen's Land, the Thrupps would have been well known to many of their neighbours, especially Sarah Birch and her father, for Sarah Thrupp was the daughter of Captain John Piper, Commandant on Norfolk Island for some time. Her sister, also born on Norfolk Island, was the wife of David Gibson, another who had come with Governor Collins, very successful in the north of the island.

There are many sayings in our society which describe many situations, from the earliest community at Risdon through to the present day. "Life was not meant to be easy." It certainly was not for those responsible for 'Restdown', as it had become known. "Times don't change, only the players." The Thrupps were no more acceptable as tenants than Thomas Kent had been, it would seem, especially after the robbery. Word that Alfred Thrupp had sold some bullocks at a very worthwhile price was passed around the district, and a group of young men decided to rob him of his takings, with Joseph Potaski as leader. Joseph was the brother of Catherine Potaski, the baby born aboard the 'Ocean' as she lay at anchor at Risdon Cove while Governor Collins and some of the other officers decided on the suitability of the site for the settlement. John Potaski, or more correctly Ivan Potaski, as the family now know, was not a Russian-speaking Pole but had been an officer in the Russian Army. He became another of the 308 'Calcutta Convicts', and was very useful as an interpreter when Russian ships were visiting.
There is a very detailed report of the robbery given by those who attended the trial, as a result of which four of the five young men involved were hanged, including Joseph Potaski. Alfred Thrupp was away at the time of the robbery, and the young men must have left few personal possessions belonging to Sarah and the children. It is fascinating to read the list of goods taken, for there were many items of clothing for all the family, including young children. There were also 'papers', which were taken out of Potaski's 'breast', looked over and put into the fire: that 'documentary evidence' so important today. Geils' papers, relevant to Guest's grants, it would seem. Some of the participants in the robbery were from the 'road maker's hut' near the Hollow Tree (Cambridge) at Break Neck Hill, very close to the 'Horse Shoe Inn', though the old steep road is seldom used today. It seems there was another group of young men with the same intentions as young Potaski and his 'mates'. Perhaps they were the 'fortunate ones', for they escaped at least with their lives.

From the reports, it would seem that John Potaski was leasing some of the property belonging to Geils, though still living on his original property at Kangaroo Point, now Warrane: the original grant of 30 acres, as the 1819 land and stock muster shows. While Alfred Thrupp was 'Agent for Geils' in 1819, no one was living on the property, which consisted of 1815 acres: 80 acres in wheat, 23 acres in barley, 3 acres in beans, 4 acres in potatoes, and 1705 acres in pasture, with 800 sheep. Very surprising indeed. Alfred, Sarah and three children, with seven government male servants (convicts) and one female convict servant, were living on the 300 acre property granted to Alfred Thrupp at Clarence Plains, with 300 cattle pastured. It is not surprising Major Geils decided to sell, and the property was put in the hands of George Cartwright, Solicitor.
Another very interesting person in early Hobart, his family still able to be contacted today. Did the action Geils took against his agent include Alfred Thrupp? Very possibly, for I have usually associated Alfred and his family with Brighton, where his brother Henry had 1,200 acres of land. Alfred and Sarah eventually moved into 'Kimberley', the lovely old stone cottage at the top of the hill at Pontville, where they both died; they are buried in the old cemetery on the other side of the Church.

As the sale to T.G. Gregson took place in 1829, it would be reasonable to assume the auction of Geils' belongings took place not long before. Not only was the building obviously well built, but it must also have been beautifully furnished, if neglected. Dr. Tipping, in her book 'Convicts Unbound', describes the auction as 'a great social event'. After twenty-five years in the colony, many of those who had come with Lt. Gov. Collins were well established and financially comfortable; their list of purchases is a good indication of their success. Thomas Peters spent almost fifty pounds at the auction, eleven guineas of it on seven cedar cane-bottom chairs. The 'guilded china teaset' cost him a further nine pounds. There were other items of interest to the community, such as saddles, bridles and farm equipment. Andrew Whitehead purchased drawer locks and cupboard locks. It would seem that Major Geils' records survived better than those of Governor Collins, even if some were burned by the young 'robbers'.

It would be interesting to know if Andrew Whitehead took his wife and daughter with him to the sale. Surely Martha would have been most interested in the house, part of which she once shared with John Bowen. Already the house in which Bowen and Martha Hayes lived had changed hands several times. Surely a 'government house' when first erected.
'Granted' to William l'Anson, 'sold' to Thomas Birch, 'purchased' by Colonel Geils, 'rented' by Thomas Kent; Thrupp was the 'live-in' agent in May 1820 when the group of young men attacked the property. The on-going story of the Potaskis is well recorded by the family. Joseph Potaski was executed on the 19th May, 1821; his father died on the 29th August, 1824, only one month after the marriage of his daughter. The family eventually moved to Geelong, the two ladies living well into 'old age'. There are numerous descendants today of Catherine, the 'baby born at Risdon Cove'.

Thomas William Birch married Sarah, daughter of George Guest, on the 12th September, 1808. George Guest had come from Norfolk Island, one of the earliest to accept the request to move to 'The Derwent'. He was granted 24 acres of land, book 1, number 18, in the locality of 'Derwent', frequently described as running from the site of the City Hall up Campbell Street. A further grant was made to him of 300 acres at Risdon Cove: again, book 1, number 32. From these numbers, there is little doubt the Risdon Cove grant was 'compensation' for the move from Norfolk Island, even though the 'grant papers' were tossed into the fire the night of the robbery. Here again the family will have the details, for there are many descendants.

Thomas Birch died in Hobart in 1821, aged only 47. Sarah remarried, and she, with her second husband, John Cox, turned the "largest and most imposing house in Hobart Town" into the well known Macquarie Hotel. The very distinctive house can be seen in many of the early paintings of Hobart, and is still there, though very different in appearance from the earliest days. It is on the corner of Macquarie Street and Victoria Street; the roof line of the original house can be seen above the additions, now used as office accommodation. Restdown continued to be a problem for Major Geils, till George Cartwright was given the task of selling the property.
With the arrival of such men as George Cartwright in Hobart Town, there seems to have been a change in the Military rule of the various communities to a Civil one, more familiar to the population today. The number of convicts transported to Van Diemen's Land continued to rise rapidly, while 'free settlers' were being encouraged to settle in the various areas, with 'land grants' the reward for their efforts and the investment of their money. One such man, Thomas George Gregson, a very interesting member of a changing community, bought the property in 1829, and no doubt continued to live there till his death in 1874. He was a farmer, from Lowlynn in Northumberland, England, married with three sons and four daughters. He was interested in horse racing and politics, while his property extended to Kangaroo Point.

The Rev. Robert Knopwood once more visited Risdon Cove, as he had done in the earliest weeks of settlement at 'The Derwent'. On Trinity Sunday, 1829, the now ageing Chaplain took Divine Service and administered baptism at the home of T.G. Gregson. His 'old pony, which is 26 years old and admired by all' he mentioned in his entry for the 17th August. He did not go out at night, he wrote, and if it was necessary, he made sure he was home by 8 o'clock. The original settlers were ageing. The Rev. Robert Knopwood died on the 18th September, 1838; with him were his three special friends, George Stokell, Thomas Gregson and Father Connolly.

Thomas Gregson was a 'farmer' and a 'politician', and not only did 'Restdown' prosper over the thirty-five years he was there, but the area owned increased greatly, while the make-up of the community was changing. Transportation had ceased twenty years before the death of Thomas Gregson in January, 1874, with few still 'serving time'. Catherine Potaski was enjoying life in the thriving city of Geelong, Victoria, while Martha Hayes Whitehead Williamson died at Brown's River in May, 1871, aged 85.
What changes these people must have seen, and with their going, the 'books' were closed for the greater part of a century. Let us hope that by 2003 and 2004, the bicentennial years of so much history, they will be well open. "Risdon Cove - From the Dreamtime to Now" is only one of many books telling of Hobart's hidden history and the other areas settled in those early days.

Perhaps it is appropriate to finish these comments with a little oral history. It was probably 1984 when I read my Grade 1/2 class at Flagstaff Gully a story of a space ship landing 'back in time'. One child wondered what was on the school site before the school was built. No one had any idea. The Principal suggested a small group go down to the Clarence Council Chambers and see if anyone could tell them. A group was chosen and the task was undertaken. We came back with pieces of an old map; there was not a photocopier large enough to copy the map in one piece, so we were given several pieces to tape together. I still have copies of the section showing the school site, but regret not having taken care of the other pieces and filed them safely away. We found that the property had belonged to T.G. Gregson, and extended from Risdon Cove to Kangaroo Creek, Warrane, and included Flagstaff Gully, with Geilston Bay, of course, named after the previous owner, Major Geils.

The site of the school had been an apricot and quince orchard. The children were quite familiar with apricots, but no one knew what a quince was like. The search began to find some fruit to show the children. Eventually a large old tree was found at no other place than Risdon Cove, in the garden of the old Inn, the 'Saracen Head'. The old tree is still there today, I am always thankful to note. An excursion was arranged for the children to go to Risdon Cove, where they would not only look at the old tree but go to the site of Mr. Gregson's house, at that time little more than a pile of rubble.
I would hope that at least some of the children remember that very enjoyable day, for I certainly do, and I have continued to have a particular interest in the man who owned so much land from Risdon to Bellerive.

All material on this site is copyright © 2000-2003 MANUTA TUNAPEE PUGGALUGGALIA publishing.
Conceived and designed the experiments: JLM JAE. Performed the experiments: JLM. Analyzed the data: JLM AED. Wrote the paper: JLM AED JAE.

Microbial life dominates the earth, but many species are difficult or even impossible to study under laboratory conditions. Sequencing DNA directly from the environment, a technique commonly referred to as metagenomics, is an important tool for cataloging microbial life. This culture-independent approach involves collecting samples that include microbes, extracting DNA from the samples, and sequencing the DNA. A sample may contain many different microorganisms, macroorganisms, and even free-floating environmental DNA. A fundamental challenge in metagenomics has been estimating the abundance of organisms in a sample based on the frequency with which the organism's DNA is observed in reads generated via DNA sequencing.

We created mixtures of ten microbial species for which genome sequences are known. Each mixture contained an equal number of cells of each species. We then extracted DNA from the mixtures, sequenced the DNA, and measured the frequency with which genomic regions from each organism were observed in the sequenced DNA. We found that the observed frequency of reads mapping to each organism did not reflect the equal numbers of cells that were known to be included in each mixture. The relative organism abundances varied significantly depending on the DNA extraction and sequencing protocol utilized. We describe a new data resource for measuring the accuracy of metagenomic binning methods, created by in vitro simulation of a metagenomic community. Our in vitro simulation can be used to complement previous in silico benchmark studies.
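The abundance comparison described above can be sketched in a few lines: convert per-organism mapped-read counts into fractions, then compare each fraction against the equal-abundance expectation of 1/n. This is a minimal illustrative sketch; the organism names and read counts below are invented placeholders, not data from the study.

```python
def relative_abundance(read_counts):
    """Convert per-organism mapped-read counts into fractions of the total."""
    total = sum(read_counts.values())
    return {org: n / total for org, n in read_counts.items()}

def fold_bias(observed_fractions, n_organisms):
    """Ratio of each observed fraction to the expected equal share (1/n).

    A value near 1.0 means an organism's read fraction matches its cell
    abundance; values above or below 1.0 indicate over- or
    under-representation introduced somewhere in the protocol.
    """
    expected = 1.0 / n_organisms
    return {org: f / expected for org, f in observed_fractions.items()}

# Illustrative counts only: equal cell input, unequal reads recovered.
counts = {"org_A": 1200, "org_B": 300, "org_C": 2500}
obs = relative_abundance(counts)
bias = fold_bias(obs, len(counts))
```

Under an unbiased protocol every fold-bias value would sit close to 1.0; the study's observation is that extraction and sequencing choices push these ratios well away from unity.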
In constructing a synthetic community and sequencing its metagenome, we encountered several sources of observation bias that likely affect most metagenomic experiments to date and present challenges for comparative metagenomic studies. DNA preparation methods have a particularly profound effect in our study, implying that samples prepared with different protocols are not suitable for comparative metagenomics.

The vast majority of life on earth is microbial, and efforts to study many of these organisms via laboratory culture have met with limited success, leading to use of the term "the uncultured majority" when describing microbial life on earth. Metagenomics holds promise as a means to access the uncultured majority, and can be broadly defined as the study of microbial communities using high-throughput DNA sequencing technology without requirement for laboratory culture. Metagenomics might also offer insights into the population dynamics of microbial communities, and the roles played by individual community members. Toward that end, a typical metagenomic sequencing experiment will identify a community of interest, isolate total genomic DNA from that community, and perform high-throughput sequencing of random DNA fragments in the isolated DNA. The procedure is commonly referred to as shotgun metagenomics or environmental shotgun sequencing. Sequence reads can then be assembled in the case of a low-complexity sample, or assigned to taxonomic groupings using various binning strategies without prior assembly. As binning is a difficult problem, many methods have been developed, each with their own strengths. Assuming the shotgun metagenomics protocol represents an unbiased sampling of the community, one could analyze such data to infer the abundance of individual species, or of functional units such as genes, across different communities and through time. However, many sources of bias may exist in a shotgun metagenomics protocol.
These biases are not unique to random sequencing of environmental DNA. They have also been addressed in studies of uncultured microbial communities using PCR-amplified 16S rRNA sequence data. For example, it has been shown that differences in cell wall and membrane structures may cause DNA extraction to be more or less effective for some organisms, and differences in DNA sequencing protocol might introduce biases in the resulting sequences. We also expect that methods to assign metagenomic reads to taxonomic groupings may introduce their own biases and performance limitations. In selecting a particular metagenomic protocol, an awareness of alternative approaches and their limitations is essential. Towards this end, others have endeavored to benchmark the various steps of a typical metagenomic analysis. A few studies have attempted to quantify the efficiency and organismal bias of various DNA extraction protocols using environmental samples, but these have included unknown, indigenous microbes. One other benchmark of metagenomic protocols focused mainly on the informatic challenge of assigning reads from a priori unknown organisms to taxonomic groups in a reference phylogeny. In that in silico simulation, the authors randomly sampled sequence reads from 113 isolate genomes, and mixed them to create three “communities” of varying complexity. While that type of informatic simulation of metagenomic reads is a useful approach for benchmarking different binning methods, the models used for such simulations simply cannot capture all factors affecting read sampling from a real metagenome sequencing experiment. Even if the model complexity were increased, appropriate values would need to be experimentally determined for the new simulation model parameters. In this work, we describe an in vitro metagenomic simulation intended to inform and complement the in silico simulations used by others for benchmarking.
Using organisms for which completed genome sequences were available, we created mixtures of cells with equal quantities of each organism. We then isolated DNA from the mixtures and used two approaches to obtain sequence data. For all simulated metagenomic samples, we created small-insert clone libraries that were end-sequenced using Sanger chain termination sequencing and capillary gel electrophoresis. For one of the samples, we generated additional sequence using the cloning-independent pyrosequencing method on the Roche GS20. The resulting sequence data were then analyzed for biases introduced during metagenome sequencing. For this study, organisms were chosen to represent a breadth of phylogenetic distance, cell morphology, and genome characteristics in order to provide useful test data for benchmarking binning methods. This experiment was not designed to test specific hypotheses about how those factors or others may influence the distribution of reads in a metagenomic survey. Nevertheless, these data can be used to determine appropriate parameter ranges for metagenomic simulation studies, or directly as a test dataset for binning. Organism selection was guided by the data available in the Genomes On Line Database as of November 2007. Pathogens, obligate symbionts, and obligate anaerobes were removed from consideration for the simulated metagenome because these organisms are difficult to culture in our laboratory setting. We selected ten organisms representing all three domains of life and several levels of phylogenetic divergence. Halobacterium sp. NRC-1 and Saccharomyces cerevisiae S288C were selected to represent the archaeal and eukaryotic domains, respectively. Because it has been shown that cell membrane structure can have a significant effect on DNA extraction efficiency, we included both Gram-positive and Gram-negative bacterial species.
Five relatively closely-related organisms were selected from among the lactic acid bacteria, a clade of low-GC, Gram-positive Firmicutes (Pediococcus pentosaceus, Lactobacillus brevis, Lactobacillus casei, Lactococcus lactis cremoris SK11, and Lactococcus lactis cremoris IL1403). To provide phylogenetic breadth within the Bacteria, we also included Myxococcus xanthus DK 1622 (a delta-proteobacterium), Shewanella amazonensis SB2B (GenBank Accession #CP000507, a gamma-proteobacterium), and Acidothermus cellulolyticus 11B (an Actinobacterium). Figure 1 gives the placement of the organisms on the tree of life and Table 1 lists some general features of each organism. These ten organisms were not selected to represent a real, functional community; rather, they were chosen to provide sequence data that would best allow the testing of the accuracy and specificity of various binning methods. To this end, we have chosen five phylogenetically diverse species with very different genome compositions and five species that are relatively closely related to each other, with very similar genome compositions. As described in Methods below, cultures for each organism were grown and cells from each culture were counted using flow cytometry. We then constructed two distinct simulated microbial communities that were made by mixing all organisms with different approaches (see Figure 2). The first approach involved mixing the cultures directly prior to extracting DNA from the collection of mixed cells. To this mixture, two DNA extraction techniques were applied in parallel: an enzymatic extraction with a bead beater (referred to throughout as “EnzBB”), and the Qiagen DNeasy kit (referred to throughout as “DNeasy”). Preliminary sequence data from this mixture included no reads from the halophilic archaeon, Halobacterium sp. NRC-1.
One possible explanation for this observation is that upon mixing, the high-salt culture medium in which the Halobacterium cells were growing was diluted, causing them to lyse. If cell lysis occurred rapidly, before recovery of the mixed cell pellet, no DNA would be recovered from the lysed cells. To address this possibility, we made a second mixture of cells using a different approach. The second approach involved pelleting a known number of cells from each individual culture, mixing cell pellets, then performing DNA extraction on the mixed pellets using an enzymatic DNA extraction (referred to throughout as “Enz”). Simulated metagenomic DNA samples were then subjected to high-throughput sequencing using Sanger sequencing and pyrosequencing technologies (see Methods for a description of the sequencing protocols). Finally, to assess DNA extraction efficiency for each organism in isolation, an enzymatic extraction with a bead beating step (EnzBB) was applied to each isolate culture separately. Table 1 documents the quantification of total DNA extracted from each organism individually. For each simulated metagenome, we used a BLAST search to map quality-controlled reads back to the set of reference genomes, yielding a count of reads assigned to each organism (see Methods for details). A complete set of read mappings and summaries of the numbers of reads assigned to each organism is given in Table 2. Many reads did not map back to reference genomes using our stringent criteria. Such reads may represent highly conserved sequences that hit multiple genomes, making unambiguous mapping impossible; may have had too few high-quality bases; or may represent an unknown source of sequence library contamination. To further investigate the origins of unmapped reads, we searched those reads using BLAST against the NCBI non-redundant nucleotide database (see Table 3). We find that many unmapped reads do hit organisms present in our sample, but do so with less than 95% sequence identity.
Sequencing errors, either in our data or in the published genome data, may contribute to this category of reads. In general, the lower identity reads follow the taxonomic abundance distribution of mapped high-identity reads. We also found a substantial number of hits to parts of a Lactococcus bacteriophage phismq8. This phage genome was not present (lysogenized) in either of the two reference Lactococcus genome sequences. All of the Lactococcus strains used for this study are the same strains, from the same lab, that were the source for the genome sequencing projects, suggesting that at least one of the Lactococcus cultures had been infected with a virus of external origin in the time since its genome was originally sequenced. The phage may have been actively affecting one of the Lactococcus cultures. Finally, several unmapped reads showed high identity to members of the genus Bacillus. Those reads suggest a low level of Bacillus contamination in one of the simulated metagenomes. By counting the number of reads mapped to each reference genome and normalizing by the total read count, it is possible to estimate the relative abundance of organisms in each simulated metagenomic sample. Figure 3 shows the frequency at which reads are observed for each organism in our samples. These observed read frequencies can be considered as possibly biased estimates of the organism relative abundance in our simulated environmental samples. Given that a known quantity of each organism was mixed in the metagenomic simulation, we next investigated whether estimates of organism relative abundance based on sequencing read counts would match the predicted abundance given the way in which our sample was created. To do so, we must first derive a predicted relative DNA abundance based on the known cell count relative abundances. 
Because we included an equal number of cells per organism in our mixtures, a simple prediction would be that the number of reads per organism in each sequencing library would be directly proportional to their genome sizes. The relative abundance predicted based on genome size and cell counts (cc*gs) is shown in Figure 3. Using the cc*gs predictor of relative organism abundance, we tested whether the observed abundances followed the expected distribution. We found that cc*gs is a poor predictor of organism abundance in our sequence libraries (χ2 test, all p-values <<0.001, Bonferroni multiple test correction). However, some organisms in our experiment such as Halobacterium may be polyploid, and for many microbes the copy number of the entire chromosome (or some segments of it) can vary depending on growth phase or other factors. Also, the amount of DNA from an organism that is available to become part of a sequencing library depends on the efficiency of the DNA extraction protocol. In a mixed sample, organisms with thick cell walls may yield relatively little DNA, leading to an under-representation of that organism in the final sequencing library. For these reasons, simply counting cells and accounting for genome size may not provide us with an accurate prediction of relative organism DNA abundance. We developed an alternative means to predict the relative DNA abundance of organisms by extracting DNA from a known number of cells of each organism in isolation and quantifying the amount of extracted DNA (see Table 1). We did so using the extraction method (EnzBB) that has been demonstrated in previous studies to achieve the maximum DNA yield from even the most recalcitrant cells. This DNA quantification provides another way to estimate the amount of DNA per cell that we should expect from the simulated metagenomic samples. We predict the reads per organism to be directly proportional to the amount of DNA that can be extracted from each cell.
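The cc*gs prediction and the goodness-of-fit test described above can be sketched in a few lines. This is an illustrative example only: the organism names, genome sizes, and read counts below are invented placeholders, not the values from Tables 1 and 2.

```python
# Hypothetical genome sizes (Mb); equal cell counts per organism, as in the mixtures
genome_size_mb = {"orgA": 2.0, "orgB": 4.0, "orgC": 6.0}
cells = {org: 1.0 for org in genome_size_mb}

# cc*gs predictor: expected read fraction proportional to cell count * genome size
total = sum(cells[o] * genome_size_mb[o] for o in genome_size_mb)
expected_frac = {o: cells[o] * genome_size_mb[o] / total for o in genome_size_mb}

# Hypothetical mapped-read counts observed in one library
observed = {"orgA": 900, "orgB": 2500, "orgC": 100}
n = sum(observed.values())

# Chi-square goodness-of-fit statistic against the cc*gs expectation;
# a large value indicates the library deviates from the prediction
stat = sum((observed[o] - expected_frac[o] * n) ** 2 / (expected_frac[o] * n)
           for o in genome_size_mb)
```

In the paper's analysis the same statistic (with a Bonferroni-corrected p-value) rejects cc*gs as a predictor for every library.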
Of course, this prediction based on isolate DNA extraction (DNA quantification) does not provide a perfect expectation of the relative organism abundance in extractions of mixed communities, but it does, at least theoretically, better account for the effects of DNA extraction efficiency and genome copy number per cell. Nevertheless, the observed organism abundance in our sequence libraries does not match the expectation based on DNA quantification (χ2 test, all p-values <<0.001). While this experiment was not designed to test specific hypotheses about how phylogeny, cell morphology, or genome characteristics may affect the outcome of a metagenomic survey, some interesting observations can be made. For example, because they have been shown to be more recalcitrant to lysis, one might expect that the organisms with the Gram-positive cell wall structure might consistently be under-represented in our libraries relative to the prediction based on isolate DNA extraction. This was not the case in our libraries, where in any given sample, some Gram-positive organisms were more abundant and others less abundant relative to our prediction (Figure 3). One also might expect that closely related organisms that share many genome characteristics would show the same distribution under a given preparation protocol. However, this is not the case with the five lactic acid bacteria, wherein even two strains of the same named species (Lactococcus lactis) differ in their read counts by more than an order of magnitude. In the EnzBB library, for example, of the 11552 mapped reads, 3389 reads mapped to the Lactococcus lactis IL1403 genome while only 86 mapped to the Lactococcus lactis SK11 genome (see Table 2 and Figure 3). The difference in read frequencies among members of the same named species cannot be ascribed to a lack of sequence differences among the two strains' genomes causing a failure in read assignment.
Whole-genome alignment using the Mauve genome alignment software reveals that the two Lactococcus isolates have approximately 87% average nucleotide identity throughout their genomes and fewer than 1% of subsequences of the length of our reads lack differences to guide taxonomic assignment. Of course, factors other than DNA extraction efficiency may contribute to differences between the predicted number of reads based on isolate DNA extraction and the observed number of reads. These include 1) cloning bias, which refers to the phenomenon whereby some DNA sequences are more readily propagated in E. coli; 2) sequencing bias, which can refer to the propensity of the polymerase enzyme used for Sanger sequencing to stall and fall off when regions of the molecule with secondary structure are encountered, or to errors introduced into pyrosequencing reads where there are homopolymeric runs; and 3) computational difficulties with accurately and specifically binning reads. Future studies might attempt to disentangle the contribution of each of these factors to overall bias. In terms of the relative abundance of organisms based on sequence reads, all metagenomic samples were significantly different from each other and significantly different from the estimated expected distribution (χ2 test, p-value <<0.001 for all pairwise comparisons; see Table 2 for data). Halobacterium sp. NRC-1, Saccharomyces cerevisiae S288C, and Lactococcus lactis cremoris SK11 were under-represented in all libraries relative to the prediction based on isolate DNA extraction, whereas Acidothermus cellulolyticus and Shewanella amazonensis SB2B were over-represented in every library. Some organisms, e.g., Pediococcus pentosaceus, Lactococcus lactis cremoris IL1403, and Myxococcus xanthus DK 1622 were much more abundant in one library than in others (Figure 3).
The results demonstrate that two libraries created from a single mixture of organisms, but prepared using DNA that has been extracted by different protocols (i.e., Enz, EnzBB, or DNeasy), can produce reads that seem to represent two very different underlying communities. Therefore, the purpose of a metagenomic survey must be taken into consideration when choosing a DNA extraction protocol. While using multiple DNA extraction procedures on a single environmental sample can increase the likelihood that every organism in an environment will be sampled, doing so can also complicate quantitative comparisons of multiple samples. One advantage of sequencing with the pyrosequencing technology over that of clone library-based (Sanger) methods is the elimination of cloning bias. The Enz DNA extraction was split into two samples (Figure 2), one of which was cloned and sequenced using Sanger sequencing while the other was used to construct a library for pyrosequencing. These two libraries, like all others, yielded significantly different taxonomic distributions of reads (all χ2 tests have p-value <<0.001). However, the χ2 statistic was lower (χ2=381.69) than any of the Sanger library pairwise comparisons, all of which had χ2>10397. This suggests that the effect of DNA extraction is more pronounced than the bias introduced by clone-based sequencing. Additionally, cloning bias has been shown to be influenced by GC content, and in this experiment, the GC content of the Sanger-sequenced sample (56.0% GC) and the pyrosequenced sample (56.7% GC) using the same DNA extraction protocol were very similar. On the other hand, the GC content of the Sanger-sequenced libraries, using different DNA-extraction methods, ranged from 48% to 61% (Table 3). The Enz+pyrosequenced metagenome differs from the Enz+Sanger metagenome in the types of reads that failed taxonomic assignment.
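The per-library GC content compared above is a simple summary statistic computable directly from the reads. A minimal sketch, using invented placeholder reads rather than actual library data, with Ns from quality masking excluded from the denominator:

```python
def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases among unambiguous bases (Ns ignored)."""
    counted = [b for b in seq.upper() if b in "ACGT"]
    return sum(b in "GC" for b in counted) / len(counted)

# Hypothetical reads from one library; a real library would average
# over tens of thousands of reads
reads = ["ACGTGGCCN", "GGGCCCATN"]
library_gc = sum(gc_fraction(r) for r in reads) / len(reads)
```

Comparing this statistic across libraries prepared from the same mixture gives a quick, binning-free indicator of extraction or cloning bias.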
Whereas very few Enz+Sanger reads failing taxonomic assignment had recognizable sequence identity to organisms in the NCBI non-redundant nucleotide database (547/2638 or 21% of unmapped reads), the majority of the unmapped pyrosequencing reads did have recognizable identity to NCBI database sequences (10171/10347, 98%). Both methods had a modest number of reads that failed taxonomic assignment because the read's sequence identity to the reference organism was below the stringent identity threshold (316 Enz+Sanger reads, between 791 and 2932 Enz+pyrosequencing reads). Additionally, about 0.3% of the Enz+pyrosequencing unmapped reads exhibited sequence identity to an unknown member of the Bacillus genus. We speculate that a small amount of Bacillus DNA may have entered the Enz+pyrosequencing sample prior to emulsion PCR (see Methods), which may have amplified the contaminant. As mentioned before, the primary purpose of this experiment was to generate sequence data that could be used to test the computational tools that are used to analyze metagenomic sequence data. With this in mind, we opted to use several DNA extraction methods in order to maximize the likelihood of recovering sequence data for every organism in our sample. We did not perform technical replicates for each DNA extraction method. However, post hoc comparisons of the different DNA extraction protocols did produce interesting results, prompting us to repeat the same experiments on a smaller scale. While these are not perfect technical replicates, they were performed using exactly the same starting material. These additional simulated metagenomes were created by thawing additional aliquots of the primary frozen culture stocks and mixing them as described below. We did two additional simulations for each of the Enz, EnzBB, and DNeasy protocols and performed Sanger sequencing on the extracted DNA (Figure 4).
One of the additional simulations used frozen stock of isolate cultures, the other used frozen stock of isolate cultures with glycerol added to a final concentration of 10%. The so-constructed sequence libraries are not technical replicates of the simulation because they include effects introduced by long-term frozen storage of isolate cultures at −80°C with and without glycerol. Use of glycerol should help prevent cells from lysing, so if large differences were observed between the repeated samples with and without glycerol, it would be reasonable to suspect that cell lysis is an important factor to consider when doing metagenomics with frozen samples. For each additional simulation, we began by retrieving aliquots of Mix #1 (for the additional EnzBB and DNeasy libraries) or by re-creating Mix #2 (for the Enz library). For the additional libraries using glycerol stocks, both Mix #1 and Mix #2 were re-created from the individual stock cultures. As before, the taxon relative abundance distribution for each library is significantly different from every other library (χ2 test, all p-values <0.001). However, if we consider the original libraries to represent an expected organism relative abundance for each DNA extraction protocol, then we can compare the average Chi-square statistic within each DNA extraction protocol to determine which protocol yields the most consistent results. The average Chi-square statistic for the additional libraries is much lower for the DNeasy extraction (average χ2=377.26) than for either the Enz extraction (average χ2=5013.12) or the EnzBB extraction (average χ2=774.96) protocols. This result indicates that the repeatability of the kit extraction method is better than the two other extraction methods (Figure 4). This is in line with expectation, since a possible advantage of kit-based DNA extraction protocols is that variation due to stochastic error should be minimized. 
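The repeatability comparison above treats the original library for each protocol as the expected distribution and averages the χ2 statistic of the additional libraries against it. A sketch of that computation, with invented placeholder counts standing in for the real library data:

```python
def chisq_vs_expected(observed: dict, expected_frac: dict) -> float:
    """Chi-square statistic of observed counts against an expected
    frequency distribution (here, the original library's distribution)."""
    n = sum(observed.values())
    return sum((obs - expected_frac[org] * n) ** 2 / (expected_frac[org] * n)
               for org, obs in observed.items())

# Original library for one protocol defines the expected distribution
original = {"orgA": 500, "orgB": 300, "orgC": 200}
n0 = sum(original.values())
expected = {org: c / n0 for org, c in original.items()}

# Two additional libraries for the same protocol (hypothetical counts)
replicates = [{"orgA": 480, "orgB": 310, "orgC": 210},
              {"orgA": 550, "orgB": 280, "orgC": 170}]

# Lower average statistic -> more repeatable protocol
avg_stat = sum(chisq_vs_expected(r, expected) for r in replicates) / len(replicates)
```

In the study, this average was lowest for the DNeasy kit, supporting the conclusion that the kit extraction is the most repeatable of the three protocols.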
In silico simulations of metagenome sequencing are cheap, quick, and easy. The type of in vitro simulation presented here is comparatively expensive, difficult, and time-consuming, but it captures bias in the metagenomic sampling procedure more faithfully than in silico simulations. Studies such as ours add a layer of complexity and biological realism beyond that attainable with computational simulations alone. With in silico simulations, one can model complex and highly diverse communities, but the models used to sample reads from isolate genomic data are limited in their ability to capture biases introduced by experimental protocol. In particular, biases in sequence coverage (per genome) can be due to growth conditions, organismal growth phase, DNA extraction efficiency, cloning bias, sequencing efficiency, or relative genome copy number. In no case did the relative organism abundance in our sequence libraries reflect the known composition of our simulated community. This suggests that sequencing-based methods alone are insufficient to assess the relative abundance of organisms in an environmental sample. If calibrated by another method, such as fluorescent microscopy, sequencing might be more useful in this regard. The results also highlight the need to standardize as many laboratory techniques as possible when comparing metagenomic samples across environments, timescales, or environmental conditions. Currently, there is no standard approach for metagenomic surveys, making it difficult to make useful inferences when comparing data among different studies. It is important to note that the purpose of a given metagenomic sampling effort will vary, and the methods used should be chosen to best suit that purpose. For example, here we found that using a kit-based DNA extraction protocol produced the most consistent results with repeated sampling. This is important if the goal of a study is to track differences across environments, treatments, or timescales.
However, if the goal is to fully catalog all organisms or to know with certainty the relative abundance of organisms in a sample, our results suggest that the kit-based DNA extraction could offer the worst performance of the methods tested here. Of course, there are other factors to consider: the DNA yield from kit-based DNA extractions is considerably lower than that from alternative methods, it is typically of a lower molecular weight, and it is more costly to acquire. Our ability to make strong conclusions about the source of variation across samples is unfortunately limited by our lack of technical replicates. However, we find the magnitude of this variation striking, even in this simple, well-understood, artificially constructed microbial “community.” Future experiments to tease apart the sources of bias, especially those designed with specific natural communities in mind, will be valuable. In addition to providing sequence data that can be used for benchmarking analytical techniques for metagenomics, it is our hope that this type of simulation can help aid model development for future in silico simulations. For this purpose, sequence data generated in our study is available via IMG/M, on the BioTorrents file sharing site (http://www.biotorrents.net/details.php?id=47), and via the NCBI's Trace and Short Read Archives. Myxococcus xanthus DK1622 cells were grown in CTTYE (1% Casitone [Difco], 10 mM Tris-HCl (pH 7.6), 1 mM KH2PO4, 8 mM MgSO4) broth at 33°C with vigorous aeration. Cells were harvested when a Klett-Summerson colorimeter read 100 Klett units, or approximately 2×10^8 cells/ml. Acidothermus cellulolyticus 11B was grown in liquid culture at 55°C on a shaker at 150 rpm. The growth medium consisted of American Type Culture Collection medium 1473, modified by use of glucose (5 g/l) in place of cellulose, pH 5.2–5.5.
The five lactic acid bacteria were provided as streaked MRS agar plates, from which single colonies were used to start pure cultures in liquid MRS broth. Halobacterium sp. NRC-1 (ATCC#700922), Saccharomyces cerevisiae S288C (ATCC#204508), and Shewanella amazonensis SB2B (ATCC# BAA-1098) were obtained as freeze-dried stocks and used per recommended protocol to start cultures in the prescribed media. Cultures were grown 12–48 hours until turbid. The cell density of each culture was determined by counting DAPI-stained cells using a Cytopeia Influx flow cytometer. Immediately after counting, the cultures were aliquoted into ten 2 mL cryotubes, flash-frozen in liquid nitrogen and stored at −80°C. Glycerol was added to one of the tubes before freezing to make a 10% glycerol stock solution (except for the Myxococcus xanthus, which was provided as flash-frozen liquid culture.) Two techniques were employed for mixing. Mix #1: One tube of each of the ten cultures was thawed on ice. An aliquot from every tube was added to a single new tube such that each organism contributed an equal number of cells to the final mixture. This final mixture was aliquoted into four 2 mL cryotubes which were flash-frozen and returned to −80°C. Immediately prior to DNA extraction, one of the 2 mL cryotubes of the final mixture was centrifuged for 10 minutes at 10,000 rpm to pellet cells. The supernatant was removed, and the cell pellet was resuspended in TES buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 100 mM NaCl). Mix #2: One tube of each of the ten cultures was thawed on ice. An aliquot from every tube was transferred to a new tube so that the new set of tubes contained an equal number of cells per tube. Immediately prior to DNA extraction, each tube was centrifuged for 10 minutes at 10,000 rpm to pellet the cells. Each cell pellet was resuspended in the lysis buffer that is provided with the DNeasy kit (Qiagen, Valencia, CA), and the contents of all ten tubes were pooled into a single tube. 
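Mixing the cultures so that each organism contributes an equal number of cells reduces to a simple volume calculation from the flow-cytometry densities. A sketch under assumed values: the M. xanthus density echoes the ~2×10^8 cells/ml harvest density stated above, while the other densities and the target cell count are hypothetical.

```python
# Cell densities (cells/mL) from flow cytometry counting
densities = {
    "Myxococcus xanthus": 2e8,        # from the ~2e8 cells/ml harvest figure above
    "Halobacterium sp. NRC-1": 5e8,   # hypothetical
    "Saccharomyces cerevisiae": 1e7,  # hypothetical
}
target_cells = 1e8  # equal number of cells per organism in the final mixture

# Volume of each culture to aliquot into the mixture tube
volumes_ml = {org: target_cells / dens for org, dens in densities.items()}
```

Note how a dilute culture (here the yeast) dominates the mixture by volume even though every organism contributes the same number of cells.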
DNA Prep #1 (EnzBB): The resuspended cells were incubated with a final concentration of 50 U/µL lysozyme (Ready-Lyse, Epicentre Technologies) at room temperature for 30 minutes. Further lysis was accomplished by the addition of proteinase-k and SDS to a final concentration of 0.5 mg/mL and 1%, respectively, and incubation at 55°C for 4 hours. Finally, the lysate was subjected to mechanical disruption with a bead beater (BioSpec Products, Inc., Bartlesville, OK) on the Homogenize setting for 3 minutes. Protein removal was accomplished by extracting twice with an equal volume of 25:24:1 phenol:chloroform:isoamyl alcohol. The aqueous phase was incubated at −20°C for 30 minutes with 2.5 volumes of 100% ethanol and 0.1 volumes of 3 M sodium acetate before centrifugation at 16,000 g for 30 minutes at 4°C. The DNA pellet was washed with cold 70% ethanol and allowed to air dry before resuspension in TE (10 mM Tris-HCl pH 7.5, 1 mM EDTA). DNA quantitation was performed using the Qubit fluorometer (Invitrogen). DNA Prep #2 (DNeasy): Qiagen's DNeasy kit (Qiagen, Valencia, CA) per manufacturer's protocol for bacterial cultures. DNA Prep #3 (Enz): Identical protocol to DNA Prep #1 but without the bead beating step. Three small-insert (~2 kb) libraries were constructed by randomly shearing 10 µg of metagenomic DNA using a HydroShear (GeneMachines, San Carlos, CA). The sheared DNA was electrophoresed on an agarose gel, and fragments in the 2–3 kb range were excised and purified using the QIAquick Gel Extraction Kit (Qiagen, Valencia, CA). The ends of the DNA fragments were made blunt by incubation, in the presence of dNTPs, with T4 DNA Polymerase and Klenow fragment. Fragments were ligated into the pUC18 vector using the Fast-Link(TM) Ligation Kit (Epicentre, Madison, WI) and transformed via electroporation into ElectroMAX DH10B(TM) Cells (Invitrogen, Carlsbad, CA) and plated onto agar plates with X-gal and 150 µg/mL carbenicillin.
Colony PCR (20 colonies) was used to verify a <10% insertless rate and ~1.5 kb insert size. White colonies were arrayed into 384-well plates for sequencing. For Sanger sequencing, plasmids were amplified by rolling circle amplification using the TempliPhi(TM) DNA Sequencing Amplification Kit (Amersham Biosciences, Piscataway, NJ) and sequenced using the M13 (−28 or −40) primers with the BigDye kit (Applied Biosystems, Foster City, CA). Sequencing reactions were purified using magnetic beads and run on an ABI PRISM 3730 (Applied Biosystems) sequencing machine. The library for pyrosequencing was constructed using ~5 µg of metagenomic DNA, which was nebulized (sheared into small fragments) with nitrogen and purified with the MinElute PCR Purification Kit (Qiagen, Valencia, CA). The GS20 Library Prep Kit was used per manufacturer's protocol to make a ssDNA library suitable for amplification using the GS20 emPCR Kit and then prepared for sequencing on the Genome Sequencer 20 Instrument using the GS 20 Sequencing Kit. All Sanger-generated sequence data have been submitted to the NCBI Trace Archives, with Trace Archive ID numbers 2261924487 through 2262015859. The pyrosequencing-generated sequence data have been submitted to the NCBI Short Read Archives with Accession number SRA010765.1. Vector sequences were removed with cross_match, a component of the Phrap software package, and low-quality bases, i.e. those with a PHRED quality score of Q<15, were converted to “N”s using JAZZ, the JGI's in-house genome sequence assembly algorithm. We mapped reads back to reference genomes by means of BLAST search. A BLAST database containing the nucleotide sequence of each of the ten genomes (chromosomes and plasmids) was constructed. Reads were searched against that BLAST database, and low-scoring hits (e-value>0.0001) were discarded except for the pyrosequencing-generated reads, for which a threshold of 0.01 was used.
Reads not passing BLAST's low complexity filter were considered to have failed QC; this happened frequently for reads containing a large number of <Q15 bases replaced with N. Some reads contained a high fraction of N bases but still passed the low complexity filter; such reads frequently had no significant hit to the 10 reference organisms. Reads with hits were assigned to the genome corresponding to their top BLAST hit only if the top hit had sequence identity >95% and the next highest hit to a different organism had a bit score at least 20 points lower. Such reads are considered “mapped.” In order to investigate possible contamination in sequence libraries, reads without hits were searched against the NCBI non-redundant amino acid database in parallel using mpiBLAST. We thank David Mills, Mitchell Singer, and Alison Berry for supplying cultures for the lactic acid bacteria, Myxococcus xanthus, and Acidothermus cellulolyticus, respectively. We thank Morgan G. I. Langille for comments on a draft of this manuscript. Sequencing was performed at the DOE Joint Genome Institute in Walnut Creek, CA. Competing Interests: Jonathan Eisen is an associate with PLoS as Editor-in-Chief of PLoS Biology. Funding: This project was funded primarily by Laboratory Directed Research and Development Program funds from the Lawrence Berkeley National Laboratory. The work was conducted in part at the U.S. Department of Energy Joint Genome Institute which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. A. Darling was supported by NSF fellowship DBI-0630765. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
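The read-assignment rule stated above (top BLAST hit with >95% identity, and the best hit to a different organism at least 20 bit-score points lower) is easy to express as a small function. The function below is an illustrative reimplementation of that rule, not the authors' actual pipeline code; hit tuples are (organism, percent identity, bit score).

```python
def assign_read(hits, min_identity=95.0, min_bitscore_gap=20.0):
    """Assign a read to an organism from its BLAST hits, or return None
    if the read cannot be unambiguously mapped under the paper's criteria."""
    if not hits:
        return None
    # Rank hits by bit score, best first
    ranked = sorted(hits, key=lambda h: h[2], reverse=True)
    top_org, top_ident, top_score = ranked[0]
    if top_ident < min_identity:
        return None
    # Require the best hit to any *different* organism to score
    # at least `min_bitscore_gap` points lower than the top hit
    for org, _ident, score in ranked[1:]:
        if org != top_org:
            if top_score - score < min_bitscore_gap:
                return None
            break
    return top_org
```

For example, a read whose second-best hit lands on a different organism only 10 bit-score points below the top hit is left unmapped, matching the conservative treatment of conserved sequence.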
Management of Venous Thromboembolism

*Assistant Professor in Hematology; **Professor of Medicine, Medical College Hospital. Address for correspondence: Dr. K. Pavithran, Department of Hematology, Medical College Hospital.

Venous thrombosis is the third most common cardiovascular disease after ischemic heart disease and stroke.1 It is common in whites, affecting 1 in 1,000 individuals every year, and is strongly associated with life-threatening pulmonary embolism (PE). Though the exact incidence is not known in India, it is becoming a common clinical problem because physicians and surgeons are now more aware of venous thrombosis. In addition to circumstantial predisposing factors (e.g. surgery, pregnancy, or immobilization), genetic (molecular) abnormalities of components of the coagulation pathway, leading to hypercoagulability and, in turn, to thrombophilia, have been found in subjects who have had thromboembolic disease (Table 1).2

Venous thrombosis occurs most commonly in the lower limbs, though it can occur in any vein in the body. Thrombosis of superficial veins (as occurring in varicosities) is benign and self-limiting. Thrombosis of deep veins is a serious condition. Thrombi localized to the deep veins of the calf are less serious than those involving the proximal veins (popliteal, femoral or iliac veins) because they are often smaller and therefore less commonly associated with long-term disability or clinically important pulmonary embolism (PE). On the other hand, PE frequently complicates proximal vein thrombosis. Asymptomatic PE is detected by perfusion lung scanning in about 50% of patients with documented proximal vein thrombosis. Asymptomatic venous thrombosis is found in 70% of patients who present with confirmed PE.

Clinical approach

A. Diagnosis of venous thrombosis

1.
Clinical Diagnosis of venous thrombosis

Clinical diagnosis of venous thrombosis in symptomatic patients lacks both sensitivity and specificity. It is insensitive because many potentially dangerous thrombi neither totally obstruct the veins nor produce inflammation of the vessel wall, and therefore produce minimal clinical manifestations. It is non-specific because none of the signs and symptoms is unique to this condition. Other conditions frequently confused with venous thrombosis include muscle strain or tear, superficial thrombophlebitis, lymphangitis, lymphedema, cellulitis and post-phlebitic syndrome. Maneuvers formerly recommended for the diagnosis of DVT, such as Homans' sign (pain provoked by forced dorsiflexion of the foot) or the Lowenberg sign (pain provoked by application of a sphygmomanometer cuff on the calf at relatively low pressure), are totally useless and even dangerous, as is any other maneuver exerted on the calf at the time of a recent DVT.

The standard diagnostic test for deep-vein thrombosis of the lower extremities is ascending phlebography. Phlebography can detect both distal thrombi (in the calf veins, a common site of inception of deep-vein thrombosis) and proximal thrombi (in the popliteal, femoral, and iliac veins), which are the source of most large pulmonary emboli. Other objective diagnostic methods include impedance plethysmography3 and various forms of real-time B-mode ultrasonography,4 most of which are more sensitive for the detection of proximal than distal thrombosis. With impedance plethysmography, one measures the electrical impedance between two electrodes wrapped around the calf. Venous obstruction proximal to the electrodes decreases the impedance as the leg becomes engorged with blood, an electrical conductor, and delays the characteristic increase in calf impedance when a thigh tourniquet is deflated.
The introduction of real-time B-mode ultrasonography has provided a promising alternative to impedance plethysmography, with a sensitivity for proximal thrombi that approaches 100 percent in patients with symptomatic deep-vein thrombosis.4,5 In symptomatic outpatients with suspected deep-vein thrombosis, serial compression ultrasonography had a positive predictive value of 94 percent, superior to the positive predictive value of 83 percent for serial impedance plethysmography.6 In duplex scanning, real-time B-mode ultrasonography is supplemented by Doppler flow-detection ultrasonic imaging, which allows detection of blood flow in any vessel seen. In symptomatic patients with proximal deep-vein thrombosis, its overall sensitivity in a meta-analysis of four well-designed studies was 93 percent, with a specificity of 98 percent.7 The sensitivity of D-dimer measured by enzyme-linked immunosorbent assay is 97 percent. Venous thrombosis is unlikely to be present if D-dimer concentrations are not elevated, but a positive result requires confirmation by more specific tests based on imaging. Magnetic resonance venography has 100% sensitivity and over 96% specificity for the diagnosis of deep vein thrombosis, but this is done only in exceptional cases. Radionuclide venography is under development.

B. Etiology

1. Detailed clinical history: One should suspect thrombophilia in the following situations8 (Table 2). Age - young vs old. In most patients with inherited thrombophilia, the thrombotic event occurs before the age of 45 years. In the elderly, except for Factor V Leiden, all other defects are rare. Race - Factor V Leiden and the prothrombin gene mutation (G20210A) are common among healthy whites but are extremely rare among Asians and Africans. Arterial vs venous - in addition to venous thrombosis, arterial thrombosis is also seen with protein S deficiency, hyperhomocysteinemia and the antiphospholipid antibody syndrome.
Site - thrombosis at unusual sites, or at multiple sites, occurs more often with inherited thrombophilia. Drugs - history of oral contraceptive and tamoxifen use.

The term "thrombophilia" refers to a tendency to have recurrent venous thromboembolism. Hereditary abnormalities that are associated with initial and recurrent venous thromboembolism include congenital deficiencies of antithrombin III, protein C, protein S, or plasminogen; congenital resistance to activated protein C (APC resistance, or the Factor V Leiden defect); mutation of prothrombin; and hyperhomocysteinemia. The most common hereditary condition is APC resistance. The lifetime prevalence of venous thromboembolism in patients with antithrombin III, protein C, or protein S deficiency is over 50 percent. Initial episodes of venous thromboembolism are rare before the age of 18 years and uncommon after the age of 50. As with the hereditary thrombophilias, acquired disorders vary widely in their propensity to cause venous and arterial thrombotic disease. Mucin-secreting carcinomas have a very high potential for thrombosis. Antiphospholipid antibody syndrome may manifest solely as a laboratory abnormality or present with venous and arterial thrombosis, stroke or recurrent abortion. Patients with myeloproliferative disorders and paroxysmal nocturnal hemoglobinuria may present with thrombosis at unusual sites.9

The main objectives of treatment of DVT are 1) to prevent (both fatal and non-fatal) PE and thrombus extension in the acute phase of the disease, 2) to prevent recurrences of venous thromboembolism (VTE), and 3) to prevent late sequelae (post-phlebitic syndrome). In highly selected cases of deep vein thrombosis it may be possible to restore venous patency by surgical thrombectomy, with dramatic early results. Antithrombotic regimens modify one or more of these abnormalities.
These regimens include drugs that inhibit blood coagulation, such as the various heparins and heparinoids, warfarin, and direct thrombin inhibitors; drugs that inhibit platelet function, such as aspirin and dextran; and techniques that counteract venous stasis, such as compression stockings and pneumatic compression devices. Once venous thromboembolism is diagnosed, heparin should be given for four to seven days and warfarin therapy should be initiated. Thrombolytic therapy is reasonable in selected patients with extensive proximal-vein thrombosis or pulmonary embolism. In patients with acute venous thromboembolism and active bleeding or a high potential for bleeding, those who are noncompliant, and those with a history of heparin-induced thrombocytopenia, a filter should be inserted in the inferior vena cava.

After an initial bolus of 5000 U, at least 30,000 U per day of UFH should be infused; lower doses result in higher rates of recurrence.10,11 An APTT value 1.5 to 2.5 times the control value is commonly recommended as a target therapeutic range for heparin. To ensure a continuous antithrombotic effect in patients with venous thromboembolism, heparin should be given for at least four days and not discontinued until the INR has been in the therapeutic range for two consecutive days. In patients with extensive proximal-vein thrombosis or pulmonary embolism, a longer course of heparin should be considered.12,13

LMW Heparin or Unfractionated Heparin?

Both unfractionated heparin (UFH) and low molecular weight heparins have been found to be equally effective for the management of VTE. Data from four meta-analyses suggest that low-molecular-weight heparins are more effective than unfractionated heparin in preventing recurrence (risk reduction, 34 to 61 percent) and cause less major bleeding (risk reduction, 35 to 68 percent).14-17 LMWH have also been found to provide a survival advantage in those with malignancy.
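The heparin-discontinuation rule above (a minimum duration of heparin, plus an INR in the therapeutic range on two consecutive days) reduces to a simple check. The sketch below is an illustration only, not clinical software: `can_stop_heparin` is a hypothetical helper, the default 2.0 to 3.0 INR range is the warfarin target given elsewhere in this article, and the minimum duration is stated as four days here and five days in the ACCP recommendations quoted later.

```python
# Sketch of the heparin-to-warfarin transition rule: continue heparin for a
# minimum number of days, and stop only once the INR has been therapeutic
# on two consecutive days. (Hypothetical helper for illustration.)

def can_stop_heparin(daily_inr, min_days=5, inr_range=(2.0, 3.0)):
    """daily_inr: one INR measurement per treatment day, oldest first."""
    if len(daily_inr) < max(min_days, 2):
        return False  # minimum heparin duration not yet reached
    lo, hi = inr_range
    # Both of the two most recent INR values must be in the target range.
    return all(lo <= v <= hi for v in daily_inr[-2:])
```

The same check underlies the later ACCP wording that the heparin product "can be discontinued on day 5 or day 6 if the INR has been therapeutic for 2 consecutive days."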
LMW heparin is effective in most patients when given in weight-based doses (anti-Xa U/kg body weight) without subsequent laboratory monitoring or dose adjustment. All LMW heparins are cleared by the kidneys, and caution should be exercised when the creatinine clearance is < 30 mL/min. The correct dose for massively obese persons has not been established, and laboratory monitoring (plasma anti-Xa activity) may be useful in such patients. Doses of the various LMWH preparations are: dalteparin sodium 200 anti-Xa IU/kg/d subcutaneously; enoxaparin sodium 1 mg/kg q12h subcutaneously, or enoxaparin sodium 1.5 mg/kg/d subcutaneously; nadroparin calcium 86 anti-Xa IU/kg bid subcutaneously, or nadroparin calcium 171 anti-Xa IU/kg subcutaneously daily; tinzaparin sodium 175 anti-Xa IU/kg/d subcutaneously.

Warfarin or other coumarins should be started within 24 hours after the initiation of heparin therapy, with a target INR of 2.0 to 3.0; higher values are associated with more bleeding but no greater efficacy.18 Therapy should be continued for at least 3 to 6 months. Patients with metastatic cancer are candidates for long-term therapy because they probably have high rates of recurrence. Patients with recurrent disease should receive long-term therapy: one year for those who have had two episodes, and lifelong for those who have had three episodes or who have multiple risk factors.

Other newer antithrombotic drugs

Hirudin, a new molecule, is a potent inhibitor of thrombin, but unlike heparin its action is independent of AT III and it has very little effect on platelets. Other newer molecules are 7E3, a murine monoclonal antibody fragment that competes with fibrinogen for its platelet receptor, and recombinant human factor Xa. Drugs that target factor VIIa/tissue factor (tissue factor pathway inhibitor and NAPc2), drugs that block factor Xa (a synthetic pentasaccharide and DX9065a), inhibitors of factors Va and VIIIa, and oral formulations of heparin are also being developed.
Anisoylated plasminogen-streptokinase activator complex and single-chain urokinase-like plasminogen activator are under investigation.

Management of complications of anticoagulant therapy

The management of bleeding associated with anticoagulant treatment should be individualized, with therapy depending on the location and severity of bleeding, laboratory-test results, and the risk of recurrent venous thromboembolism. If urgent treatment is needed, vitamin K and plasma or factor IX concentrates should be administered to warfarin-treated patients; protamine should be administered to heparin-treated patients who have a prolonged APTT. In patients who have bleeding while receiving a low-molecular-weight heparin, protamine should be given because it neutralizes the heparin molecules thought to be most responsible for bleeding.

Heparin-induced thrombocytopenia

The frequency of heparin-induced thrombocytopenia (HIT) is < 1% when either unfractionated heparin or LMW heparin is given for no more than 5 to 7 days. Because of this finding, a platelet count should be checked between day 3 and day 5 of therapy. When the platelet count falls precipitously or in a sustained fashion, heparin therapy should be stopped. The syndrome of HIT is unusual after 14 days of heparin therapy. Recombinant hirudin (lepirudin) has been specifically approved for HIT accompanied by thrombosis. In this setting, lepirudin should be used for temporary anticoagulation, and warfarin therapy delayed until the platelet count has risen to > 100,000/µL.20

Failure of anticoagulation

The failure of anticoagulant therapy results in symptomatic recurrent venous thromboembolism. Treatment failure despite adequate anticoagulation occurs in patients with overt or occult cancer and possibly in patients with antiphospholipid antibodies.

Post-phlebitic syndrome

This is a syndrome occurring in patients after acute deep vein thrombosis. It occurs in about one third of patients during long-term follow-up.
This is usually due to valve destruction, but it can also be due to large proximal vein thrombi that block the outflow. As a result, the venous pressure increases during exercise and blood flow is directed from the deep to the superficial veins, resulting in edema, impaired viability of subcutaneous tissue and venous ulcerations. Use of graduated compression stockings may reduce the risk of the syndrome.

Venous thromboembolism during pregnancy

The risk of venous thromboembolism is five times higher in a pregnant woman than in a nonpregnant woman of similar age. Unfractionated and low-molecular-weight heparins do not cross the placenta and are very safe for the fetus,21,22 whereas coumarin derivatives can cause fetal bleeding and are teratogenic. Low-molecular-weight heparins are suitable substitutes for unfractionated heparin and probably cause less bleeding and osteoporosis, but they are more expensive and the clinical experience with them is still limited. Some authorities recommend the use of warfarin during pregnancy for specific patients, such as women with mechanical heart valves, those who have a recurrence while receiving heparin, and those with contraindications to heparin therapy. If venous thrombosis is diagnosed during pregnancy, intravenous heparin is given for 5-10 days, followed by subcutaneous heparin for the rest of the pregnancy. After delivery, warfarin, which is safe for infants and nursing mothers, should be given (with initial heparin overlap) for four to six weeks.23,24

Prognosis of venous thrombosis

Untreated or inadequately treated venous thrombosis is associated with a high complication rate, which can be reduced by adequate anticoagulant therapy. About 30% of untreated silent or symptomatic calf thrombi extend into the popliteal vein, and this is associated with a 40-50% risk of clinically detectable PE. Patients with proximal vein thrombosis who are inadequately treated have a 47% frequency of recurrent thromboembolism over three months.
If properly treated, the recurrence rate is only 2% in the first 3 months and 5-10% in the subsequent year.

Prevention of Venous Thromboembolism

The goal of prophylactic therapy in patients with risk factors for deep-vein thrombosis is to prevent both its occurrence and its consequences. Detection of deep-vein thrombosis is likely to be delayed, as many of the affected patients are asymptomatic. Preventing deep-vein thrombosis in patients at risk is clearly preferable to treating the condition after it has appeared, a view that is supported by cost-effectiveness analysis. The presence of clinical risk factors identifies patients with the most to gain from prophylactic measures, as well as patients who should receive antithrombotic prophylaxis during periods of increased susceptibility, such as postoperatively or post partum. For example, a patient with homocysteinemia should get folic acid supplements as a preventive measure and antithrombotic prophylaxis during surgery.

For prophylactic treatment, the following risk categories are identified for inpatients (risk of venous thrombosis, %):

High risk (calf vein thrombosis 40-80, proximal vein thrombosis 10-20, PE 1-5):
1. General surgery in a patient >40 years with recent DVT or PE
2. Extensive pelvic or abdominal surgery
3. Major orthopaedic surgery of the lower limb

Low risk (calf vein thrombosis 10-40, proximal vein thrombosis 2-10, PE 0.1-0.7):
1. General surgery in a patient >40 years lasting 30 minutes or more
2. Immobilization with medical illness (cardiac disease, stroke, chronic respiratory disease, bowel disease and malignancy)

(VT - venous thromboembolism; PE - pulmonary embolism; DVT - deep vein thrombosis)

The methods used for prophylaxis in these patients are early ambulation, low dose unfractionated heparin, intermittent pneumatic compression, graduated compression stockings, oral anticoagulants, dextran and low molecular weight heparin. These methods are used alone or in combination, the choice depending upon the risk stratification.
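The approximate event rates in the risk table above can be captured in a small lookup for quick reference. This is a sketch only; the category and site names are hypothetical identifiers, and the (min, max) percentage ranges are those given in the table.

```python
# Event-rate ranges, in percent, taken from the risk-stratification table.
VTE_RISK = {
    "high": {"calf": (40, 80), "proximal": (10, 20), "pe": (1, 5)},
    "low":  {"calf": (10, 40), "proximal": (2, 10), "pe": (0.1, 0.7)},
}

def risk_range(category, site):
    """Return the (min, max) percentage risk for a risk category and site."""
    return VTE_RISK[category][site]
```

Such a lookup makes the stratification explicit when choosing among the prophylactic methods listed in the text.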
A simple regimen of low dose unfractionated heparin is 5000 U given 2 hours prior to surgery, continued postoperatively at a dose of 5000 U every 12 hours.

6th ACCP Recommendations24

Treatment of VTE

1. Patients with DVT or PE should be treated acutely with LMW heparin, unfractionated IV heparin, or adjusted-dose subcutaneous heparin.
2. When unfractionated heparin is used, the dose should be sufficient to prolong the APTT to a range that corresponds to a plasma heparin level of 0.2 to 0.4 IU/mL by protamine sulfate or 0.3 to 0.6 IU/mL by an amidolytic anti-Xa assay.
3. LMW heparin offers the major benefits of convenient dosing and facilitation of outpatient treatment. LMW heparin treatment may result in slightly less recurrent VTE and may offer a survival benefit in patients with cancer.

Initial Anticoagulation With Heparin

1. Treatment with heparin or LMW heparin should be continued for at least 5 days, and oral anticoagulation should be overlapped with heparin or LMW heparin for at least 4 to 5 days. For most patients, treatment with warfarin can be started together with heparin or LMW heparin. The heparin product can be discontinued on day 5 or day 6 if the INR has been therapeutic for 2 consecutive days.
2. For massive PE or severe iliofemoral thrombosis, a longer period of heparin therapy of approximately 10 days is required.

1. Oral anticoagulant therapy should be continued for at least 3 months to prolong the prothrombin time to a target INR of 2.5 (range, 2.0 to 3.0). When oral anticoagulation is either contraindicated or inconvenient, a treatment dose of LMW heparin, or unfractionated heparin adjusted to prolong the APTT to a time that corresponds to a therapeutic plasma heparin level for most of the dosing interval, should be used.
2. Patients with reversible or time-limited risk factors should be treated for at least 3 months.
3. Patients with a first episode of idiopathic VTE should be treated for at least 6 months.
4.
Patients with recurrent idiopathic VTE, or a continuing risk factor such as cancer, antithrombin deficiency, or the anticardiolipin antibody syndrome, need treatment for 12 months or longer.
5. Symptomatic isolated calf vein thrombosis should be treated with anticoagulation for at least 6 to 12 weeks.

Patients with hemodynamically unstable PE or massive iliofemoral thrombosis, who are at low risk to bleed, are the most appropriate candidates for thrombolytic therapy.

Inferior Vena Caval Procedures

Placement of an inferior vena caval filter is recommended when there is a contraindication to, or complication of, anticoagulant therapy in an individual with, or at high risk for, proximal vein thrombosis or PE. Placement of an inferior vena caval filter is also useful for recurrent thromboembolism that occurs despite adequate anticoagulation, for chronic recurrent embolism with pulmonary hypertension, and with the concurrent performance of surgical pulmonary embolectomy or pulmonary thromboendarterectomy.

References

1. Kniffin WD, Baron JA, Barret J, et al. The epidemiology of diagnosed pulmonary embolism and deep venous thrombosis in the elderly. Arch Intern Med 1994;154:861-866.
2. De Stefano V, Finazzi G, Mannucci PM. Inherited thrombophilia: pathogenesis, clinical syndromes and management. Blood 1996;87:3531-3544.
3. Wheeler HB. Diagnosis of deep vein thrombosis: review of clinical evaluation and impedance plethysmography. Am J Surg 1985;150:7-13.
4. Lensing AWA, Prandoni P, Brandjes D, et al. Detection of deep-vein thrombosis by real-time B-mode ultrasonography. N Engl J Med 1989;320:342-345.
5. Elias A, Le Corff G, Bouvier JL, Benichou M, Serradimigni A. Value of real time B mode ultrasound imaging in the diagnosis of deep vein thrombosis of the lower limbs. Int Angiol 1987;6:175-182.
6. Heijboer H, Buller HR, Lensing AWA, Turpie AGG, Colly LP, ten Cate JW. A comparison of real-time compression ultrasonography with impedance plethysmography for the diagnosis of deep-vein thrombosis in symptomatic outpatients.
N Engl J Med 1993;329:1365-1369.
7. White RH, McGahan JP, Daschbach MM, Hartling RP. Diagnosis of deep-vein thrombosis using duplex ultrasound. Ann Intern Med 1989;111:297-304.
8. Seligsohn U, Lubetsky A. Genetic susceptibility to venous thrombosis. N Engl J Med 2001;344:1222-1231.
9. Matei D, Brenner B, Marder VJ. Acquired thrombophilic syndromes. Blood Reviews 2001;15:31-48.
10. Brandjes DPM, Heijboer H, Büller HR, de Rijk M, Jagt H, ten Cate JW. Acenocoumarol and heparin compared with acenocoumarol alone in the initial treatment of proximal-vein thrombosis. N Engl J Med 1992;327:1485-1489.
11. Levine MN, Raskob GE, Landefeld S, Hirsh J. Hemorrhagic complications of anticoagulant treatment. Chest 1995;108(Suppl):276S-290S.
12. Hull RD, Raskob GE, Rosenbloom D, et al. Heparin for 5 days as compared with 10 days in the initial treatment of proximal venous thrombosis. N Engl J Med 1990;322:1260-1264.
13. Gallus AS, Jackaman J, Tillett J, Mills W, Wycherley A. Safety and efficacy of warfarin started early after submassive venous thrombosis or pulmonary embolism. Lancet 1986;2:1293-1296.
14. Leizorovicz A, Simonneau G, Decousus H, Boissel JP. Comparison of efficacy and safety of low molecular weight heparins and unfractionated heparin in initial treatment of deep venous thrombosis: a meta-analysis. BMJ 1994;309:299-304.
15. Lensing AWA, Prins MH, Davidson BL, Hirsh J. Treatment of deep venous thrombosis with low-molecular-weight heparins: a meta-analysis. Arch Intern Med 1995;155:601-607.
16. Siragusa S, Cosmi B, Piovella F, Hirsh J, Ginsberg JS. Low-molecular-weight heparins and unfractionated heparin in the treatment of patients with acute venous thromboembolism: results of a meta-analysis. Am J Med 1996;100:269-277.
17. Van Der Heijden JF, Prins DMH, Buller HR. Initial treatment of patients with venous thromboembolism. Educational Book, 5th Congress of the European Hematology Association, 2000.
18. Hull R, Hirsh J, Jay R, et al.
Different intensities of oral anticoagulant therapy in the treatment of proximal-vein thrombosis. N Engl J Med 1982;307:1676-1681.
19. Elalamy I, Lecrubier C, Horellou MH, Conard J, Samama MM. Heparin-induced thrombocytopenia: laboratory diagnosis and management. Ann Med 2000;32(Suppl 1):60-67.
20. Flessa HC, Kapstrom AB, Glueck HI, Will JJ. Placental transport of heparin. Am J Obstet Gynecol 1965;93:570-573.
21. Melissari E, Parker CJ, Wilson NV, et al. Use of low molecular weight heparin in pregnancy. Thromb Haemost 1992;68:652-656.
22. Orme ML, Lewis PJ, de Swiet M, et al. May mothers given warfarin breast-feed their infants? BMJ 1977;1:1564-1565.
23. Ginsberg JS, Greer I, Hirsh J. Use of antithrombotic agents during pregnancy. Chest 2001;119(1 Suppl):122S-131S.
24. Hirsh J, Dalen JE, Guyatt G. The Sixth (2000) ACCP Guidelines for Antithrombotic Therapy for Prevention and Treatment of Thrombosis. Chest 2001;119(1 Suppl):1S-2S.
Can You Revive an Extinct Animal? By D.T. MAX, The New York Times, January 1, 2006 Reinhold Rau is one of the last of his breed. He was once part of a team of seven taxidermists who, during the apartheid years in South Africa, mounted mammals and birds for the natural-history museum in Cape Town. You can still see his work there. The leopard moving toward its prey on the third floor is Rau's creation, as is the zebra fawn in a nearby glass case, taking shelter under an adult. Rau loves his work - the stripping of the animal's skin from the body, the construction of the mold that replaces its flesh, the sleight of hand that brings about a permanent version of the animal's old self. "Sometimes when the schoolchildren come and see taxidermy, they almost faint," he told me recently in his accented English (he grew up in Germany). "But it never had that effect on me." During apartheid, displaying South African wildlife trophies behind glass accorded with the regime's image of itself as a first-world power; it showcased its dominion over nature. But since the changeover to a majority black government in the mid-90's, the natural history museum has turned away from Rau's kind of work. In addition to fauna, African culture has become an increasingly important focus; and video installations have superseded mounted animals in an attempt to present the natural world more on its own terms. Over time, all but one of Rau's colleagues in taxidermy have left the museum. Technically, Rau, too, is retired, but as he says, "I never left and they never kicked me out." As a result he can still be found in his office, helping out on the occasional freelance taxidermy assignment. And there, past a mounted South American maned fox and a four-foot-long polyurethane model of a Permian Era reptile, he pursues his true passion: the Quagga Breeding Project. The quagga was a horselike animal native to southern Africa that went extinct in 1883. 
Its head, neck and shoulders and sometimes the forward part of its flank were covered with stripes; the back part of its torso, its rump and legs were unstriped. An old joke among the Dutch, the first Europeans to settle in South Africa, was that the quagga was a zebra that had forgotten its pajama pants. Rau's goal, which he has been working toward for three decades, is to breed the quagga back into existence. His approach is to take zebras that look more quaggalike than the norm and mate them with one another, generation after generation, progressively erasing the stripes from the back part of their bodies. This may sound preposterous. How likely is it that deliberate breeding can retrace the path of natural selection by which the quagga split off from the plains zebra more than a hundred thousand years ago? But over the years Rau's project has gained some establishment support. Several scientific studies of the zebra family, for instance, have suggested that plains zebras and quaggas were closely enough related to make Rau's project feasible from a genetic point of view. This is important to Rau, because he doesn't seem to want just to create a quagga look-alike but to recreate - or at least closely approximate - the genetic original. And beginning in the late 80's, the Namibian and South African park systems supplied Rau with promising animals so that he could put his ideas into practice. (The South African park system, as well as the natural history museum, also absorbs some of the small, ongoing cost of the project.) Over years of breeding, Rau has made great progress creating zebras that look like quaggas. With each generation, his herds show fewer and fainter stripes in the back. "Even we have been surprised by the progress," he told me. Last January, Henry, Rau's most convincing quagga foal yet, was born on a private preserve outside Cape Town. 
Rau says that Henry, a third-generation descendant (on his mother's side) of the project's original zebras, is very near to perfection. He has some of the brownish color quaggas had and, if not for a few stripes on Henry's hocks - the joint in the middle of the hind leg - Rau says he would be tempted to announce that the project was done. His life's work would be finished, and to his mind, a great ecological wrong righted. Still, Rau's project raises a lot of questions. Most central among them: If you breed something that looks like a certain animal, does that mean you have actually recreated that animal? What, in other words, gives an animal its identity - its genetic makeup? Its history? Its behavior? Its habitat? The problem, as it was put to me by Oliver Ryder, who directs a project at the San Diego Zoo to preserve rare-animal DNA, is that even if Rau succeeds in creating an animal with the exact appearance of a quagga, uncertainty will remain about what he has done. "We won't be able to know," Ryder said, "how much quagga-ness is in it." Extinction is always a sad story, but the quagga's is sadder than most. The animal once lived in vast herds, often intermixed with gnu and ostriches, "for the society of which bird especially," the 19th-century English explorer Capt. William Cornwallis Harris noted, "it evinces the most singular predilection." That was probably Harris's fancy: in reality little is known about the quagga's behavior other than that it grazed on plains grass and emitted a bark - kwahaah! - in moments of danger or excitement. Dutch settlers recorded the name as "quacha," with a guttural "ch," and that remains the correct way to pronounce it: KWAH-ha. The quagga had an unfortunate series of interactions with humans. Sportsmen on the Cape of Good Hope hunted the animal with enthusiasm. That was the first blow to the quagga. 
The second blow was an influx of farmers of Dutch or German origin, known as Voortrekkers, in the early 19th century into the quagga's home territory, the Karoo plain. The farmers did not want quaggas sharing grass with their livestock. When they saw a quagga, they shot it. They used the hide and gave the meat to the servants. (An 1838 article in The Penny Magazine commented that while the whites thought quagga tasted no better than horseflesh, "the natives, however, relish it.") The number of quaggas soon went into free fall. By the 1860's, they were scarce. A few decades earlier they numbered in the tens of thousands. By the 1870's there appeared to be none left in Africa. Two things make the extinction of the quagga especially heart-rending. First was the manner of its disappearance. The quagga left the world so quietly that when the Cape Town colony finally put in place legislation to protect the animal, in 1886, the government didn't even know that the last quagga in the world died nearly three years before. Not only were there no quaggas living anywhere anymore, there were also apparently almost no dead quaggas to be found, either: little to nothing in the way of quagga rugs, stuffed quaggas, quagga photographs. An article in The New York Times in 1900 noted that "one poor skin in the Natural History Museum, London" seemed to be "all that remains of this noble creature to prove that it ever existed." The other depressing thing about the disappearance of the quagga is that its fate seemed somehow connected to its ordinariness. The plains zebra, after all, filled a similar niche in nature. It, too, ate grassland that humans wanted for their livestock. But the plains zebra is a regal, stunning animal - "fierce, strong, fleet and surpassingly beautiful" in Captain Harris's words - whereas the quagga was ragtag. It seemed the sketch of something of which the plains zebra was the full realization, and so no one thought to save it. 
But the quagga never entirely disappeared from memory. Its extinction has always managed to touch a few individuals, at different times and different places. It was one of the motivating forces behind a European pact in 1900 to preserve vanishing African species like the giraffe and the rhinoceros. And Otto Antonius, director of the Vienna zoo from 1924 to 1945, commissioned a painting called "The Extermination of the Quagga" - in which men on horseback fire as donkeylike heads rear up before a fusillade of bullets - as a reminder of the tragedy of the animal's elimination. Today, thanks to Rau, who bought the painting from Antonius's daughter, "The Extermination of the Quagga" belongs to the Quagga Breeding Project and can be found in the museum where Rau has his office. Rau first became interested in the fate of the quagga in 1959, soon after he arrived at the South African Museum, where he came upon a poorly maintained stuffed quagga foal. It offended his professional standards as a taxidermist to see an extinct animal so neglected. "It was most crudely stuffed and stuck in the midst of other animals - I felt someone ought to do something about it," he recalled not long ago, when I first met him in Cape Town. "If you consider how ignorance and greed wiped out the quagga, this is a tragedy," he told me. He began to think he had a responsibility - even a destiny - to "reverse this disaster." Rau is 73, a barrel-chested, compact man with a white beard and eyebrows that curl in spires over his eyes like the native South African plant spekboom. He has the intensity of people who have spent a long time opposing conventional thinking. When he began his quagga project, he expected - correctly - that he would meet with resistance from the scientific community. "Never trust a taxidermist on taxonomy," he said to me, with dry bitterness. At the outset, Rau had no guidelines to follow. He knew as well as anyone that extinction was forever. 
(Even today, despite "Jurassic Park" fantasies, no one has been able to clone an extinct animal because DNA typically degrades too quickly.) But in the early 1970's he remembered something unusual he saw in his youth. When he was growing up near Frankfurt some 30 years before, he went to a circus where an animal called an auroch was paraded. The auroch, a powerful ox, had been extinct since the early 17th century, but two brothers, Lutz and Heinz Heck, both German zoo directors, each tried in the 1920's to recreate the beast by scrupulously mating existing breeds of cattle with one another - getting the auroch's body size from one, its coloring from another. Rau suspected that it might be possible to reverse the extinction of the quagga the same way. The key would be to find the characteristics of the quagga in the existing zebra population, select zebras that exhibited such characteristics and breed them to bring out those characteristics. Paradoxically, what in part had condemned the quagga to extinction - its close relationship to the plains zebra - might now be its salvation. Of course, there was a scientific objection to overcome. Most taxonomists have held that the quagga and the plains zebra were entirely separate species. While there is no universally accepted definition of a species - it is some combination of genetic difference, physical difference and inability to interbreed and produce fertile offspring (among other evolving criteria) - in general, taxonomists believe they know a species when they see it. By most definitions it is impossible to breed one separate species from another: they have diverged permanently, and you cannot reverse the evolutionary history to rejoin them. But Rau maintained that the quagga was merely a subspecies, or a color variant, of the plains zebra - distantly related enough to look different but closely related enough to be a candidate for interbreeding. 
At the time, Rau had no captive zebra stock and no land to graze zebras on, so he couldn't test his hypothesis. And when he looked for help in starting up his project, he got nowhere. A letter from one national park official in South Africa described his project as "an academic exercise of very dubious conservation value." But Rau did not let the rejections demoralize him. "As a nonscientist, I could afford to have scientists look and sniff at me," he said. "I did not care about their opinions, and I did not have to care." He began to gather evidence that the quagga had been a subspecies of the plains zebra. For one thing, he discovered, European colonists, following the Hottentots' example, apparently regarded the two animals as interchangeable: they used the word "quagga" for both. Also, Rau knew taxidermy - hides - and he knew that plains zebras were far from uniformly striped. In fact, as you went south, their stripes faded out, and they got browner. In other words, they became more quaggalike. That suggested a spectrum between quaggas and zebras, rather than two boxes with one species in each. The next step was to examine as many preserved quagga specimens as he could. Though there are known photographs of only one living quagga, there are 23 mounted quaggas preserved at various museums throughout the world. (There had been 24 until 1945, when drunken Russian troops occupying a villa where the Germans had moved the museum holdings of Konigsberg threw one out a window.) Rau visited many of the extant quaggas in the early 70's, carefully cataloging their markings. He went to Mainz, Leiden, Munich, Wiesbaden and Amsterdam. He was approaching 40 and had no other obligations in his life. ("I had a fiancée once," he told me enigmatically. 
"She married an American.") As his expertise in quagga became known, he was invited by museums to restore the tattered quaggas he found, and when he did, feeling he was correcting the errors of previous taxidermists, he made small changes - old preserved skin has very little give - that also had the effect of making the animals look more like plains zebras. If he could not shape the future, it seemed, he could at least remake the past. For all Rau's diligence, he couldn't find any institutional support. Without it, he did not have the resources to keep and breed animals. "I was about to give up," he told me. Then in 1981 he heard from Oliver Ryder, a geneticist associated with the San Diego Zoo. Ryder was looking for blood and skin samples of living zebras, but Rau wrote him back that he had something better - muscle and blood vessels preserved from extinct quaggas. (Whoever had first skinned those quaggas had done a sloppy job, leaving small pieces of flesh connected to the hide.) Ryder was thrilled. It was the moment Rau had been waiting for - he had been keeping bits of quagga flesh in reserve ever since he remounted a quagga in 1969 - and he sent the samples off. Over the next few years, researchers successfully extracted portions of DNA from the quagga tissue - an achievement that in 1984 made front-page news throughout the world. (In the book "Jurassic Park," the successful recovery of quagga DNA emboldens entrepreneurial scientists to try cloning dinosaurs.) Rau, however, was not interested in the big news of the announcement: that cloning extinct animals might one day be possible. He was interested in a related experiment by Ryder, published the following year, which compared the proteins in plains zebras and quaggas and reported that "the quagga probably ought to be considered a variant of the plains zebra and not a distinct species." In a letter to Rau, Ryder wrote, "This, I am sure, does not surprise you." 
For Rau, this was the long-sought confirmation that his dream was achievable. A quagga could, in theory, be bred back into existence from zebras. On the strength of this result, in part, Rau was able to move his project forward. In 1986, the Namibian national parks agreed to supply him with a group of plains zebras, and one year later it sent him a hodgepodge of zebras captured in the Etosha National Park. Rau had arranged with the Cape Department of Nature Conservation for a farm to be set aside for the zebras and to have them delivered there. Some zebras died on the way, and some were the wrong color for the experiment. "I was not impressed overall with the quality of what we had," Rau recalled. Still, after 12 years of trying to get the world's attention, he was glad he could make a start. This past August, on the 122nd anniversary of the death of the last quagga, I visited the Amsterdam Zoo, where the quagga spent its final years. The animal, a mare who was 12 years old at the time of her death, had been so tame she would allow people to pet her when her keeper was present. Consistent with the poor luck of her species, no one realized when she died that she was the last of her kind. After her death, the mare's hide was mounted and put on display at the zoo. Over time its importance became known, but all the same, 15 years ago it was moved to a large room in a zoo building that is not open to the general public. When I approached the zoo about seeing the quagga, a pleasant semiretired biologist named P.J.H. van Bree said he would show it to me. When van Bree opened the door to the room, 50 sets of glass eyes met mine. Along with the quagga were dozens of taxidermic specimens, many from the Dutch past as a colonial power - rows of antelopes, zebras, a grizzly bear in an eight-foot-high plastic bag, a leopard and a black-maned cape lion, who, before humans extinguished it in the mid-1800's, may well have enjoyed more than one helping of quagga. 
The quagga was in a climate-controlled case, but the front pane was not attached. The animal was in poor shape. Its skin was separating on its back and on a rear leg. I touched the animal's bristly back, its backward pointing ears, its sensitive muzzle. I touched its hind quarters, trying to sense the life that was once inside but finding only the cast the taxidermist had made. To my eye, the quagga - this quagga anyway - looked more like a donkey than a zebra. It had a straight back, and its neck jutted forward. Its stripes were very light at the neck, fading to a moiré silkiness at midframe. Its underlying color was very brown. Across the room there was a glorious example of a mountain zebra looking like a small thoroughbred in a Mary Quant frock, and for me it was hard to believe that the two animals were related at all. The subject of Rau's quagga project came up, and van Bree expressed skepticism. "I have no objection," he said, "but just because a man may look like Napoleon, that does not make him Napoleon." It's true that there is as much evidence that Rau's project is impossible as that it's possible. Since the 1985 study of quagga proteins, researchers have gone back and forth on the genetic and physical differences and similarities between the quagga and the plains zebra. The most recent and extensive analysis, published online last summer in Biology Letters of the Royal Society, suggested to some that the mitochondrial DNA of the plains zebra and the quagga was similar enough for them to be members of the same species but also said that there was no evidence that they had actually interbred. Rau saw the report as an endorsement of his ideas. But an author of the paper, Robert Fleischer of the Smithsonian Institution, told me that the scientists themselves had not been able to reach a conclusion as to what the relationship between the quagga and the plains zebra was. He said ultimately the question cannot be answered. Why not? 
Partly because no one knows enough about quagga behavior. Species - even subspecies - don't differ just in shape and color from one another; they differ in behavior: foraging habits, social habits, aggressiveness. (It's here, for instance, that the auroch project, to rebreed the extinct European ox, foundered. What the Heck brothers got was a large ox with better horns but not an animal whose behavior necessarily matched that of its extinct antecedent.) Rau's mantra, which he said to me many times, is that the "quagga was nothing more than a southern variant of the plains zebra." He says it behaved exactly like the zebra in the wild. But the truth is that he doesn't know because the information doesn't exist. As van Bree told me, "before 1880 people were not interested in animal behavior." Rau is a taxidermist, trained to recreate appearances, not to delve beneath them. But in the course of my conversations with several scientists, I noticed that those who talked about Rau, even those who condescended to his project, spoke of him with respect. For some, like Oliver Ryder, it doesn't seem to matter whether Rau is breeding a true quagga, or a zebra without its pajama pants, or an animal that looks like a quagga but doesn't share the quagga's genetic makeup. What seems to matter is that Rau does not accept that he is powerless to change the course of the mass extinction that has been under way for the past century. Instead he has reasserted the role of humans as custodians of nature. "They're going to thank us for what we save," Ryder told me. And ultimately what Rau may be saving is a part of ourselves. "Not one man in a thousand has accuracy of eye and judgment sufficient to become an eminent breeder," Charles Darwin wrote in "The Origin of Species." Rau has turned out to have that rare touch. He is "a country boy," in his words, with a knack for animal husbandry. 
To make his zebras lose their stripes more quickly, he brought in some lightly striped zebras from South Africa's KwaZulu-Natal region and bred the two groups. In the late summer, I went to see Henry, Rau's star quagga, for myself. Rau drove me out of Cape Town in one of the museum's vehicles for a tour of his animals, which now number more than 100. At first meeting, Rau can seem dogmatic, painting the world as us versus them, black versus white - but as I got to know him, he proved to be quite charming, with a flinty sense of humor. He lives alone in a southern suburb of Cape Town with two dogs and six species of European finch. The days I saw him he wore a striped sweater that had brown discolorations from where it dried on a radiator; it was as if he were working on becoming a quagga, too. Often, in the early days of his project, when Rau did not have the money for game-quality fencing, he put his zebras wherever he found adequate barriers already in place. As a result I saw some of his animals at an explosives factory and others at a particle-accelerator facility. Today, the project still operates on a tiny budget drawn from individual contributions; but because several private game-preserve owners keep the animals as tourist draws, most of his animals live better. Henry, for instance, lives on a private preserve, owned by a wealthy plastics manufacturer, about 45 minutes north of Cape Town. The preserve is large enough that if Henry wants to stay out of sight, it is very unlikely a person can find him, even with a car. "Let's keep thumbs that the little boy will present himself," Rau said as we began our search. Eventually, we found Henry grazing on a heath just down the hill from a gnu and near some bontebok. His stripes began at the head like a bandit's mask, his black comb stood up like a centurion's, but that was where his resemblance to a plains zebra ended: his pelt from his rib cages to his buttocks was a soft, almost-unstriped yellow brown. 
He also had that moiré silkiness to his middle that I saw in the hide in Amsterdam. "What a lovely thing he is," Rau kept saying, looking through his binoculars. "Look at those stripes. They go nowhere near the belly. That's very quagga." Henry and his group - a stallion and three mares - had the grace of wild things. The sun shone off them, the ocean was behind the hills they ranged over and they seemed to hear a music I didn't. If the bontebok next to them ran, they ran. If one zebra turned and showed us its rump, they all did. The stallion stood apart, seemingly ready to fight if we made any sudden moves. Not that the scene was actually truly wild - a cellphone rang in our S.U.V., there were power lines over the next hill and the landscape was full of vegetation that had come from Australia. Human intervention has changed this landscape in radical ways for 350 years. Whatever progress Rau may have made in bringing the quagga back to the world, we are not in the world the quagga knew, and it seems safe to say we will never be again. Modern technology, though, may eventually carry the quagga project beyond where Rau can take it. Robert Fleischer at the Smithsonian told me that "not now, not in 6 but maybe in 20 years," technology would be available to repair DNA from extinct animals, which might then be used to clone them back to life. The high quality of the DNA samples from the quagga skins might make the quagga a candidate for this revival, Fleischer suggests. That would be very good news, although, arguably, still the easy part. There is nothing natural about a natural landscape remade by humans. What are we bringing these animals back to? "Let it also be borne in mind," Darwin wrote in "The Origin of Species," "how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life." You have to wonder if we are really intelligent enough to redesign nature. 
This doubt was brought home to me on my way to see Rau, during a stop I made at Addo Elephant National Park, several hundred miles east of Cape Town near Port Elizabeth. Recently some of Rau's rejects from the quagga breeding project were released there, into what had been farmland not long ago, along with some lightly striped zebras bred by the national parks themselves. Rau said they had been sent to be "lions' lunch," but the lions, brought in from the Kalahari, where there are no zebras, didn't bother with them. They turned their attention instead to the local buffaloes. Lions usually have a hard time killing buffaloes - the buffaloes make a circle around their young and hold off the predators with their horns - but these buffaloes had lost their knowledge of how to defend themselves, so they were now easy targets. The park system relies on the sale of buffaloes to help finance the park's expansion; instead predators had pulled down 80 of them. It was a striking example of how hard it is to restore nature once you have damaged it. For instance, even as Rau's creation is making its reappearance, its cousin the Grevy's zebra, an intensely striped zebra native to East Africa, has become threatened with extinction. Rau does not seem to think about these sorts of things much. He seems to accept that nothing he will do can mitigate the larger disaster that may be awaiting the natural world and that only some of the animals he so laboriously rebreeds will go to natural parks while the rest will go to hunting preserves - where they will be targets for sportsmen. I asked Rau whether, given this vision of the future, spending 30 years to erase a half a set of stripes on an obscure extinct animal was worth it. We were driving on a highway outside Cape Town, no antelopes, no spekboom in sight, a long way from the Karoo, the dry plains where the quaggas had once lived in huge herds. "You would find it a bit disillusioning?" he asked. "Not to me. 
We would have given back to the Karoo - we will have given back to the Karoo - its original zebra. And that will be enough for me." Henry, the closest thing to a quagga in more than a century, on a preserve near Cape Town. D.T. Max, a frequent contributor to the magazine, is working on "The Dark Eye," a cultural and scientific history of mad-cow and other prion diseases.
Anatomy and Physiology of Animals/Endocrine System

After completing this section, you should know:
- The characteristics of endocrine glands and hormones
- The position of the main endocrine glands in the body
- The relationship between the pituitary gland and the hypothalamus
- The main hormones produced by the two parts of the pituitary gland and their effects on the body
- The main hormones produced by the pineal, thyroid, parathyroid and adrenal glands, the pancreas, ovary and testicle in regard to their effects on the body
- What is meant by homeostasis and feedback control
- The homeostatic mechanisms that allow an animal to control its body temperature, water balance, blood volume and acid/base balance

The Endocrine System

In order to survive, animals must constantly adapt to changes in the environment. The nervous and endocrine systems work together to bring about this adaptation. In general, the nervous system responds rapidly to short-term changes by sending electrical impulses along nerves, while the endocrine system brings about longer-term adaptations by sending out chemical messengers called hormones into the bloodstream. For example, think about what happens when a male and female cat meet under your bedroom window at night. The initial response of both cats may include spitting, fighting and spine-tingling yowling - all brought about by the nervous system. Fear and stress then activate the adrenal glands to secrete the hormone adrenaline, which increases the heart and respiratory rates. If mating occurs, other hormones stimulate the release of ova from the ovary of the female, and a range of different hormones maintains pregnancy, delivery of the kittens and lactation.

Endocrine Glands and Hormones

Hormones are chemicals that are secreted by endocrine glands.
Unlike exocrine glands (see chapter 5), endocrine glands have no ducts but release their secretions directly into the blood system, which carries them throughout the body. However, hormones only affect the specific target organs that recognize them. For example, although it is carried to virtually every cell in the body, follicle stimulating hormone (FSH), released from the anterior pituitary gland, only acts on the follicle cells of the ovaries, causing them to develop. A nerve impulse travels rapidly and produces an almost instantaneous response, but one that lasts only briefly. In contrast, hormones act more slowly and their effects may be long lasting. Target cells respond to minute quantities of hormones, and the concentration in the blood is always extremely low. However, target cells are sensitive to subtle changes in hormone concentration, and the endocrine system regulates processes by changing the rate of hormone secretion. The main endocrine glands in the body are the pituitary, pineal, thyroid, parathyroid and adrenal glands, the pancreas, ovaries and testes. Their positions in the body are shown in diagram 16.1.

Diagram 16.1 - The main endocrine organs of the body

The Pituitary Gland and Hypothalamus

The pituitary gland is a pea-sized structure attached by a stalk to the underside of the cerebrum of the brain (see diagram 16.2). It is often called the "master" endocrine gland because it controls many of the other endocrine glands in the body. However, we now know that the pituitary gland is itself controlled by the hypothalamus. This small but vital region of the brain lies just above the pituitary and provides the link between the nervous and endocrine systems. It controls the autonomic nervous system, produces a range of hormones and regulates the secretion of many others from the pituitary gland (see chapter 7 for more information on the hypothalamus).
The pituitary gland is divided into two parts with different functions - the anterior and posterior pituitary (see diagram 16.3).

Diagram 16.2 - The position of the pituitary gland and hypothalamus
Diagram 16.3 - The anterior and posterior pituitary

The anterior pituitary gland secretes hormones that regulate a wide range of activities in the body. These include:
- 1. Growth hormone, which stimulates body growth.
- 2. Prolactin, which initiates milk production.
- 3. Follicle stimulating hormone (FSH), which stimulates the development of the follicles of the ovaries. These then secrete oestrogen (see chapter 6).
- 4. Melanocyte stimulating hormone (MSH), which causes darkening of the skin by producing melanin.
- 5. Luteinizing hormone (LH), which stimulates ovulation and the production of progesterone and testosterone.

The posterior pituitary gland releases:
- 1. Antidiuretic hormone (ADH), which regulates water loss and increases blood pressure.
- 2. Oxytocin, which stimulates milk "let down".

The Pineal Gland

The pineal gland is found deep within the brain (see diagram 16.4). It is sometimes known as the "third eye" as it responds to light and day length. It produces the hormone melatonin, which influences the development of sexual maturity and the seasonality of breeding and hibernation. Bright light inhibits melatonin secretion. The low level of melatonin in bright light makes an animal feel good and increases fertility, while the high level of melatonin in dim light makes an animal tired and depressed and therefore lowers fertility.

Diagram 16.4 - The pineal gland

The Thyroid Gland

The thyroid gland is situated in the neck, just in front of the windpipe or trachea (see diagram 16.5). It produces the hormone thyroxine, which influences the rate of growth and development of young animals. In mature animals it increases the rate of chemical reactions in the body. Thyroxine consists of 60% iodine, and too little iodine in the diet can cause goitre, an enlargement of the thyroid gland.
Many inland soils in New Zealand contain almost no iodine, so goitre can be common in stock when iodine supplements are not given. To add to the problem, chemicals called goitrogens, which occur naturally in plants like kale that belong to the cabbage family, can also cause goitre even when adequate iodine is available.

Diagram 16.5 - The thyroid and parathyroid glands

The Parathyroid Glands

The parathyroid glands are also found in the neck, just behind the thyroid glands (see diagram 16.5). They produce the hormone parathormone, which regulates the amount of calcium in the blood and influences the excretion of phosphates in the urine.

The Adrenal Glands

The adrenal glands are situated on the cranial surface of the kidneys (see diagram 16.6). There are two parts to this endocrine gland, an outer cortex and an inner medulla.

Diagram 16.6 - The adrenal glands

The adrenal cortex produces several hormones. These include:
- 1. Aldosterone, which regulates the concentration of sodium and potassium in the blood by controlling the amounts that are secreted or reabsorbed in the kidney tubules.
- 2. Cortisone and hydrocortisone (cortisol), which have complex effects on glucose, protein and fat metabolism. In general they increase metabolism. They are also often administered to animals to counteract allergies and to treat arthritic and rheumatic conditions. However, prolonged use should be avoided if possible, as they can increase weight and reduce the ability to heal.
- 3. Male and female sex hormones similar to those secreted by the ovaries and testes.

The hormones secreted by the adrenal cortex also play a part in the "general adaptation syndrome" that occurs in situations of prolonged stress. The adrenal medulla secretes adrenalin (also called epinephrine). Adrenalin is responsible for the so-called fight, flight, fright response that prepares the animal for emergencies. Faced with a perilous situation, the animal needs either to fight or to make a rapid escape.
To do either requires instant energy, particularly in the skeletal muscles. Adrenaline increases the amount of blood reaching them by causing their blood vessels to dilate and the heart to beat faster. An increased rate of breathing increases the amount of oxygen in the blood, and glucose is released from the liver to provide the fuel for energy production. Sweating increases to keep the muscles cool, and the pupils of the eyes dilate so the animal has a wide field of view. Functions like digestion and urine production that are not critical to immediate survival slow down as the blood vessels to these parts constrict. Note that the effects of adrenalin are similar to those of the sympathetic nervous system.

The Pancreas

In most animals the pancreas is an oblong, pinkish organ that lies in the first bend of the small intestine (see diagram 16.7). In rodents and rabbits, however, it is spread thinly through the mesentery and is sometimes difficult to see.

Diagram 16.7 - The pancreas

Most of the pancreas acts as an exocrine gland, producing digestive enzymes that are secreted into the small intestine. The endocrine part of the organ consists of small clusters of cells (called islets of Langerhans) that secrete the hormone insulin. This hormone regulates the amount of glucose in the blood by increasing the rate at which glucose is converted to glycogen in the liver and the movement of glucose from the blood into cells. In diabetes mellitus the pancreas produces insufficient insulin, and glucose levels in the blood can increase to a dangerous level. A major symptom of this condition is glucose in the urine.

The Ovaries

The ovaries, located in the lower abdomen, produce two important sex hormones.
- 1. The follicle cells, under the influence of FSH (see the pituitary gland above), produce oestrogen, which stimulates the development of female sexual characteristics - the mammary glands, the generally smaller build of female animals, etc.
It also stimulates the thickening of the lining of the uterus in preparation for pregnancy (see chapter 13).
- 2. Progesterone is produced by the corpus luteum, the endocrine gland that develops in the empty follicle following ovulation (see chapter 13). It promotes the further preparation of the uterine lining for pregnancy and prevents the uterus contracting until the baby is born.

The Testes

Cells around the sperm-producing ducts of the testes produce the hormone testosterone. This stimulates the development of the male reproductive system and the male sexual characteristics - the generally larger body of male animals, the mane in lions, the tusks in boars, etc.

Summary

- Hormones are chemicals that are released into the blood by endocrine glands, i.e. glands with no ducts. Hormones act on specific target organs that recognize them.
- The main endocrine glands in the body are the hypothalamus, pituitary, pineal, thyroid, parathyroid and adrenal glands, the pancreas, ovaries and testes.
- The hypothalamus is situated under the cerebrum of the brain. It produces or controls many of the hormones released by the pituitary gland lying adjacent to it.
- The pituitary gland is divided into two parts: the anterior pituitary and the posterior pituitary.
- The anterior pituitary produces:
  - Growth hormone, which stimulates body growth
  - Prolactin, which initiates milk production
  - Follicle stimulating hormone (FSH), which stimulates the development of ova
  - Luteinizing hormone (LH), which stimulates the development of the corpus luteum
  - Plus several other hormones
- The posterior pituitary releases:
  - Antidiuretic hormone (ADH), which regulates water loss and raises blood pressure
  - Oxytocin, which stimulates milk "let down"
- The pineal gland in the brain produces melatonin, which influences sexual development and breeding cycles.
- The thyroid gland, located in the neck, produces thyroxine, which influences the rate of growth and development of young animals. Thyroxine consists of 60% iodine.
Lack of iodine leads to goitre.
- The parathyroid glands, situated adjacent to the thyroid glands in the neck, produce parathormone, which regulates blood calcium levels and the excretion of phosphates.
- The adrenal gland, located adjacent to the kidneys, is divided into the outer cortex and the inner medulla.
- The adrenal cortex produces:
  - Aldosterone, which regulates the blood concentration of sodium and potassium
  - Cortisone and hydrocortisone, which affect glucose, protein and fat metabolism
  - Male and female sex hormones
- The adrenal medulla produces adrenalin, responsible for the fight, flight, fright response that prepares animals for emergencies.
- The pancreas, which lies in the first bend of the small intestine, produces insulin, which regulates blood glucose levels.
- The ovaries, located in the lower abdomen, produce two important sex hormones:
  - The follicle cells of the developing ova produce oestrogen, which controls the development of the mammary glands and prepares the uterus for pregnancy.
  - The corpus luteum that develops in the empty follicle after ovulation produces progesterone. This hormone further prepares the uterus for pregnancy and maintains the pregnancy.
- The testes produce testosterone, which stimulates the development of the male reproductive system and sexual characteristics.

Homeostasis and Feedback Control

Animals can only survive if the environment within their bodies and their cells is kept constant, independent of the changing conditions in the external environment. As mentioned in module 1.6, the process by which this stability is maintained is called homeostasis. The body achieves this stability by constantly monitoring the internal conditions and, if they deviate from the norm, initiating processes that bring them back to it. This mechanism is called feedback control.
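Feedback control of this kind can be sketched as a small simulation. The sketch below is a minimal negative-feedback loop for body temperature: the 37°C set point comes from the text, but the gain and heat-exchange coefficients are arbitrary illustrative values, not physiological measurements.

```python
# Minimal negative-feedback sketch of temperature homeostasis.
# The 37 C set point is from the text; all other constants are
# arbitrary illustrative values.

SET_POINT = 37.0  # optimum mammalian body temperature, degrees C

def hypothalamus(body_temp):
    """Corrective heat term: negative when too hot (sweating,
    vasodilation), positive when too cold (shivering,
    vasoconstriction), proportional to the deviation."""
    return -0.5 * (body_temp - SET_POINT)

def simulate(start_temp, ambient, steps=60):
    """Run the feedback loop and return the final body temperature."""
    temp = start_temp
    for _ in range(steps):
        passive = 0.02 * (ambient - temp)  # heat exchange with surroundings
        temp += passive + hypothalamus(temp)
    return temp

# Whether the animal starts too hot or too cold, and whether the
# surroundings are cold or hot, feedback pulls it back near 37 C.
print(round(simulate(40.0, ambient=10.0), 1))  # -> 36.0
print(round(simulate(34.0, ambient=45.0), 1))  # -> 37.3
```

Without the `hypothalamus` term the same loop simply drifts toward the ambient temperature, which is exactly the difference between a regulated and an unregulated system.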
For example, to maintain a constant body temperature the hypothalamus monitors the blood temperature and initiates processes that increase or decrease heat production by the body and heat loss from the skin so the optimum temperature is always maintained. The processes involved in the control of body temperature, water balance, blood loss and acid/base balance are summarized below. Summary of Homeostatic Mechanisms 1. Temperature control The biochemical and physiological processes in the cell are sensitive to temperature. The optimum body temperature is about 37°C [99°F] for mammals, and about 40°C [104°F] for birds. Biochemical processes in the cells, particularly in muscles and the liver, produce heat. The heat is distributed through the body by the blood and is lost mainly through the skin surface. The production of this heat and its loss through the skin are controlled by the hypothalamus in the brain, which acts rather like a thermostat on an electric heater. (a) When the body temperature rises above the optimum, a decrease in temperature is achieved by: - Sweating and panting to increase heat loss by evaporation. - Expansion of the blood vessels near the skin surface so heat is lost to the air. - Reducing muscle exertion to the minimum. (b) When the body temperature falls below the optimum, an increase in temperature can be achieved by: - Moving to a heat source e.g. into the sun, out of the wind. - Increasing muscular activity - Making the hair stand on end by contraction of the hair erector muscles, or fluffing of the feathers, so there is an insulating layer of air around the body - Constricting the blood vessels near the skin surface so heat loss to the air is decreased 2. Water balance The concentration of the body fluids remains relatively constant irrespective of the diet or the quantity of water taken into the body by the animal.
Water is lost from the body by many routes (see module 1.6) but the kidney is the main organ that influences the quantity that is lost. Again it is the hypothalamus that monitors the concentration of the blood and initiates the release of hormones from the posterior pituitary gland. These act on the kidney tubules to influence the amount of water (and sodium ions) absorbed from the fluid flowing along them. (a) When the body fluids become too concentrated and the osmotic pressure too high, water retention in the kidney tubules can be achieved by: - An increased production of antidiuretic hormone (ADH) from the posterior pituitary gland, which causes more water to be reabsorbed from the kidney tubules. - A decreased blood pressure in the glomerulus of the kidney, which results in less fluid filtering through into the kidney tubules so less urine is produced. (b) When the body fluids become too dilute and the osmotic pressure too low, water loss in the urine can be achieved by: - A decrease in the secretion of ADH, so less water is reabsorbed from the kidney tubules and more dilute urine is produced. - An increase in the blood pressure in the glomerulus so more fluid filters into the kidney tubule and more urine is produced. - An increase in sweating or panting, which also increases the amount of water lost. Another hormone, aldosterone, secreted by the cortex of the adrenal gland, also affects water balance indirectly. It does this by increasing the absorption of sodium ions (Na+) from the kidney tubules. This increases water retention since it increases the osmotic pressure of the fluids around the tubules and water therefore flows out of them by osmosis. 3. Maintenance of blood volume after moderate blood loss Loss of blood or body fluids leads to decreased blood volume and hence decreased blood pressure. The result is that the blood system fails to deliver enough oxygen and nutrients to the cells, which stop functioning properly and may die.
Cells of the brain are particularly vulnerable. This condition is known as shock. If blood loss is not extreme, various mechanisms come into play to compensate and ensure permanent tissue damage does not occur. These mechanisms include: - Increased thirst and drinking, which increases blood volume. - Blood vessels in the skin and kidneys constrict to reduce the total volume of the blood system and hence retain blood pressure. - Heart rate increases. This also increases blood pressure. - Antidiuretic hormone (ADH) is released by the posterior pituitary gland. This increases water re-absorption in the collecting ducts of the kidney tubules so concentrated urine is produced and water loss is reduced. This helps maintain blood volume. - Loss of fluid causes an increase in osmotic pressure of the blood. Proteins, mainly albumin, released into the blood by the liver further increase the osmotic pressure, causing fluid from the tissues to be drawn into the blood by osmosis. This increases blood volume. - Aldosterone, secreted by the adrenal cortex, increases the absorption of sodium ions (Na+) and water from the kidney tubules. This increases urine concentration and helps retain blood volume. If blood or fluid loss is extreme and the blood volume falls by more than 15-25%, the above mechanisms are unable to compensate and the condition of the animal progressively deteriorates. The animal will die unless a vet administers fluid or blood. 4. Acid/base balance Biochemical reactions within the body are very sensitive to even small changes in acidity or alkalinity (i.e. pH) and any departure from the narrow limits disrupts the functioning of the cells. It is therefore important that the blood contains balanced quantities of acids and bases. The normal pH of blood is in the range 7.35 to 7.45 and there are a number of mechanisms that operate to maintain the pH in this range. Breathing is one of these mechanisms.
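The link between carbon dioxide and blood pH can also be made quantitative. A brief worked sketch using the standard Henderson-Hasselbalch equation for the bicarbonate buffer system (the pKa of 6.1 and the CO2 solubility factor of 0.03 mmol/L per mmHg are standard values for blood at body temperature; the worked numbers below are typical normal values, not figures taken from this text):

```latex
\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}\right)
```

With a typical bicarbonate concentration of 24 mmol/L and a CO2 partial pressure of 40 mmHg, the ratio is 24 / (0.03 × 40) = 20, so pH = 6.1 + log10(20) ≈ 6.1 + 1.3 = 7.4, in the middle of the normal 7.35-7.45 range. If breathing slows and CO2 accumulates, the denominator grows and the pH falls (acidosis); rapid breathing removes CO2 and the pH rises (alkalosis).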
Much of the carbon dioxide produced by respiration in cells is carried in the blood as carbonic acid. As the amount of carbon dioxide in the blood increases, the blood becomes more acidic and the pH decreases. This is called acidosis and when severe can cause coma and death. On the other hand, alkalosis (blood that is too alkaline) causes over-stimulation of the nervous system and when severe can lead to convulsions and death. (a) When vigorous activity generating large quantities of carbon dioxide causes the blood to become too acidic, this can be counteracted in two ways: - By the rapid removal of carbon dioxide from the blood by deep, panting breaths - By the secretion of hydrogen ions (H+) into the urine by the kidney tubules. (b) When over-breathing or hyperventilation results in low levels of carbon dioxide in the blood and the blood is too alkaline, various mechanisms come into play to bring the pH back to within the normal range. These include: - A slower rate of breathing - A reduction in the amount of hydrogen ions (H+) secreted into the urine. Homeostasis is the maintenance of constant conditions within a cell or animal's body despite changes in the external environment. The body temperature of mammals and birds is maintained at an optimum level by a variety of heat regulation mechanisms. These include: - Seeking out warm areas, - Adjusting activity levels, - Dilating or constricting blood vessels on the body surface, - Contraction of the erector muscles so hairs and feathers stand up to form an insulating layer, - Sweating, and panting (in dogs). Animals maintain water balance by: - adjusting the level of antidiuretic hormone (ADH) - adjusting the level of aldosterone, - adjusting blood flow to the kidneys - adjusting the amount of water lost through sweating or panting.
Animals maintain blood volume after moderate blood loss by: - Constriction of blood vessels in the skin and kidneys, - increasing heart rate, - secretion of antidiuretic hormone - secretion of aldosterone - drawing fluid from the tissues into the blood by increasing the osmotic pressure of the blood. Animals maintain the acid/base balance or pH of the blood by: - Adjusting the rate of breathing and hence the amount of CO2 removed from the blood. - Adjusting the secretion of hydrogen ions into the urine. 1. What is homeostasis? 2. Give 2 examples of homeostasis. 3. List 3 ways in which animals keep their body temperature constant when the weather is hot. 4. How does the kidney compensate when an animal is deprived of water to drink? 5. After moderate blood loss, several mechanisms come into play to increase blood pressure and make up blood volume. Name 3 of these mechanisms. 6. Describe how panting helps to reduce the acidity of the blood. - http://www.zerobio.com/drag_oa/endo.htm A drag and drop hormone and endocrine organ matching exercise. - http://en.wikipedia.org/wiki/Endocrine_system Wikipedia. Much, much more than you ever need to know about hormones and the endocrine system, but with a bit of discipline you can glean lots of useful information from this site.
If you're reading this, I imagine you want to communicate with confidence and competence in English. When we communicate effectively we are able to express our ideas and opinions, share experiences, and build relationships with others. When we struggle to express ourselves, we feel unvalued and insecure. As human beings, we want to participate in group discussions and have an impact on the society around us. In the modern world, we communicate across borders. English is the closest thing we have to an international language. By speaking better English, people all over the world can hear our voice. But, to speak better English, you need a teacher, don't you? You need to take English classes, right? Well, English teachers and English classes definitely help. But, studying English for a few hours a week may not improve your spoken English very much. What you need is to become a self-directed learner, somebody who takes responsibility for their own learning and creates their own learning programme to develop their English. Now, it's certainly true that speaking is a social activity and is best done with other people. However, you could say the same about many activities. Leo Messi became a wonderful football player because he spent hours every day for many years practising by himself. You can do the same with your English. Here are 33 ways to speak better English, without going to classes. 1. Record yourself speaking English. Listening to yourself can be strange at first but you get used to it. Listen to a recording of a fluent English speaker (a short audio file) and then record yourself repeating what they said. Compare the difference and try again. Humans are natural mimics so you will find yourself getting better and better. Soundcloud is an excellent tool for voice recording as you or your teacher can make notes about your errors. 2. Read aloud, especially dialogue. Reading aloud is not the same as speaking naturally.
However, it is very useful for exercising the vocal muscles. Practise for 5 or 10 minutes a day and you will begin to notice which sounds are difficult for you to produce. Find transcripts of natural dialogues, such as these here, and practise acting them out with a friend; you will also learn common phrases which we use when speaking. 3. Sing along to English songs while you're driving or in the shower. The lyrics to pop songs are often conversational so you can learn lots of common expressions by listening to them. Humans are also better able to remember words when they are set to music, which is why it is difficult to remember poems but easy to remember the words to songs. Here are some songs to get started with. 4. Watch short video clips and pause and repeat what you hear. YouTube is an amazing resource for language learners and you probably already have your favourite clips. My advice is to watch short clips and really study them. With longer videos, you may find your attention wanders. The key to improving by watching videos is to really listen carefully and use the pause button to focus on sounds and words. Many YouTube videos now have captions. 5. Learn vowel and consonant sounds in English. The phonemic chart is a list of the different vowel and consonant sounds in English. Learning how to make these sounds and then using them to pronounce words correctly will really help you speak English clearly. This is a great resource from the British Council. 6. Learn and identify schwa. What is schwa, you might be asking? Well, it's the most common sound in English: Click here. We use it all the time in words like 'teacher' and 'around'. 7. Learn about weak and strong forms of common words. When you know about the 'schwa' sound, you will listen to native speakers in a different way. English is a stress-timed language, which means that we use a combination of strong and weak forms of some words. For example, which words do we stress in the following sentence?
I want to go for a drink tonight. How do native speakers pronounce to / for / a in the sentence? We use the schwa sound so it sounds like: I wanna go ferra drink tenigh. Learn how and when to use weak forms and your speaking will improve overnight. You will also learn to focus on stressed words when listening to fast, native-speaker English and you will finally be able to understand us! 8. Learn about word stress. When words have more than one syllable, we stress one or more of them. For example, the word intelligent has four syllables, but which syllable do we stress? Click here to find out. Remember that the small vertical mark above the word identifies the stressed syllable: /ɪnˈtel.ɪ.dʒənt/ 9. Learn about sentence stress. Sentence stress refers to the word or words we stress in a phrase of a sentence. When we stress a word, we help the listener understand what is important. If we stress the wrong word or don't stress the key word, the listener may get confused or not realise what is important in the sentence. A few years ago, I enrolled in a gym. I was asked to attend an introductory class at 'five to six'. The Hungarian receptionist stressed the word 'six' so I arrived at 5.55. She looked at me and told me that I was late and the class had nearly finished. She should have stressed 'five' and 'six'; then I would have understood that the class lasted for one hour and began at 5pm! For more on sentence stress, read here. 10. Identify fixed and semi-fixed phrases and practise them. Fixed phrases usually contain between 3 and 7 words and include items like: to be honest in a moment on the other hand A conversation is made of grammatical structures, vocabulary and fixed or semi-fixed phrases. In fact, to tell the truth, on the whole, most of the time, my friends and I communicate with each other in a series of fixed and semi-fixed expressions. 11. Learn about collocations. Words don't like being alone.
They prefer to hang out with their friends and, just like people, some words form close friendships and others never speak to each other. Yellow doesn't get on well with hair. Maybe yellow is jealous of blond because blond and hair are frequently seen out together having a great time. Yellow doesn't understand why hair prefers blond because yellow and blond are so similar. Listen carefully for common combinations of words. Short and small have similar meanings but people have short hair, not small hair. High and tall are often not so different but people have high hopes, not tall hopes. Foxes are sly, not devious. Hours can be happy but are never cheerful. Idiots are stupid but rarely silly. 12. Replace regular verbs with phrasal verbs. Many learners of English don't understand why native speakers use so many phrasal verbs when there are normal verbs (usually with Latin roots) which have the same meaning. English was originally a Germanic language which imported lots of Latin vocabulary after the Norman conquest in the 11th century. Regardless of the historical factors, the fact is that native English speakers use lots and lots of phrasal verbs. If you want to understand us, then try to include them in your conversation. If you make a mistake, you'll probably make us laugh but you are unlikely to confuse us as we can usually guess what you want to say from the context. Phrasal verbs are spatial and originally referred to movement, so when you learn a new one, make physical movements while saying them to help you remember. 13. Learn short automatic responses. Many of our responses are automatic (Right, OK, no problem, alright, fine thanks, just a minute, you're welcome, fine by me, let's do it!, yup, no way!, you're joking, right?, Do I have to? etc.) Collect these short automatic responses and start using them. 14. Practise telling stories and using narrative tenses. Humans are designed to tell stories.
We use the past simple, past continuous and past perfect for telling stories, but when the listener is hooked (very interested), they feel like they are actually experiencing the story right now. So, we often use present tenses to make our stories more dramatic! 15. Learn when to pause for effect. Speaking quickly in English does not make you an effective English speaker. Knowing when to pause to give the listener time to think about what you have said, respond appropriately, and predict what you are going to say does. Imagine you're an actor on a stage; pausing keeps people interested. It's a great strategy if you need to speak English in public. 16. Learn about chunking. Chunking means joining words together to make meaningful units. You don't need to analyse every word to use a phrase. Look at the phrase: Nice to meet you. It's a short phrase (4 words) which can be remembered as a single item. It is also an example of ellipsis (leaving words out) because the words 'It' and 'is' are missing at the beginning of the phrase. However, we don't need to include them. Learn more here. 17. Learn about typical pronunciation problems in your first language. Japanese learners find it difficult to identify and produce 'r' and 'l' sounds; Spanish speakers don't distinguish between 'b' and 'v'; Germans often use a 'v' sound when they should use a 'w'. Find out about the problems people who speak your first language have when speaking English and you will know what you need to focus on. 19. Find an actor/actress you like and identify what makes them powerful speakers. Do you want to sound like Barack Obama, Benedict Cumberbatch (Sherlock Holmes), Beyonce or Steve Jobs? If you want to sound like David Beckham, I advise you to reconsider, unless you want to sound like a young girl! 20. Use a mirror and / or a sheet of paper for identifying aspirated and non-aspirated sounds.
Aspirated sounds are those with a short burst of air, such as the 'p' in 'pen', and unaspirated sounds have no or little air, such as the 'b' in 'Ben'. Watch this video to learn more. What a terrible tongue twister. What a terrible tongue twister. What a terrible tongue twister. 22. Practise spelling names, numbers and dates aloud. This may seem very basic to some of you but if you don't practise, you forget how to say them. Have a go at numbers here and at place names here. 23. Learn about common intonation patterns. Intonation (when the pitch of the voice goes up and down) is complex in English but it is very important as it expresses the feeling or emotion of the speaker. Here is an amusing introduction to intonation. 24. Learn about places of articulation. The articulators are the parts of the mouth we use to turn sound into speech. They can be fixed parts (the teeth, behind the teeth and the roof of the mouth) and mobile parts (the tongue, the lips, the soft palate, and the jaw). Click here for more information. 25. After looking at places of articulation, practise making the movements that native speakers use when they speak. Here's a video, and remember to open the jaws, move the lips and get your tongue moving! 26. Learn why English is a stress-timed language. The rhythm of the language is based on stressed syllables, so we shorten the unstressed syllables to fit the rhythm. Syllable-timed languages (such as Spanish) take the same time to pronounce each syllable. Here's an explanation which might explain why you speak English like a robot, or watch this funny clip here. 29. Speak lower, not higher. Studies show that you command attention and demonstrate authority with a deeper vocal tone, especially for men. This is particularly important if you have to speak in public. Here is a quick guide. 30.
Listen and read along to poetry (or rap songs) to practise the rhythm of English. Limericks (short, funny, rhyming poems) are really useful and demonstrate how English is stress-timed and how we use weak forms. 32. Learn how to paraphrase. Paraphrasing is when we repeat what we have just said to make it clear to the listener, or when we repeat what the other person has said by using different words. Here are a few to get started. 33. Use contractions more. Contractions make your speech more efficient because they save time and energy. Say 'should not' and then say 'shouldn't': which is easier to say? Contractions are very common in fluent speech. Now, here's your CALL TO ACTION. In the next 33 days, spend 15 minutes every day on one of the tips. I'm sure you'll notice a huge improvement. And maybe one day you'll speak English like Messi plays football! Starting a conversation to get to know someone or breaking an awkward silence can be very stressful. To start a conversation when you have nothing to talk about, use these guidelines. Part One of Three: Finding Things to Talk About Remark on the location or occasion. Look around and see if there is anything worth pointing out. Examples of location or occasion comments: "This is a gorgeous room!", "Such incredible catering!", "I love this view!", or "Great dog!" Ask an open-ended question. Most people love to talk about themselves; it's your place as the conversation starter to get them going. An open question requires an explanation for an answer rather than just a simple yes or no. Open questions tend to begin with who, when, what, why, where, and how, whereas closed questions tend to start with do, have, and is/am/are. Closed questions: "Do you like books?", "Have you ever been to this university?", "Is spring your favorite season?", "Am I intruding?", and "Do you come here often?" Open questions: "What sort of books do you like?", "What did you study here at this university?", "Which is your favorite season?
Why?", "What are you doing right now?", and "Where's your usual watering hole?" Know how to combine general remarks with open-ended questions. Since either one of these might be awkward or out-of-place on its own, combine them for maximum effect. For example: "That's a nice handbag, where did you get it?" This lets the handbag owner talk about the day that they went shopping and all this funny stuff happened, as opposed to: "I like your handbag!" "Thank you." (The end.) "What an amazing buffet! Which is your favorite dish?" Asking an opinion is especially useful, as it can be followed up with the classic open-ended question: “Why?” "Fantastic turnout! Which of the lecturers is your favorite?" "I love your costume. What are your favorite sci-fi movies?" Ask them about their pets. Animals are often common ground with people you have nothing else in common with. If you like animals in general, it's easy to relate to other animal lovers whether they prefer dogs, horses, birds, cats or wildlife. While talking about your own pet might be annoying to some people, asking them about their pets is a great way to get people to open up and start having fun. Brush up on current events. Chances are they'll know about it too and if they don't then that's a good thing to talk about! Read or watch the news and when you're ready to start a conversation with someone, say something like, "Hey, did you hear about that helicopter crash? That was pretty crazy." Draw on previous discussions. If you know the person, review a mental list of topics you’ve discussed previously and continue on one of them. For example, their kid’s milestone, one of their projects, or some bad news that they shared with you. This not only gives you something to talk about, but it also shows that you pay attention when you talk to them and you care about their problems and experiences enough to think about and remember them. Ask questions that are easy to answer. 
Some questions are a little harder to answer than others. Has someone ever asked you your weekend plans and you thought, "I don't want to think about my weekend plans... do I really have to answer that?" Most people prefer easy questions, like "what are you up to today," or "is school killing you these days?" This should make conversations flow better and feel more comfortable. Be sensitive to their feelings. Keep your questions non-invasive. Be sure you're not asking them questions about topics they'd rather not discuss. For example, some people might be very uncomfortable discussing issues that they feel touch on them personally, such as weight, lack of having a degree or qualifications, lack of having a steady date, etc. Try to be as thoughtful as possible even though you don't really know them yet. Let go of your fears. When you suddenly feel that you're not able to engage in conversation with another person, it's likely that you're telling yourself a few negative things, such as worrying that you're boring, not good enough, too unimportant, intruding, wasting their time, etc. This can leave you feeling tongue-tied. Feeling self-conscious when carrying on conversation with others is not unusual but it's also not productive. Relax. Chances are that whatever small-talk you're making isn't going to stick out in anyone's mind a few months from now. Just say whatever comes into your head, so long as it's not offensive or really weird (unless, of course, the person you're attempting to converse with is into weird stuff). Try to keep in mind that everyone has these self-doubts from time to time but that it's essential to overcome them in order to engage with fellow human beings. Reassure yourself that the other person is not judging you. Even if they are, it's unlikely to have any real impact on your life, so just relax. Introduce yourself if necessary. 
If you don’t know the person, breaking the ice is very simple: look approachable, tell the new person your name, offer your hand to shake, and smile. This is not only polite but it also is a good way to start a conversation. Sometimes introductions might be saved until after a conversation is started, however. Keep the conversation going with small talk. This keeps the conversation light and simple, which is especially useful for people who are still getting to know one another better. Use small talk to establish rapport and similarities rather than set each other up for an opinionated argument. Small talk encompasses such topics as your blog or website, the purchase of a new car, house renovations, your kids' artwork prize, vacation plans, your newly planted garden, a good book you've just read, etc. Small talk is not politics, religion, nuclear disarmament or fusion, or criticizing anybody, especially not the host or the event you're both attending. Although talking about the weather is a cliché, if there's something unusual about the weather, you've got a great topic of conversation. Synchronize with your conversation partner. Once your partner-in-conversation has started talking, follow his or her cue to keep the conversation going smoothly. Use active listening to reflect what they're saying and to summarize their possible feelings. Answer questions when they ask, ask them questions about what they're talking about, change topics when there is a pause in the conversation, and make sure they get the chance to talk at least as much (if not more than) you. Say the other person's name now and then. Not only does it help you to remember them but it's a warming sign of respect and will make them feel more comfortable. It shows a more personal approach and makes the conversation feel more real and intimate. Once every other conversation "turn" and at least once per conversation is a good rule of thumb. Give acknowledgement cues. 
You don't even have to say things a lot of the time; you can nod, say "ah-ha", "wow", "oh" or "hmm", sigh, grunt convivially, and give short encouraging statements such as "Is that so?", "Goodness!", "What did you do/say then?" and "That's amazing!", etc. Keep your body language open and receptive. Nod in agreement, make occasional genuine eye contact without staring, and lean in toward the other person. Place your hand on your heart now and then, and even touch them on the upper arm if you're a touchy-feely person. This makes people feel more at ease and leads to more natural conversations. Keep a reasonable bubble of personal space if the person you're talking to is a stranger or someone that you don't know well. Stay engaged in the conversation. Stay interested in the other person and focused on them. Keep your curiosity piqued rather than withdrawing back into yourself. This is important for keeping conversations comfortable and finding new ways to continue the conversation. It may even lead you to find openers for future conversations with the same person, as you can ask for an update on some aspect of their life that they're talking about now if you pay attention the first time around! Respond naturally to situations. Smile and laugh when the other person makes a funny comment or a joke. Don't force laughter, as this is cringe-inducing; smile and nod instead, or smile, shake your head, and look down. Practice getting conversations started. You may feel a little clumsy at first, but with practice it can become easy to start good conversations. Every time you're in a situation where you're called upon to converse with others, see it as part of your ongoing practice, and note how you're improving each time that you try it. Part Three of Three: Keeping Things Interesting Follow your partner's lead. If he or she appears interested, then continue.
If he or she is looking at a clock or watch, or worse, looking for an escape strategy, then you've been going on for too long. It's important to try to follow their cues in order to make conversations as pleasant as possible and to leave them feeling like they'd want to talk with you again. This can sometimes feel like a hard skill to learn, but just practice. It's really the only way to improve. Use words of a sensory nature. These are words such as "see", "imagine", "feel", "tell", "sense", etc., which encourage the other person to keep painting a descriptive picture as part of their conversation. This can make conversations more engaging and will also leave an impact on your conversation partner. For example: Where do you see yourself in a year's time? What's your sense of the current stock market fluctuations? How do you feel about the new plans for renovating downtown? Maintain the equilibrium. As the person who started the conversation, the responsibility initially rests with you to maintain the momentum. So what happens when the other person starts practicing active listening and open questions back on you? You have several options: Relish it as their cue to let you start talking about yourself. Just don't overdo it; remember to keep engaging them back with open questions and active listening at the end of your own recounting. Deflect it if you'd rather not be the center of conversation attention. Say something like: "Well, I like Harry Potter books, and I especially loved the last one. But you don't want to hear about me all night! What were your favorite moments in the Harry Potter series?" Answer questions with a question. For example, "How did you manage to get away so early?" could be responded to with, "Well, how did you?" Often the other person will be so intent on filling you in on their side of the story that they'll forget they asked you the question first! Don't be afraid of pauses. 
Pauses can be used to change topics, re-energize the conversation, or even to take a short breather. A silence only becomes a problem if you let it hang awkwardly; as long as you move naturally to the next subject or excuse yourself from the conversation, it's fine and you shouldn't stress. Try not to make your partner uncomfortable. Respond respectfully to someone who remains awkward or uncomfortable in your presence. If your conversation partner appears withdrawn and uninterested in sharing information with you, don't persist too much. Try a little more before deciding to move on, but don't ask too many questions if your conversation partner continues to appear unresponsive. Give yourself an out. A great entry into starting a conversation is to mention that you can only talk briefly because you're meeting up with other friends or have a meeting to get to. This relieves your partner of any feeling of being trapped or obligated, and gives you both an easy out if things don't progress well. If the conversation does progress well, you can always delay leaving for as long as you like. Remember not to overdo it, because they might think that you don't want to talk to them and would rather be with your friends. Just use this trick once or twice.
1. First flowering dates are occurring earlier than they did in the past in many locations around the world. It is sometimes assumed, implicitly or explicitly, that the changes in first flowering dates describe the phenological behaviour of entire populations. However, first flowering dates represent one extreme of the flowering distribution and may be susceptible to undesirable confounding effects.
2. We used observations of flowering in Colorado and Massachusetts to test whether changes in population size and sampling frequency affect observations of first flowering dates.
3. We found that the effect of population size on first flowering dates depended on location. Changes in population size were strongly related to the dates on which first flowering was observed in Massachusetts but not in Colorado. The lack of a significant effect in Colorado may reflect the rapid onset of spring after snowmelt and fixed developmental schedules of the plants at this sub-alpine site, or the scale of the plots sampled during the study.
4. We also found that changes in sampling frequency can influence observed changes in first flowering dates and other aspects of the flowering distribution. Similar to the effect of declines in population size, lower sampling frequency caused later observations of first flowering. However, lower sampling frequency, if maintained consistently throughout a study, did not significantly affect estimates of changes in flowering dates over time or in response to climate.
5. Synthesis. Researchers should consider the effects of changes in population size and sampling frequency when interpreting changes in first flowering dates. In some cases, past results may need to be reinterpreted. When possible, researchers should observe the entire flowering distribution or consider tracking peak or mean flowering dates to avoid the confounding effects of population size and sampling frequency.
Many past and recent studies have used first flowering dates to describe changes in the flowering phenology of plant populations (e.g. Sparks & Carey 1995; Bradley et al. 1999; Fitter & Fitter 2002; Inouye et al. 2002; Inouye et al. 2003; Miller-Rushing & Primack 2008). Although researchers would generally prefer to measure changes in entire flowering distributions or mean or peak flowering dates (for reasons discussed below), it is often necessary to rely on first flowering dates because they may be the only data available. It is far easier for an observer to note the date that a species first flowers rather than monitoring the progression of flowering for an entire population, which could last for weeks or even months. However, first flowering dates occur at one extreme of the flowering distribution and those observations may be affected by population size and sampling effort (i.e. number of observers or hours of observation, or frequency of observations). If flowering dates were approximately normally distributed, we would expect to have a greater probability of observing a very early flower in a year when a population size is large or sampling effort is great than in a year with a small population size or a diminished sampling effort (Fig. 1). Changes in the distribution of flowering dates (e.g. from a normal to skewed distribution) or changes in spatial patterns of microclimate could additionally alter the observation of first flowering dates. We would expect that changes in population size or sampling effort would have less of an effect on observations of mean or peak flowering dates, or the date that a certain percent of the plants have flowered. Thus, changes in first flowering dates may reflect changes in population size or sampling effort in addition to or instead of the population's overall phenological response to climate change (Fig. 1). 
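The intuition behind Fig. 1 — that the observed first flowering date is the minimum of many draws from the flowering distribution, and therefore drifts earlier as population size grows even when the underlying distribution is unchanged — can be illustrated with a small simulation. This is a sketch with made-up parameters (a hypothetical peak day and spread), not the authors' analysis:

```python
import numpy as np

def mean_first_flowering(n_plants, peak_day=150.0, sd=7.0,
                         trials=5000, seed=0):
    """Mean observed first-flowering date (day of year) when n_plants
    individuals flower on days drawn from a normal distribution.
    peak_day and sd are hypothetical values chosen for illustration."""
    rng = np.random.default_rng(seed)
    dates = rng.normal(peak_day, sd, size=(trials, n_plants))
    # The recorded "first flowering" is the earliest individual, i.e.
    # the sample minimum, which shifts earlier as n_plants increases.
    return dates.min(axis=1).mean()

for n in (10, 100, 1000):
    print(n, round(mean_first_flowering(n), 1))
```

With these parameters the mean first date moves several days earlier for each tenfold increase in population size, while the peak of the distribution stays fixed at day 150 — exactly the confounding effect described above.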
It is possible that some previous studies have confounded the effects of climate change with changes in sampling effort and population size. For example, if researchers censused plants for first flowering twice a week for the first 20 years of a study, then sampled 7 days a week for the next 20 years, they might see a pattern of earlier flowering that was unrelated to climate change. Similarly, if a population declined over 20 years, the first flowers might appear later over time even if the peak flowering dates were occurring earlier (Fig. 1). The impact of sampling intensity and population size has been considered in studies of bird arrival times (Tryjanowski & Sparks 2001; Knudsen et al. 2007; Miller-Rushing et al. 2008), which often rely on records of first arrivals rather than mean arrivals or other measures of migration time (e.g. Bradley et al. 1999; Butler 2003; Sparks & Tryjanowski 2007), but to our knowledge, these effects have not been previously considered in studies of plant phenology other than in the work of Aldo Leopold (Leopold & Jones 1947). Here we use long-term phenological records from two different locations – Colorado and Massachusetts – to test empirically whether population sizes and sampling frequency affect observations of first flowering dates, as well as overall flowering distributions. We also test the ability of changes in first flowering dates to predict changes in peak flowering dates, addressing the question: Do first flowering dates serve as an adequate proxy for peak flowering dates?

Rocky Mountain Biological Laboratory, Colorado

We examined records of flowering for eight species, a subset of those that were observed approximately every 2 days throughout 33 growing seasons (May–September). The species were: Delphinium nuttallianum Pritz. ex Walp. (low larkspur), Erigeron speciosus (Lindl.) DC. (aspen fleabane), Eriogonum umbellatum Torr. (sulphur-flower buckwheat), Hydrophyllum capitatum Douglas ex Benth.
(ballhead waterleaf), Lathyrus lanszwertii var. leucanthus (Rydb.) Dorn (aspen peavine), Potentilla hippiana Lehm. (woolly cinquefoil), Taraxacum officinale F.H. Wigg. (common dandelion), and Viola praemorsa Douglas ex Lindl. (canary violet). Each species was observed in an average of 5–13 permanent 2 × 2 m plots each year from 1973 to 2007 (number of plots depended on the species: no observations were made in 1978 or 1990). These plots were located in a sub-alpine meadow habitat at the Rocky Mountain Biological Laboratory (RMBL) in Gothic, Colorado at an elevation of 2900 m (GPS coordinates and metadata for plots and census methods are available at <http://www.rmbl.org>). The plots contained plants growing naturally and were never manipulated. The number of flowers open for each species was counted every second day with a few exceptions where the interval may have been 3 or 4 days. We used the peak number of flowers observed on a single day as an estimate of the total population of flowers, distinct from the number of plants and the total number of flowers produced in a season. A few plants with a large number of flowers could potentially have been responsible for much of the peak number of flowers. However, the species we examined were relatively small and typically have a small number of flowers per individual. Most of the species can produce more than one flower per individual and many plants may not flower in any given year, as is true for most long-lived perennials such as these. We examined records of first flowering in Concord, Massachusetts as observed by Alfred Hosmer (Hosmer, A. W. Alfred W. Hosmer Botanical Manuscripts, 1878–1903, William Munroe Special Collections, Concord Free Public Library) from 1888 to 1902 and by Miller-Rushing & Primack (2008) from 2004 to 2006. Hosmer and Miller-Rushing & Primack (2008) observed first flowering dates for species throughout Concord, without any permanent transects. 
Hosmer made observations about 4 days/week, while Miller-Rushing and Primack made observations about 2.5 days/week, with two separate sets of observers in different locations. Hosmer's phenological observations were made on more days per week, but with fewer total observer days per week, than Primack and Miller-Rushing's. For detailed descriptions of both data sets, see Miller-Rushing & Primack (2008). We calculated the change in first flowering dates for three groups of species: those with declining population sizes (n = 11), increasing population sizes (n = 5), and relatively unchanging population sizes (n = 34). We considered a species to be declining in abundance if Hosmer (1888–1902) considered it common and Primack et al. (unpublished data) considered it now rare (i.e. we only found it in Concord at a single location). Species increasing in abundance were rare in the older period and are now common or frequent (i.e. occur at three or more locations in Concord). We selected the species with relatively unchanging population sizes from those used in Miller-Rushing & Primack (2008) that were relatively common in the past and are still common in Concord today. The group of species with relatively unchanging population sizes acted as our control, with minimal effects of changes in population size. It is important to note that due to the different types of data collected, population size for Concord refers to the number of locations where a species occurred in the town – more locations was interpreted as a larger population size – and for Colorado refers to the number of flowers observed in specific plots.

Effects of population sizes

We used multiple linear regression to determine the relationship between first flowering date (response variable) and peak flowering date and the peak number of flowers (explanatory variables) for each species.
We used this method to test whether the peak number of flowers explained any of the variation in first flowering dates beyond that explained by peak flowering date. In addition, we used an F-test to compare the residual sum of squares among three models that varied in their level of restriction: (i) intercepts and slopes held constant across species, (ii) slopes held constant across species, but intercepts allowed to vary, and (iii) slopes and intercepts allowed to vary across species (as with individual regressions for each species). We also ran simple linear regressions with peak flowering date as the response variable and first flowering date as the explanatory variable for each species. For each species, we then used a t-test to determine if the slope of the relationship was significantly different from one. If first flowering date were a one-to-one predictor of peak flowering date, as is sometimes assumed, we would expect the slope to equal one. We used single-factor ANOVA to compare changes in first flowering dates among the three categories of species – declining, increasing and unchanging (control). If changes in population sizes affected first flowering dates, we expected that the flowering dates of the declining species would advance more slowly than the control group, whereas the flowering dates of the species increasing in population size would advance more rapidly than the control group (Fig. 1). Before performing the calculations, we adjusted the flowering dates to days after the vernal equinox to account for directional changes in the timing of the equinox over time (Sagarin 2001).

Effects of sampling frequency

We set out to answer three questions regarding the effects of sampling frequency on the observations of flowering dates: (i) Does sampling frequency alter the apparent distribution of flowering times?
(ii) Does sampling frequency affect estimates of the change in flowering dates over time or the relationship between flowering dates and the timing of snowmelt? (iii) Does sampling frequency affect the ability to detect trends in flowering dates over time? For the empirical portion of this analysis we used only observations made at RMBL in Colorado. The original observations of flowering for eight species (see Rocky Mountain Biological Laboratory, Colorado) were made every second day. We then degraded this data set to create a set where observations were made every sixth day (i.e. days n, n + 6, n + 12, etc.), thus decreasing the sampling frequency. Several phenology studies have previously relied on observations made this often or less frequently (e.g. Bradley et al. 1999; Miller-Rushing et al. 2006). To answer the first question, we compared the flowering distributions of the original, intensely sampled data with the degraded data. Specifically, we used paired t-tests to compare the dates of first, peak and last flowering as well as the peak number of flowers observed on any day. To address the second question, we used panel analysis (Hsiao 2003) with both data sets together. We tested whether the frequency of sampling affected estimates of the change in first flowering date, peak flowering date, last flowering date, flowering duration and peak number of flowers over time. Panel analysis is a form of multiple regression that allowed us to consider all of the plant species in a single model. Panel analysis increased the power of our models to find statistically significant relationships, improved the efficiency of the models’ estimates, and controlled for estimation biases. In each model the response variable was a characteristic of the flowering distribution (first, peak, or last flowering date, duration of flowering or peak number of flowers), and the explanatory variables were year and a year-sampling frequency interaction term. 
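The degrading of the census record described above — keeping every third observation so that a 2-day interval becomes a 6-day one — can be illustrated with a toy flowering curve. The counts below are hypothetical, not the RMBL data; the point is that the observed first and last flowering dates contract toward the peak as sampling becomes sparser:

```python
import numpy as np

# Hypothetical flower counts for one plot-season, censused every 2nd day.
days = np.arange(121, 240, 2)                          # day of year
counts = np.maximum(0, 50 - 0.05 * (days - 180) ** 2)  # unimodal curve

def summary(days, counts):
    """First flowering day, last flowering day, and peak count observed."""
    open_days = days[counts > 0]
    return open_days.min(), open_days.max(), counts.max()

first2, last2, peak2 = summary(days, counts)            # every 2 days
first6, last6, peak6 = summary(days[::3], counts[::3])  # every 6 days
print(first2, last2, "->", first6, last6)  # degraded first is later or equal
```

Here the sparser census records first flowering later and last flowering no later than the dense census, shrinking the apparent flowering distribution while leaving the true curve untouched.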
We performed an identical test to determine whether sampling frequency affected estimates of the response of first flowering dates to changes in date of snowmelt. In this location, snowmelt is the primary environmental cue for flowering phenology (Inouye & McGuire 1991; Inouye et al. 2002; Dunne et al. 2003; Inouye 2008). If the panel model indicated that the regression coefficients varied significantly among species (as determined by an F-test comparing models with varying levels of restriction), we performed regressions for each species individually. For the third question, we used Monte Carlo techniques to estimate the ability of various sampling frequencies to detect changes in flowering dates in the future. We used the following equation to generate one thousand experimental data sets for spring temperature for the next 50 years: T_Y = α + βY + μ, in which T_Y was temperature in year Y of the experimental data set, α was a constant, β represented the linearized annual rate of warming and μ was an error term. We set β = 0.028 °C/year, a mid-range estimate of warming made by the Intergovernmental Panel on Climate Change (IPCC 2007). This warming scenario is in line with the downscaled predictions for annual and seasonal warming in the New England region over the next 100 years (New England Regional Assessment Group 2001; Hayhoe et al. 2007). The error term was drawn randomly from a normal distribution with a mean of zero and a SD of 1.2, which was the SD of January, April and May temperatures in Boston during 1831–2004. Temperatures in these months are significantly associated with flowering dates in many Massachusetts plants, whereas temperatures in other months are correlated with the flowering dates of relatively few species (Miller-Rushing & Primack 2008). This procedure assumed that mean January, April and May temperature variability will remain constant in the future.
We used Massachusetts and not Colorado as a setting for this simulation because we have much longer temperature records for Massachusetts than we have snowmelt records for Colorado, which allowed us to estimate more precisely long-term inter-annual climate variation. We then used these temperature simulations to test whether we could detect a change in flowering dates over time. For each experimental data set, we calculated flowering date as: FD_Y = α + βT_Y + μ, where FD_Y was the flowering date in year Y, α was a constant, and μ was an error term. We considered nine scenarios (three temperature effects on flowering × three sampling frequencies). We set β, the linearized effect of temperature (T) on flowering date, to one of three magnitudes: 6 days earlier/°C warming, 3 days earlier/°C, or 1 day earlier/°C. These values are all realistic flowering responses to temperature in Concord, Massachusetts (Miller-Rushing & Primack 2008). The error term contained information about the sampling frequency. We tested three sampling frequencies: every 2 days, every 7 days and every 14 days. Finally, we used ordinary least squares regression to test in each year whether we could detect a significant change (P < 0.05) in flowering dates for each experimental data set. Because one anomalously warm year might create a significant trend that would disappear the next year, we recorded the point at which the trend had been statistically significant for five consecutive years. We tested for changes in flowering dates for a period of 50 years.

Effects of changing population sizes

An F-test indicated that the relationships between first flowering date and peak number of flowers varied significantly among species (F-test P < 0.001, H0: slope of relationship did not differ among species). Thus, we evaluated the relationship with individual regressions for each species.
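The Monte Carlo power analysis described in the Methods can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the constants (0.028 °C/year warming, temperature SD of 1.2 °C) come from the text, but the baseline temperature and flowering date, the way the census interval enters as observation error, and the use of a single regression test (rather than the five-consecutive-significant-years criterion) are simplifying assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detection_probability(days_per_degree, census_interval,
                          years=10, n_sets=500):
    """Fraction of simulated data sets in which OLS finds a significant
    (P < 0.05) relationship between flowering date and temperature."""
    warming, temp_sd = 0.028, 1.2          # values from the text
    hits = 0
    for _ in range(n_sets):
        yr = np.arange(years)
        temp = 10.0 + warming * yr + rng.normal(0, temp_sd, years)
        true_date = 150.0 - days_per_degree * temp
        # Censusing every census_interval days records first flowering
        # at the next census on or after the true date (assumed scheme).
        observed = np.ceil(true_date / census_interval) * census_interval
        p = stats.linregress(temp, observed).pvalue
        hits += p < 0.05
    return hits / n_sets

print(detection_probability(6, 2), detection_probability(1, 14))
```

Under these assumptions a strong response (6 days/°C) is detected almost always even with short records, while a weak response (1 day/°C) observed on a 14-day census is detected only rarely — the same qualitative pattern as Fig. 4.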
Of the eight species we examined at RMBL, only one had a significant relationship between first flowering date and peak number of flowers, as determined by multiple regression with peak flowering date and peak number of flowers as explanatory variables. The first flowers of L. lanszwertii var. leucanthus opened 0.56 ± 0.23 (± SE) days earlier for each additional 10 flowers (P = 0.023; peak flowering date: slope = 0.697 days/day, P < 0.001; adjusted R2 = 0.73). Similarly, an F-test indicated that the relationship between flowering duration and peak number of flowers differed among species (F-test P < 0.010). After evaluating each species individually, only E. umbellatum had a significant relationship between duration of flowering and the peak number of flowers (3.5 ± 0.95 days longer duration/10 flowers, P = 0.001). By chance we would have expected 5% of species to show a significant relationship in both instances, suggesting that the significant relationships may have occurred by chance (although the very low P-value for the relationship between flowering duration and peak number of flowers for E. umbellatum suggests that this relationship is real). We next tested the ability of first flowering date to predict peak flowering date. An F-test indicated that the slope of the relationship did not vary significantly among species (P = 0.274). Peak flowering dates occurred 0.85 ± 0.04 days earlier for each day earlier that the first flower was observed (P < 0.001, adjusted R2 = 0.68; Fig. 2). Intriguingly, the slope of the relationship was significantly less than one (t = 4.09, P < 0.001) and first flowering dates explained 68% of the variation in peak flowering dates, as determined by the adjusted R2. Directional changes in population size had a significant relationship with changes in first flowering times in Concord, Massachusetts, as determined by ANOVA (P = 0.010).
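The test of whether the first-to-peak slope differs from one (the t = 4.09 result above) is a standard t-test of a regression coefficient against a null value other than zero: subtract the null slope and divide by the slope's standard error. A minimal generic sketch, run on simulated first and peak flowering dates (with an assumed true slope of 0.85) rather than the RMBL observations:

```python
import numpy as np
from scipy import stats

def slope_vs_one(x, y):
    """Two-sided t-test of H0: OLS slope = 1."""
    res = stats.linregress(x, y)
    t = (res.slope - 1.0) / res.stderr   # stderr is the SE of the slope
    p = 2 * stats.t.sf(abs(t), len(x) - 2)
    return res.slope, t, p

# Simulated first (x) and peak (y) flowering dates, true slope 0.85.
rng = np.random.default_rng(2)
first = np.linspace(130, 170, 40)
peak = 30 + 0.85 * first + rng.normal(0, 1.0, 40)
slope, t, p = slope_vs_one(first, peak)
print(round(slope, 2), p < 0.05)
```

A slope significantly below one means early first flowers overstate the advance of the whole distribution, which is why the paper cautions against treating first flowering as a one-to-one proxy for peak flowering.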
Species with declining population sizes flowered 7.2 ± 6.1 days later in the period 2004–06 than they did in the period 1888–1902. In comparison, species with increasing population sizes flowered 12.0 ± 4.3 days earlier over the same time period, while species with relatively unchanging population sizes flowered 5.1 ± 1.5 days earlier.

Effects of changing sampling frequency

As expected, a relatively low frequency of observations (making observations every 6 days) in Colorado significantly delayed the observation of first flowering dates (average delay = 2.5 ± 0.2 days, t = −16.62, two-tailed P < 0.001), advanced the observation of last flowering dates (average advance = 3.5 ± 0.5 days, t = 7.56, two-tailed P < 0.001), and shortened the observed duration of flowering (average shortening = 6.0 ± 0.5 days, t = 11.79, two-tailed P < 0.001) relative to making frequent observations (every 2 days), as determined by paired t-tests. Maintaining a low frequency of observations also significantly decreased the peak number of flowers observed (average decrease = 43.8 ± 4.6 flowers, t = −9.57, two-tailed P < 0.001), but did not affect the observed date of peak flowering (average advance = 0.4 ± 0.3 days, t = 1.43, two-tailed P = 0.153). Overall, making observations with low frequency (every 6 days rather than every 2 days) caused the distribution of flowering dates to shrink. Sampling less frequently in Colorado, however, did not significantly affect estimates of changes in first, peak, or last flowering dates, duration, or the peak number of flowers over time, as determined by the interaction between sampling frequency and year in random effects panel models (P > 0.82 for each interaction term). Neither did sampling frequency significantly affect estimates of the relationships of flowering dates or duration with the timing of snowmelt (P > 0.65 for each interaction term in random effects panel models).
The relationship between the peak number of flowers and the date of snowmelt varied among species (F-test P < 0.001), but individual regression models for each species indicated that sampling frequency did not affect the estimates of the relationship for any of the species (P > 0.13 for each species’ interaction term) (Fig. 3). Finally, sampling less frequently in Colorado substantially reduced the ability to detect a change in flowering dates for a species that flowered just one day earlier for each 1 °C warming, but did not have much effect on the ability to detect a change in flowering date for a species that flowered 6 days earlier for each 1 °C warming (Fig. 4). After 10 years of observation, there was a 96–99% chance of detecting a 6 day/°C advance in flowering date and a 78–99% chance of detecting a 3 day/°C advance, depending on sampling frequency. When the actual change in flowering date was just 1 day/°C, however, an observer would have a 97% chance of detecting a significant trend over 10 years if sampling every 2 days, a 54% chance if sampling every 7 days, and an 18% chance if sampling every 14 days (Fig. 4). We found that population sizes and sampling frequency may substantially affect observations of first flowering dates and estimates of changes in first flowering dates. Surprisingly, the presence of an effect depended on the location and method of the study. In Concord, Massachusetts, changes in population size appeared to alter observed changes in first flowering dates. While the first flowering dates for the control group are occurring about 4 days earlier than they did 100 years ago, the first flowers for species with increasing population sizes are opening 12 days earlier. The first flowering dates for species with declining population sizes are occurring seven days later than they did 100 years ago. 
It is also possible, however, that the populations of species that are not responding phenologically to climate change are declining, possibly due to mistimed ecological relationships (Willis et al. unpublished data). The relationship may exist in both directions. Because in Massachusetts a species with an increasing population size occurs in a larger number of locations over time, the species also covers more environmental variation, including variation in temperature caused by shading, aspect, soils and other microsite features. Individuals growing at warmer sites will generally flower earlier than those at cooler sites. Our finding that species with declining population sizes have flowered later particularly suggests that population size is causing the later first flowering dates, because warming temperatures would generally be expected to lead to earlier or unchanging plant phenology in the Massachusetts climate, where winters are generally cold enough to meet chilling requirements (Schwartz 1998; Chuine 2000; Zhang et al. 2007). Additionally, many plants and animals in eastern Massachusetts are active earlier in the spring now than they have been in the past (Ledneva et al. 2004; Miller-Rushing et al. 2006; Miller-Rushing et al. 2008; Miller-Rushing & Primack 2008), so it is reasonable to expect that timing-based mismatches may be occurring for those species not active earlier in the spring (Stenseth & Mysterud 2002; Visser & Both 2005). At RMBL in Colorado, however, changes in population size did not substantially affect first flowering dates. At that location, changes in first flowering dates provided fairly good estimates of changes in peak flowering dates (Fig. 2).
It is important to note, though, that first flowering dates did not provide a one-to-one prediction of peak flowering dates; first flowering dates explained 68% of the variation in peak flowering dates and peak flowering dates occurred just 0.85 ± 0.04 days earlier for each day earlier first flowering. We suspect that the lack of an effect of population size on first flowering dates at RMBL may have reflected the relatively small area of fixed space that was sampled (i.e. the same 2 × 2-m plots each year) and the rapid onset of the growing season after snowmelt in this sub-alpine environment (Inouye & McGuire 1991; Inouye et al. 2002; Dunne et al. 2003). When population sizes increased in the plots, they did not cover an appreciably greater range of microclimates, as occurred in Massachusetts. In addition, a skewed flowering distribution (e.g. many early-flowering individuals with a long tail of late-flowering individuals) (Thomson 1980) could have minimized the effect of population size on the distribution of flowering times. However, the flowering distributions of the species we observed in Colorado were not generally skewed (data not shown). Making observations every 6 days, instead of every 2 days, caused the observed distribution of flowering times to shrink. First flowering was recorded later, last flowering occurred earlier, and the peak number of flowers observed declined, while the date of peak flowering did not change significantly. Importantly, sampling frequency did not significantly alter estimates of changes in flowering dates, duration of flowering, or peak number of flowers observed over time, nor did it affect estimates of the relationship between those variables and the timing of snowmelt. For example, a low sampling frequency resulted in later observations of first flowering, but did not affect estimates of how first flowering dates changed over time or how they responded to the date of snowmelt (Fig. 3). 
However, as expected, a low sampling frequency could substantially reduce the chances of detecting a significant change in flowering dates over time by increasing the variability in the date that flowering is observed (Fig. 4). The strength of this effect is most pronounced for species with flowering dates that are not changing very rapidly. These findings have important implications for researchers examining phenological change in plant populations. First, if the only data available are first flowering dates, researchers should account for changes in population size and sampling effort. In many cases population sizes or sampling effort might change directionally over time, and these changes can significantly alter changes in first flowering dates (Fig. 1), although they do not always do so. Increases in population size or sampling effort can lead to earlier first flowering dates, while declines in population size or sampling effort can delay first flowering dates. It is possible that monitoring relatively small, fixed plots or marked individuals may minimize the effect of changes in population sizes, as we observed in Colorado; however, further research is needed to confirm that this finding is not simply due to the rapid onset of flowering in this area. Conceptually, population size or sampling effort could affect the observation of first flowering even when measuring the phenology of individual marked plants over time if the number of flowers produced or sampling effort varies significantly among years (Primack 1985). Second, studies that differ only in sampling frequency will find different first flowering dates on average, but should find the same change in first flowering dates over time and the same flowering responses to snowmelt or temperature.
For example, consider a case in which two researchers studied first flowering dates for the same species in the same location for 20 years but used different sampling frequencies – one sampled every 2 days, the other every 6 days. Our results show that sampling frequency alone would not cause the two studies to differ in their estimates of change in first flowering dates. Without other confounding factors, the trends in flowering dates would be plotted as parallel lines (Fig. 3). Other factors, such as changes in population size, nonlinear changes in climate, or nonlinear flowering responses to climate, might still confound comparisons between the two studies if they were carried out in different locations or over different time periods. Third, sampling frequency can substantially affect the ability of a study to detect changes in flowering dates. This point may seem obvious, but it suggests that studies that fail to detect changes in flowering dates over short time periods or after using relatively infrequent sampling may simply lack the power to detect changes that are actually occurring. It requires fairly frequent sampling to detect changes in flowering dates given that the phenologies of most plants studied to date are changing relatively slowly (Parmesan 2007) and that there is high inter-annual variability in weather that cues the flowering dates for many plant species (Cleland et al. 2007). For species with very short flowering durations, frequency of sampling may be particularly important. This result also shows that future studies of phenological change should carefully consider sampling frequency as a part of their study design. Fourth, results could be difficult to interpret when two or more factors are affecting first flowering times. For example, if flowering dates are becoming earlier because of warming temperatures, but declining population sizes are causing first flowering to occur later, the two shifts could cancel each other. 
No overall change in first flowering would be observed. Or if flowering phenology and abundance did not change, but sampling intensity increased during the study, then researchers might erroneously conclude that climate change was affecting phenology. In summary, population size and sampling frequency can affect observations of changes in first flowering dates. The effects are not always intuitive, nor are they always present. To avoid the confounding effects of population size and sampling effort, researchers should record the entire flowering distribution whenever possible, or consider observing mean or peak flowering dates to control for undesired confounding effects. Observing mean or peak flowering dates requires observing the entire flowering season, which involves greater effort than observing just first flowering, but it results in data less susceptible to the influences of confounding factors. If first flowering dates are the only data available, researchers must consider the effects of population size and sampling effort when interpreting their results.

The authors thank Kjell Bolmgren, Jessica Forrest and Mark D. Schwartz for providing valuable comments on this manuscript. Funding and research assistance for this project were provided by the National Science Foundation (dissertation improvement grant, grants DEB 75-15422, DEB 78-07784, BSR 81-08387, DEB 94-08382, IBN-98-14509, DEB-0238331, and DEB-0413458), Sigma Xi, an NDEA Title IV predoctoral fellowship, research grants from the University of Maryland's General Research Board and Boston University, and assistance from Earthwatch and its Research Corps [DWI]. RMBL provided research facilities and access to study sites. Snowpack data were provided by billy barr.
Biblical Motifs in Chingiz Aitmatov's The Place of the Skull
Canadian Slavonic Papers, Mar-Jun 1998, by Nina Kolesnikoff
The 1986 publication of Chingiz Aitmatov's Plakha (The Place of the Skull, in Natasha Ward's translation) created a heated discussion around the controversial topics of drug smuggling and ecology, the author's choice of a pair of wolves and a former seminarian as the main protagonists, and the complex juxtaposition of three separate story lines in one novel.1 One of the most controversial questions proved to be Aitmatov's introduction of Biblical materials into the otherwise contemporary plot, and its significance for the novel's philosophical and ethical interpretation. In assessing the role of Biblical references in The Place of the Skull, the critics reproached Aitmatov for his arbitrary treatment of the Bible. Vadim Kozhinov admonished the author for his distortion of the Biblical story of Pilate's interrogation of Jesus, his incorrect portrayal of Pilate as the chief culprit, and his depiction of Jesus as a liberal humanist strongly opposed to the Roman Empire.2 Similarly, Igor Zolotusskii faulted Aitmatov for his oversimplification of the Biblical narrative of Pilate and Jesus, which Aitmatov presented as a polemic on the question of power and authority versus individual free will.
He was also criticized for his depiction of Christ as a social revolutionary, who questioned the authority of the state, while defending the interests of the poor and the oppressed.3 The sharpest censure of Aitmatov's treatment of the Bible came from Sergo Lominadze, who accused the writer of misrepresentation of Biblical ideas, and the debasement of Christian teaching, particularly with regard to his rejection of the idea of the Last Judgment.4 The second major criticism of Aitmatov's treatment of the Biblical material was directed towards the striking similarity between The Place of the Skull and Mikhail Bulgakov's Master i Margarita. Lev Anninskii considered the Biblical episode in The Place of the Skull a paraphrase of the same story in The Master and Margarita,5 while Natalia Ivanova criticized Aitmatov for "placing the figures" in exactly the same positions as did Bulgakov fifty years earlier.6 And Sergei Averintsev reproached Aitmatov for following too closely Bulgakov's approach to Jesus as a historical figure, a tradition which, according to the critic, has been exhausted in Russian literature by The Master and Margarita.7 The criticism of Aitmatov's dependence on Bulgakov and its purported distortion of the Biblical story of Jesus and Pilate was carried out in a polemical fashion, without substantial analysis of the questions concerned.8 The purpose of this article is to examine these questions closely by juxtaposing Aitmatov's Biblical story with the New Testament and Bulgakov in order to identify both similarities and differences. More importantly, the article will also examine the links between the Biblical story and the rest of the narrative, and assess the role Biblical references play in a novel dealing with contemporary issues.9 The Biblical motifs are introduced into The Place of the Skull both in overt and covert forms.
The most obvious instance is the embedded story of Pilate's interrogation of Jesus, inserted into the subplot that deals with Avd Kalistratov's attempts to remake the world in accordance with his Christian ideals. The embedded story appears as Avd's delirium, following his confrontation with drug smugglers who throw him from a moving train. Lying semi-conscious by the railroad tracks, Avd hallucinates about Pilate's interrogation of Jesus. The link between the Biblical story of Jesus and the contemporary story of Avd is not coincidental, since Avd is portrayed in the novel as a modern Christ-like figure, preaching Christian ideas of forgiveness and love, and the rejection of evil. Reconstructed in the mind of the delirious protagonist, the story of Pilate and Jesus retains most of the elements of the New Testament version.10 For example, it preserves the same participants: Pilate and Jesus appear as the main protagonists; Caiaphas, the Jewish elders, Judas, and Pilate's wife are secondary characters. The embedded story also focuses on the same issues, above all the question of Christ's teachings in light of Roman law, and it offers the same resolution: Pilate's confirmation of the Sanhedrin verdict of crucifixion. Of the four Gospels depicting the episode of interrogation, Aitmatov follows most closely the Gospel according to John, which, rather than insisting on His silence, elaborates in more detail Jesus' responses to Pilate. Following the Gospel according to Matthew, Aitmatov introduces the motif of intervention by Pilate's wife, who, in the form of a note, pleads with her husband to release Jesus. The most obvious difference between the New Testament story and Aitmatov's rendition is the generic change from an interrogation to a philosophical dispute.
Whereas in the Gospels Jesus either refused to answer Pilate's questions or answered them briefly, in The Place of the Skull Jesus eagerly participates in the discussion and argues with Pilate about questions of truth, God, and the Last Judgment. In many ways, Aitmatov's story resembles a Socratic dialogue in which the interlocutors pronounce their opposing views and elicit each other's responses. Both Jesus and Pilate appear as ideologists, seeking and testing truth in a dialogic confrontation. The Socratic dialogue is supplemented in The Place of the Skull with a psychological portrayal of both participants, especially of Pilate. Pilate is presented in the novel as a vain and self-centered man, convinced of his importance and omnipotence. He takes pleasure in interrogating Jesus, experiencing both curiosity and hatred toward Him. At the same time, Pilate tries to put himself in Jesus' place in order to understand His views and motivation. He wrongly perceives Jesus as a false prophet and usurper, who wants to gain power over the people. In accordance with the New Testament tradition, Jesus is portrayed in The Place of the Skull as completely dedicated to His mission, refusing to renounce His views. Faced with His imminent death, He continues to believe in man's ultimate goodness and the process of self-improvement and perfection. He counteracts Pilate's philosophy of strength with the idea of good, based on the rejection of vice, violence, and bloodshed, and the acceptance of love for God and men. While conveying the convictions and strength of Jesus the prophet, Aitmatov depicts some of the human qualities of his protagonist. Faced with the prospect of death, Jesus experiences anxiety and fear, which are manifested in His paleness, His profuse sweating and the lump of terror in His throat. He admits to Pilate that He is afraid and at one point asks to be released.
Aitmatov's portrayal of Jesus' human nature is most evident in the story of the crocodile, invented by the writer and added to the Biblical episode.11 The story of Jesus' childhood encounter with a crocodile and His narrow escape serves as an example of Jesus' human reaction: His hope for another miracle and His concern for His mother. Like an ordinary human being, Jesus thinks of His mother at the moment of agony, and asks her forgiveness for the pain He will cause her with His death. The second example of an overt Biblical reference appears shortly after the embedded story of Jesus and Pilate, in the depiction of Avd's search for Jesus on the eve of the Passover. Like the embedded story of Jesus and Pilate, Avd's search for Jesus is motivated by the protagonist's delirium: Avd hallucinates about coming to Jerusalem and desperately searching for Jesus in order to forewarn Him about the forthcoming betrayal. By shifting his protagonist to the year 33 AD, Aitmatov illustrates the theory of historical synchronism, based on man's mental ability to be simultaneously in different temporal dimensions separated by centuries or even millennia. Knowing the outcome in advance, Avd tries in vain to change the course of events. In his delirium about Jerusalem on the eve of the Passover, Avd refers to Jesus as "Master," indicating his desire to be regarded as Christ's disciple. Indeed, throughout the novel Avd appears as Christ's follower, preaching the ideas of goodness and self-improvement.
In the novel Avd bears the name of King Ahab's governor who had saved a hundred prophets from execution and arranged a meeting between the King and the prophet Elijah to convince the people of Israel to renounce the pagan gods of Baal and to return to the true God.12 Following in the footsteps of his Biblical prototype, Avd tries to bring his contemporaries back to Christianity through writing on ethical topics for a Komsomol newspaper, and through preaching Christian ideas to drug runners and the "junta." Following Christ's example, Avd preaches about good and evil, guilt and repentance, revenge and sacrifice. Like Christ, he is determined to propagate his ethical ideas, even at the cost of his life. The analogy between Avd and Christ is reinforced in episodes describing his conflict with drug runners and the "junta." In both instances, Avd is confronted by those who, like Pilate, believe in the power of strength and the philosophy of living for today. The confrontation between Avd and Grishan, the leader of the drug runners, follows the same pattern as the interrogation. In the form of a Socratic dialogue, Avd and Grishan present their arguments and test each other's convictions. The elements of a verbal polemic are more subdued in Avd's confrontation with Ober-Kandalov. Here Avd's arguments against the inhuman slaughter of antelope are rendered indirectly in the form of a narrative summary while Ober-Kandalov's views are expressed in condensed form. Arguing for the idea of state power, Ober-Kandalov repeats the arguments of his Biblical predecessor. As in the Bible, the verbal confrontation between Avd and his opponents is followed by a physical reprisal. Insensitive to Avd's plea to reject drugs and repent, the drug runners beat him violently and throw him from a train.
The members of the "junta" stage a mock trial that is reminiscent of the New Testament's trial of Jesus conducted by the Jewish elders as guards mocked and beat Him.13 In similar fashion, the "junta" members torture Avd, scoff at his Christian ideas and decide to punish him by hanging him from a tree. The image of Avd tied to the tree and left there to die recalls the image of the crucified Jesus, dying on the cross. Like Christ, Avd pays with his life for his attempts to convince people of the need to strive for self-improvement and perfection. Besides the Biblical story of Pilate's interrogation of Jesus and some references to events preceding it, The Place of the Skull introduces another Biblical motif, that of Judas' betrayal of Jesus.14 There are several references to Judas in the embedded story of Pilate and Jesus. The name of Judas Iscariot first appears in Jesus' remarks concerning the incorrect interpretation of the idea of the Last Judgment. Jesus also makes a second, extended reference to Judas in His description of Judas' act of betrayal: I did not sleep, but was wakeful in prayer and, summoning up my courage, was intending to tell my disciples of this vision vouchsafed me by the Father, when suddenly a great crowd appeared in Gethsemane, with Judas among them. Judas swiftly embraced me, kissing me with his cold lips. "Hail, Master!" he cried to me, but before that, he had said to those he came with, "He whom I shall kiss, He is the one. Take Him" (p. 148). Two more references to Judas appear in Avd's delirium about Jerusalem. Aware of Jesus' Last Supper with His apostles, Avd attempts to forewarn Him about Judas' betrayal. Having failed in his search for Jesus in Jerusalem, Avd rushes to Gethsemane, but does not find Him there either, since "Judas had already done his work and they had seized Him and led Him away" (p. 158). The New Testament story of Judas' betrayal of Jesus does not have a straightforward analogy in the contemporary narrative.
Nevertheless, it serves as a prototype for another embedded story, "Six and the Seventh One," introduced into the narrative as a purported Georgian ballad recalled by Avd during the concert of a Bulgarian choir.15 The story "Six and the Seventh One" depicts events from the Civil War in the Caucasus. It shows the Cheka officer Sandro infiltrating a group of counter-revolutionaries led by Guram Dzhokhadze. After a failed ambush to capture Guram and his men alive, Sandro decides to kill them. He carries out his plan during a farewell gathering on the eve of the group's self-imposed exile. Faithful to his task, Sandro kills Guram and his followers, but in the end he takes his own life. The ballad "Six and the Seventh One" reproduces several components of the Biblical story of Judas, including the figure of the traitor, the act of betrayal, the monetary reward and the traitor's suicide. As in the Biblical story, Sandro appears as a follower of the man he must betray, although in the ballad he simply assumes the role of a follower in order to carry out his plan. Like Judas, Sandro is offered a large monetary reward and a promotion. Unlike Judas, Sandro is forced to kill Guram and his men himself. Having done so, he takes his own life. In addition to the Biblical story of Judas, the ballad introduces the Biblical motif of the Last Supper.16 In this case Guram and his followers stage a farewell party on the eve of their departure from Georgia. But unlike the apostles, the six Georgian fighters know that this is their last supper together, that they will never again see each other or their native country. As in the Biblical Last Supper, Guram and his men share wine and bread, although not in a religious sense, but as symbols of a unique Georgian tradition. In accordance with that tradition the six men also sing and dance, and Sandro joins them, thus symbolically becoming one of them. By becoming a blood brother of the six men, Sandro has no other choice but to kill himself.
The ending of the ballad, portraying the protagonist's decision to die, seems unjustified in the context of Soviet ideology, but it is in full agreement with the novel's philosophical and ethical connotations. Having trespassed the boundaries of human ethics, Sandro has no other choice but to end his life.17 The third Biblical motif to appear in The Place of the Skull is that of the Apocalypse and the end of the world.18 In his final remarks to Pilate, Jesus describes the vision He had the night before: I was labouring under a strange premonition of total abandonment on earth; I wandered in Gethsemane that night like a shade myself, could find no peace, feeling as though I were the only sentient being left in the whole universe, flying over the earth and never seeing another living soul. Everything was dead; everything was covered with the black ash of some long-since-raging fire; the earth lay in ruins, no forests, no fields, no ships on the sea. Only a strange ringing sound filled the air, like a sad groaning in the wind, like a sobbing of metal deep within the earth, like a funeral bell (pp. 147-48). Jesus acknowledges that his vision captures the fatal outcome of what all generations have been waiting for: the Apocalypse, the end of history for thinking beings. Significantly, Jesus' Apocalyptic vision reads like the destruction of the world by a nuclear explosion. In this terrifying vision, the end of the world comes not as a result of God's vengeance or some natural calamity, but because of the enmity of man. The motif of the Apocalypse reappears in The Place of the Skull during a scene depicting a helicopter hunt in the Moiunkum steppe: Suddenly, thunder from the sky; the helicopters were back. This time they flew fast and threateningly low over the terrified saigak, as they galloped in flight from the monstrous attack.
So fast and unexpected was the approach that hundreds of shaken antelope, their leaders and sense of direction forgotten, flew in disordered panic. The harmless creatures were no match for flying machines. The helicopters were working according to plan: pinning down the fleeing herd and rounding on it in a pincer movement, they drove it towards another, equally large, that had been grazing nearby. More and more of the herds were drawn into the stampede, the cloven-hoofed creatures losing their heads completely in the panic of a catastrophe the like of which the savannah had not seen before. Not only for the cloven-hooves; the wolves, too, their inseparable companions and hereditary enemies, found themselves in exactly the same trap (pp. 24-25). The helicopter hunt reads like an apocalyptic vision of the end of the world in which everything perishes, closely echoing the vision of nuclear destruction offered by Jesus in His dialogue with Pilate. The unscrupulous use of advanced technology to annihilate animals is the first step toward the total destruction of the world. What is the function of the Biblical references in the narrative of The Place of the Skull? The embedded story of Jesus functions as the philosophical centre, evoking the idea of self-improvement and active involvement in the struggle against the forces of evil. In the Avd subplot, the struggle takes the form of a philosophical conflict between Christian love and compassion as opposed to the selfish pursuit of pleasure, manifested by the narcotics runners, or the aggressive cruelty to nature demonstrated by the "junta." In the Boston subplot, the conflict is presented in social terms as the struggle of honest, hard-working people like Boston against corrupt and immoral individuals like Bazarbai. The story of Jesus allowed Aitmatov not only to widen the temporal framework of the contemporary plot, but also to introduce some new semantic connotations.
Thanks to the Biblical story, the novel focuses on the eternal struggle of good and evil, on the ethical choices facing each individual, and on the need for love and compassion. In rendering the Biblical story of Jesus and Pilate, Aitmatov made it relevant to the problems of the late twentieth century, i.e., the destructive use of technology and the cult of military power. The apocalyptic picture of the helicopter hunt in the Moiunkum steppe echoes the terrifying vision of the end of the world, experienced by Jesus on the eve of His crucifixion. With that vision Aitmatov warns the twentieth-century reader of the consequences of nuclear technology and the philosophy of military confrontation. And he shows the danger of replacing the religion of Divine Power with the religion of military power. The last issue to be examined in this article is the question of Aitmatov's indebtedness to Mikhail Bulgakov, whose novel The Master and Margarita has been called "the final chapter of the historical treatment of Jesus in Russian literature."19 Aitmatov's indebtedness to Bulgakov is undeniable both in the selection and the treatment of the Biblical material. Following Bulgakov, Aitmatov selects the same New Testament episode of Pilate's interrogation of Jesus, and presents it as a philosophical argument on the questions of spiritual and moral power versus the power of the state. Like Bulgakov, Aitmatov concentrates on the depiction of two major characters, Jesus and Pilate, and makes only insignificant references to other participants, such as Caiaphas, the Jewish elders, or Pilate's wife. Following Bulgakov, Aitmatov demystifies the image of Christ, portraying Him as a human being, fully immersed in ordinary life, and governed by human emotions. But unlike Bulgakov, Aitmatov depicts Jesus as a skillful preacher, spreading his ideas even on the eve of His crucifixion.
In contrast to Bulgakov, Aitmatov places the main ideological burden on Christ, and casts Pilate in the subordinate role of a supporting actor. Remaining to a large degree an underdeveloped character, Pilate nevertheless appears as a personification of authority and state power, determined all along to confirm the verdict of the Jewish elders. Departing from Bulgakov's depiction of the interrogation as the first step to crucifixion and of Pilate's subsequent attempts to appease his conscience by ordering the killing of Judas, Aitmatov focuses exclusively on the episode of interrogation. The interrogation emerges in The Place of the Skull as the central event that throws a special light on all other events depicted in the novel. While focusing primarily on the story of Jesus and Pilate, Aitmatov does not ignore other Biblical elements, associated with Jesus' last day and His crucifixion. He simply moves them into the contemporary story of Avd. Thus, the episodes of the Last Supper and of Judas' betrayal appear in Avd's recollections of the ballad "Six and the Seventh One," while the motifs of Jesus' trial and crucifixion are portrayed in the junta's administration of justice. The second difference between The Place of the Skull and The Master and Margarita stems from the different historical contexts into which the Biblical story of Jesus and Pilate has been incorporated. In The Master and Margarita, the Pilate subplot appears in a narrative dealing with the events of the 1930s. The philosophical argument between Pilate and Jesus on the question of individual responsibility versus state authority acquires a special significance when placed in the context of Soviet political reality.
As noted correctly by Lesley Milne, the historically real plane of Moscow stands in a typological relationship to the Jerusalem narrative, while Pilate's dilemma concerning moral choice became the historical dilemma of the 1930s.21 In The Place of the Skull the Biblical story appears in the context of the 1980s, with the contemporary narrative addressing the social problems of drugs and alcohol, the ecological problems of the natural habitat, and the military problem of potential nuclear annihilation. While the actual setting of the contemporary narrative is restricted to Russia and Kazakhstan, the novel deals with universal questions facing the entire world. The most important difference between The Place of the Skull and The Master and Margarita reflects the different philosophical connotations of both novels. In The Master and Margarita, the Pilate subplot is placed in an aesthetic context: it appears as the work of an artist within a satiric depiction of contemporary reality. In The Place of the Skull the Biblical episode appears in an ethical context, as the ultimate example of goodness and self-sacrifice that inspires contemporary characters to fight evil not with evil, but with love and compassion. In accordance with their different philosophical connotations, the two novels convey the Biblical story in a totally different language. In The Master and Margarita, the Pilate story is rendered in an elevated and solemn language, appropriate for a literary masterpiece created by the Master. In The Place of the Skull, the language of the Christ story resembles the language of contemporary Russian newspapers.22 It is a strange mixture of journalistic cliches, bureaucratic jargon, and formal language. This stylistic crudeness corresponds to the language of Avd, a self-taught journalist writing for a Komsomol newspaper.
Despite its stylistic crudeness, the Christ story remains a vital element of the novel, which underscores the dangerous imbalance between good and evil, and calls for a return to the Christian philosophy of love and forgiveness. With the help of the Christ story, The Place of the Skull conveys the message that the salvation of the world and of human values can be achieved only through conscience and repentance, sacrifice and courage. And that is exactly what Aitmatov's Jesus proclaims in His last words to Pilate: I was born on earth, to serve as an undimmed example to men, so they should hope in my name and come to me through suffering, through the struggle with evil within themselves, day after day, through disgust with vice, with violence and bloodlust that all attack the soul if it be not filled with love for God and therefore for our fellows too, for men! (p. 143). 1 Plakha was first published in Novyi mir 6, 8, 9 (1986). The first book edition appeared the following year: Plakha (Moscow: Molodaia gvardiia, 1987). 2 Vadim Kozhinov, "Paradoksy romana ili paradoksy vospriiatiia," Literaturnaia gazeta, 15 October 1986: 4. 3 Igor Zolotusskii, "Otchet o puti," Znamia 1 (1987): 221-40. 4 Sergo Lominadze, "Obsuzhdaem roman Chingiza Aitmatova Plakha," Voprosy literatury 3 (1987): 35-43. 5 Lev Anninskii, "Skachka kentavra," Druzhba narodov 12 (1986): 246-52. 6 Natalia Ivanova, "Ispytanie pravdoi," Znamia 1 (1987): 216-20. 7 Sergei Averintsev, "Paradoksy romana ili paradoksy vospriiatiia," Literaturnaia gazeta, 15 October 1986: 4. 8 The Bible's influence on Plakha has been raised but not discussed in detail. See A. Krasnovas, "Prizyv i preduprezhdenie," Druzhba narodov 12 (1986): 246-52; S. Piskunova, V. Piskunov, "Vyiti iz kruga," Literaturnoe obozrenie 5 (1987): 54-58; A. Kosorukov, "'Plakha'-novyi mif ili novaia real'nost'?", Nash sovremennik 8 (1988): 141-52. 9 Aitmatov's use of Biblical material has been discussed briefly in several Western studies.
See Robert Porter, "Chingiz Aitmatov's The Execution Block: Religion, Opium and the People," Scottish Slavonic Review 8 (1987): 75-90; Guy and Victoria Imart, "Le Procurator, l'indigène et le billot: Une 'soupe-à-la-hache': À propos du dernier roman de C. Ajtmatov," Cahiers du Monde Russe et Sovietique 28.1 (1987): 55-71; Rita Pittman, "Chingiz Aytmatov's Plakha: A Novel in a Time of Change," Slavonic and East European Review 66 (1988): 213-26; Gary Browning and Thomas Rogers, "Chingiz Aitmatov's The Executioner's Block: Through Dreams a Confrontation with Existential Good and Evil," Russian Review 51 (1992): 72-83. In his polemical article on Plakha, Anthony Olcott argued that the philosophical and religious connotations of the novel are more consistent with Islam than with Christianity; see "What Faith the God-Contemporary? Chingiz Aitmatov's Plakha," Slavic Review 49 (1990): 213-26. 10 Cf. Matthew 27:11-26; Mark 15:1-15; Luke 23:1-25; John 18:28-40, 19:1-16. 11 The story of the crocodile was excluded from the English translation of Plakha; see The Place of the Skull, trans. Natasha Ward (New York: Grove Press, 1989). This edition will be used hereafter. 12 I Kings 18:3-7. The name Avd/Obadiah designates several other people in the Old Testament, including the minor prophet after whom the book of Obadiah was named, cf. Obadiah 1. For more information on the origin of Avd's name see A. Pavlovskii, "O romane Chingiza Aitmatova 'Plakha'," Russkaia literatura 1 (1998): 118; N. Rubtsov, "Dostoinaia zhizn' na nashei planete," Moskva 1 (1988): 198. 13 Cf. Matthew 26:67; Mark 14:64; Luke 22:63; John 18:22-23. 14 Cf. Matthew 26:14-16, 21-25, 47-49, 27:3-10; Mark 14:10-11, 17-21, 43-45; Luke 22:47-48; John 13:2, 21-30, 18:2-3. 15 Like the story of the crocodile, the ballad "Six and the Seventh One" has been excluded from the English translation of Plakha. 16 Cf. Matthew 26:20-29; Mark 14:17-25; Luke 22:14-38; John 13:1-38. 17 A.
Kosorukov found the image of Sandro unconvincing, since it is constructed from two contradictory elements, cruelty and sentimentality; cf. Kosorukov 145. See also an interesting analysis of the tale in James Woodward, "Chingiz Aitmatov's Second Novel," Slavonic and East European Review 69 (1991): 201-20. 18 Cf. Revelation 6:12-17; 8:1-13; 9:1-6. 19 Averintsev 4. 20 Aitmatov's indebtedness to Bulgakov is discussed in Petr Tkachenko, "Vkus starykh istin," Literaturnoe obozrenie 5 (1987): 43-45, and Pittman 369-73. 21 Lesley Milne, Mikhail Bulgakov: A Critical Biography (Cambridge: Cambridge University Press, 1990) 255. See also J.A.E. Curtis, Bulgakov's Last Decade: The Writer as Hero (Cambridge: Cambridge University Press, 1987). 22 Several Russian critics commented on Aitmatov's unsuccessful modernization of the Biblical language; cf. N. Anastasev, "Obsuzhdaem roman Chingiza Aitmatova 'Plakha'," Voprosy literatury 3 (1987): 14-15; Lominadze 39-40; Averintsev 4. Copyright Canadian Association of Slavists, Mar-Jun 1998. Provided by ProQuest Information and Learning Company. All Rights Reserved.
Heidegger and St. Thomas: Language, Being, and Transcendence
Christopher Ryan Maboloc
Language, according to Martin Heidegger, is the house of Being (BW, 193). It is the place where Being presents itself to Dasein (There-Being); Dasein is the place whereby Being makes itself accessible to man. Language, in this sense, is constitutive of man's being-in-the-world (RH, 357). Dasein, as a mode of being-in-the-world, has the fundamental character of thrownness. By being thrown into the world, Dasein is the very place whereby the Being of beings becomes manifest. Metaphysics, says Heidegger, is the basic occurrence of Dasein (BW, 112). For Heidegger, Dasein dwells on the disclosure of Being through the nothing (the unsaid in speech), which stands as its groundless ground and source of meaning. The nothing, Heidegger says, makes possible the openness of beings (BW, 105). This openness comes to us through language, for Being "is perpetually under way to language" (BW, 239). St. Thomas, on the other hand, views language differently. Language, for him, is the means whereby the reality of being as the ultimate cause of all beings is made known to the human intellect. According to John Caputo, St. Thomas understands language as an activity of men, to be mastered and perfected like any other craft and not as a response to the address of Being (OM, 165). For St. Thomas, the reality of Being does not unfold in language; instead, through language, the reality of being is affirmed through causal participation. The unity in one source through causality is an alien concept to Heidegger. The latter recognized the immanent unity of Being in beings, but to leave this unity as a mere fact without a ground is unfinished business (KF, 54). Heidegger's understanding of language does not account for the ultimate root of the intrinsic act of existence.
Martin Heidegger: Language and Being Heidegger's analysis of the problem of Being grows out of his fascination with the word "is." The question of Being, Heidegger says, is something we keep within the understanding of the "is," though we are unable to fix conceptually what that "is" signifies (BT, 6). The word "is," therefore, expresses and opens up the issue in his metaphysics of language – the issue is Being. Magda King says that if the "is" were missing from our language, there would be no other word and no language at all (HP, 28). Language is an event that has Being as its ultimate origin, a house that is arranged according to a pattern inscribed and prescribed by it (PT, 535). This means that Being makes manifest the presence of beings through language. Being therefore reveals the truth of beings through language. Now, without the "is," language would be meaningless, for it would express no truth; nothing in it would reveal that being is and not nothing. The "is" in language presents the reality of beings, that they are beings and not nothing. It is recorded that Heidegger's quest for the meaning of Being was inspired by Franz Brentano's On the Manifold Meanings of Beings for Aristotle. Aristotle understands Being as ousia, which refers to the active, concrete and changing substance, actualized by form. Aristotle rejected Plato's abstract world of forms and considered the particular entities in the world as the really real. To be real therefore means to be a substance or to be an attribute of a substance (CH, 45). For Aristotle, substances form the structure of the world. They are objective and independently existing entities. But Aristotle's explication of substance as the real deals with beings, not Being. In this sense, Aristotle bypasses Being. But Heidegger says that we sense more in things than mere substance and accidents, for things are closer to us than the sensations that announce them (PT, 440).
Aristotle examined beings in his metaphysics but was oblivious to the fact that they are the manifestations of Being. Aristotle, therefore, is oblivious to Being. Furthermore, Aristotle defines language as a sound that signifies something (RH, 363), which means that he is not aware of the role of language in the disclosure of Being. Aristotle is ignorant of the radical role that language plays in the disclosure of beings. According to Heidegger, Being comes to man’s awareness because man belongs to language. Thus, he says, it is the home where man dwells (BW, 193). This belongingness means that of all existing beings only man can question Being. And the reason for this, according to Heidegger, is that human existence means standing in the lighting of Being (BW, 204). For Heidegger, human existence thoughtfully dwells in the house of Being (BW, 239). Dasein, by being thrown into the world, lives in this house. Dwelling in the house of Being enables man to speak of a world. Henceforth, it is language that makes the world a world for man, a world where his possibilities are realized. To speak of the world, then, means to speak of Being. Man, by being-in-the-world, stands in front of Being. Thus, man as Dasein bears witness to Being, gives voice to Being (KF, 53). What is the difference between St. Thomas’s and Heidegger’s conceptions of Being? Being for St. Thomas is not the lighting-up process but the ipsum esse subsistens that renders beings their being by way of causal participation. Language for St. Thomas addresses Being in a different way. For St. Thomas, every being (ens) is a being insofar as it participates in esse. Being for St. Thomas is the cause of the act of existence in beings. This distinction between Being as ipsum esse subsistens and beings as ens is closely related to Heidegger’s distinction between Being and beings. The reason for this, according to Caputo, is that ens derives its meaning from esse.
A being is a being insofar as it is referred to the act of existing, which in its unparticipated state is pure act. St. Thomas, then, Caputo says, cannot be accused of oblivion of the ontological difference between Being and beings. Instead, Caputo accuses St. Thomas of conceiving language as something that is in no way related to the problem of Being. The point that we wish to consider here is that St. Thomas’s concern with language was not alethiological but analogical. This will be the contention that we shall try to develop. The explication of this stand opens up an inadequacy in Heidegger’s conception of language and points to a basic difference between his thought and that of St. Thomas: that, for St. Thomas, Being is the ultimate source of all reality; whereas, for Heidegger, Being simply means the lighting up of what is there. For Heidegger, it is through the nothing that the openness of the meaning of beings is revealed. But, as we will point out later on, nothingness reveals the reality of human finitude but never enters into the deeper context of answering the ultimate source of the meaning of human existence. Nothingness opens up the possibilities of human finitude but never addresses man’s hunger for the ultimate reason of his existence. Nothingness only tells man that he is a being and not nothing. But it never answers why. Language, in this regard, reveals the reality that Dasein is, but only that.
The Nothing in Language
What is in language that allows the possibility of saying? If language is the place where Being comes into light, then there must be something in language that allows this coming-into-presence and self-concealing as its source or ground. For Heidegger, in the very instance of whatever is said, a hidden plenitude is left unsaid (TT, 172). This plenitude enables the possibility of saying.
This plenitude refers to the nothing, the unsaid in speech, which “presupposes the possibility of saying, of disclosing” (RH, 358). Heidegger says: The nothing comes to be the name for the source not only of all that is dark and riddlesome in existence, which seems to rise from nowhere to return to it, but also of the openness of Being as such and the brilliance surrounding whatever comes to light (BW, 93). This nothing is the veil of Being (HP, 11). Ancient metaphysics, according to Heidegger, conceives the nothing in the sense of non-being, that is, unformed matter, matter that cannot take form as an informed being (BW, 109). Thus, for a long time, metaphysics exposed the nothing to only one meaning: ex nihilo nihil fit – from the nothing, nothing comes to be (BW, 109). But Being and nothingness belong together, for Heidegger says, “the Nothing functions as Being” (WM, 353). What does this mean? For Heidegger, the Nothing is an abyss, the groundless source of meaning where the reality of beings is made manifest. He says, “if man is to find himself again into the nearness of Being, he must first learn to exist in the nameless” (BW, 199). The nameless is the silence in speech. Silence presupposes the fact that one has something to say. But science and mathematics, according to Heidegger, have dismissed the nothing as meaningless. Science gives up the nothing as a nullity. Thus, he states that, for these two fields, what should be examined are beings and, besides that, nothing; beings alone, and further nothing; solely beings, and beyond that, nothing (BW, 97). Science rejects the nothing precisely because scientific language requires methodical objectivity. The scientist sees the nothing as empty, as something that is devoid of any objective sense. Thus, for the scientific discipline, the silence of the nothing does not say anything. But silence is not all silence. Silence says something.
What silence reveals is the possibility of saying something about what still remains hidden. Being, says Heidegger, is encountered in this silence. But where can we find this silence? Heidegger says that: If the nothing itself is to be questioned as we have been questioning it, then it must be given beforehand. We must be able to encounter it (BW, 100). The nothing, according to Heidegger, reveals itself in anxiety (BW, 103). Anxiety makes us silent: because of anxiety, all we have to say falls silent, and the reality of beings slips away. But what is anxiety? Anxiety, Heidegger says, is not a kind of grasping of the nothing (BW, 104). Anxiety refers to the state of mind that brings man to the indeterminate possibilities of his existence. In speech, this state of mind points to the indeterminate possibilities of saying. What anxiety reveals to us is that through the nothing the reality of beings comes into light, that they are beings and not nothing. Anxiety, then, opens up the meaningfulness of beings. Henceforth, the dismissal by science of the nothing implies its annihilation of the Being of beings. The rejection of the unsaid in language means the dismissal of the meanings still concealed in such silence. An instance of being held out into the nothing in speech occurs when one travels to a faraway place and bids goodbye to a beloved. During this anxious moment, one says goodbye and the girl says nothing, remains silent. But this silence opens up the Being of the girl. Her silence reveals that there is something in her that she wants to say. Her silence discloses something about her. Her silence means something. Her silence captures her Being as a girl who is in love with someone who will be leaving her. Her silence opens up what the departure means to her and to their relationship. Thus, ex nihilo omne ens qua ens fit (from the nothing all beings as beings come to be) (BW, 110).
St. Thomas Aquinas: Being and Analogy
Language for St.
Thomas addresses the question of Being in a manner different from that of Heidegger. St. Thomas’s metaphysical inquiry on language begins with the question “Can we use any words to refer to God?” (ST, q. 13, art. 1). Language, for St. Thomas, acts as a bridge that enables man to gain a metaphorical insight into Being. What is grasped is only metaphorical because man does not have direct knowledge of Being. As we will show later on, all of our knowledge of Being is only by way of negation (AR, 139). We know through God’s effects that God is, that God is the cause of other beings, and that God is super-eminent over other things and set apart from all (SCG, I, 30, no. 4). Thus, when we say, “God is good,” what we mean is that “God is good, but not in the way we are.” St. Thomas’s concern, then, is to know how, for instance, goodness can be predicated literally of God. To say “God is good” means that goodness as a perfection is present in man but only in a finite way; God, as the ultimate source of this perfection, is infinitely good. Any knowledge of God can be based only on metaphorical resemblance with beings as His effects. But first, what does God as Being mean for St. Thomas? We have seen in Heidegger that Being is the Being of beings that makes them manifest. The metaphysics of St. Thomas, on the other hand, is a metaphysics of causality which takes into account the causal relationship between Being and beings. This is something alien to Heidegger. For St. Thomas, Being is the ipsum esse subsistens that renders beings their esse or existence. Thus, his metaphysics is a metaphysics of creation, which makes esse the most fundamental act that gives beings their principle of existence. It is esse that makes beings be. In this sense, Being is the ultimate source of all beings. Henceforth, beings are beings by virtue of their participation in esse.
And Being, as the unlimited source of existence, is present in all beings, not as part of the essence or nature of beings, but as an agent is present to that upon which it acts (AR, 62). Explaining this point is very important in understanding how language brings us to an indirect knowledge of Being. How does any word describing Being become meaningful? The Thomistic tradition contends that any language dealing with Being is used to signify something transcending all things, but we make such language meaningful by demonstrating from effects that Being exists, for, as we shall observe, any language about Being is derived from these effects (TA, 259-261). By this, St. Thomas means that any language that deals with God is finite, and since the finite being is a creature of God, there must be a way in which the finite language of beings could describe God. In the Summa Theologica, St. Thomas asks, “Are words used univocally or equivocally of God and creatures?” (ST, q.13, art.5). St. Thomas says that the univocal predication of God and creatures is impossible, for every effect falls short of what is typical of the power of its cause (ST, q.13, art.5). Any language that deals with God cannot have a univocal meaning, for “we never use words in exactly the same sense of creatures and God” (ST, q.13, art.5). On the other hand, any language that deals with God cannot be purely equivocal, for this would mean that God is totally distinct from His creatures, and it would make God totally unknowable. Hence, the solution, according to St. Thomas, is that: In this way some words are used neither univocally nor purely equivocally of God and creatures, but analogically, for we cannot speak of God at all except in the language we use of creatures, so whatever is said both of God and creatures is said in virtue of the order that creatures have to God as their source and cause (ST, q.13, art.5). God as Being gives perfection to all beings and, therefore, is both like them and unlike them (AR, 135).
Thus, when we speak of Being as the ultimate source of existence, we use analogical language by virtue of this resemblance. Our being like and unlike Being comes from our participation in esse. Any word, then, that we use in order to describe God results from our being created in God’s image and likeness. St. Thomas is concerned to maintain that we can use words to mean more than what they mean to us: that we can use them to understand what God is like, that we can reach out to God with our words even though they do not circumscribe what He is (TA, 293). Thus, to say “God is good” does not mean we go beyond the meaning of the word good. Rather, it is entering into the deeper meaning of the word in order to find there a trace of God’s presence in His creatures. To go deeper into the meaning of the word means to transcend the finitude of this word. To transcend this finitude means to trace the presence of God in His creation.
John Caputo and W. Norris Clarke on St. Thomas and Heidegger
Caputo offers a critical analysis of St. Thomas’s conception of language in his book Heidegger and Aquinas: An Essay on Overcoming Metaphysics. Caputo says that St. Thomas remains oblivious to the radical role played by language vis-à-vis Being (OM, 158). According to Caputo, the idea never entered St. Thomas’s mind that language opens up the field of presence in which we dwell, that language shapes the whole understanding of Being (OM, 164). Caputo accuses St. Thomas of using language only in a technical sense. His argument is that St. Thomas merely used language as a means of communicating the meaning of Being. Language simply had no role in the formation of meaning, and its value is reduced to being a sign of communication that human beings utilize. In Heidegger, Caputo argues, “language is Being’s own way of coming to words into human speech,” and this means that “it is not man who speaks but language itself” (OM, 159).
Language, according to Caputo, bids the coming-into-presence of things in the world. Thus, language does not only express the world; it is the light that makes the world a world for man. Language is not just a representation of meaning but that which gives meaning. Language cannot be reduced to a mere means of communication. It is not just a sign that signifies something. It is the very way in which the meaning of something comes into the open. St. Thomas neglects such an idea, Caputo asserts. Language for St. Thomas does not possess this radical role because St. Thomas, says Caputo, “is innocent of the encompassing importance of language in bringing beings to appearance, in letting them be in their Being” (OM, 158). But Caputo’s critique of Thomistic language simply proves that Heidegger’s metaphysical understanding of language is different from St. Thomas’s understanding. Analogical language is never alethiological, and alethiological language is never analogical. According to Fr. Norris Clarke, Heidegger, as a phenomenologist, “can only describe how Being actually appears in consciousness” (KF, 55). Therefore, he has not gone “to the necessary ontological conditions of possibility or intelligibility of what appears, not even to the intrinsic act of existence within beings” (KF, 55). In this regard, Heidegger simply imprisons man within his finite existence. Why? It is because Being, in Heidegger’s sense, is only immanent, not transcendent (KF, 52). This claim has an important implication for Heidegger’s conception of language. Heidegger merely confines language to man’s finite existence. Therefore, language, in the Heideggerian sense, does nothing to address the problem of the unity of beings in a transcendent Being as the ultimate source of their being. In view of this, Heidegger may very well be accused of ignoring the analogical character of language, which allows the possibility of transcending the finitude of language.
Heidegger has not gone deeper into the power of language to signify the causal relationship between Being and beings (between God and the human person). Heidegger’s conception of language does not allow man to find a deeper context for his finite condition. Thus, when man is placed within the limiting horizon of finite existence, he will be unable to raise the question of a transcendent Being in which his existence is rooted (EM, 138). Heidegger is forgetful of the capacity of language to trace the unity between Being and beings in the intrinsic act of being. Knowledge by analogy helps man point to a deeper context of his existence – transcendence. The truth is that Heidegger neglects the insight that analogy presupposes the reality of an ultimate source of intelligibility for the existence of creatures.
Language and Transcendence
Heidegger’s conception of language limits man to his finite possibilities. It does not answer man’s quest for the ultimate root of the meaning of his existence. The problem is that Dasein merely waits for Being to manifest itself. Dasein cannot reach any meaning beyond his finite condition because he has to wait for Being to reveal this meaning to him through language. In this sense, language owns man, and man is forever at the mercy of Being’s revelation in history. This has an immense implication for humanity. For instance, Heidegger cannot accuse the Nazis of immorality, for the emergence of that part of history is nothing but one of Being’s manifestations in human history. St. Thomas’s conception of language, on the other hand, enables man to transcend his finite condition and enter into his final unity with the Source. Language signifies the relationship between man and the ultimate source of his existence, Being. This transcendence is impossible in the Heideggerian notion of language. Transcendence is not brought about by anxiety. Anxiety is a purely finite condition and, as such, can only reveal the reality of man’s finitude.
The meaningful context of transcendence is revealed to us, according to St. Thomas, only by our desire to know Being. This desire, or love of truth, reveals itself. St. Thomas’s conception of language enables man to transcend his finitude and find the presence of Being in his own existence as its ultimate ground and source. The inadequacy, then, of Heidegger’s conception of language lies in its inability to trace the ultimate ground of the intrinsic act of existence among beings. Heidegger’s problem, then, is that he does not answer the most important question raised by St. Thomas for metaphysics: “Why is there something rather than nothing?” To answer such a question is to account for the reason why beings exist. If raising the question of Being is important for metaphysics, to retrieve it from the dust of tradition and scientific reasoning, then it is also valuable for Dasein, or man, to answer this question in order to quench his thirst for the ultimate meaning of his existence. Saying that Dasein is, is not enough. There is a horizon beyond the finite character of Dasein. Such a horizon is the response to the question why being is and not nothing. This is the horizon of the transcendent Being, the ultimate source of all creation, the very reason indeed why beings are really real. Finally, it can be stated that Heidegger’s understanding of Being is a kind of historical domination. For him, Being determines the meaning of the world for Dasein, but the problem is that even the Transcendent will have to submit to this historical unfolding. But God does not dwell in man’s historical consciousness the way finite beings do. Man must extend beyond his finite consciousness in order to raise the question of transcendence. For Heidegger, this only happens if God presents Himself to our own historical consciousness. Even God must submit to Heidegger’s Being for man to know that He exists. But such a notion essentially erases the radical orientation of the human mind to the truth of Being.
This is an orientation not only to the presence of things, but more importantly it is a deep drive that transcends our mere consciousness of a world. Limiting ourselves to the horizon of the world does not end our infinite hunger for the ultimate meaning of human existence. Henceforth, man must cross the bridge that brings him to the ultimate meaning of his being. This is a bridge that St. Thomas offers us, a bridge that unites us with one transcendent Being as the ultimate ground and source of all reality.
Former German Territories in Poland
Polish Province: Lubuskie
The Early History of Neumark
The region of the former Prussian province of Brandenburg east of the Oder river was referred to as the Neumark (New Borderland). Its history is inseparable from that of the rest of Brandenburg. Originally the Neumark was inhabited by several different Germanic tribes: the Burgunder, Rugians, Semnonen, and Vandals. During the 3rd and 4th centuries many of the "Elbe-Oder-Vistula" tribes moved south, leaving sparse groups behind. This allowed several small Slavic tribes (collectively referred to as the Wends) to penetrate the region around 500 AD. The remaining Germanic tribes intermarried with the Wends (Leubuzzi, Lusizzi, Pomeranians, and Redarii). As a result, many of the original Germanic names of rivers and other topographical features remained, as well as some aspects of the culture. In the early 1200's, the Neumark belonged to the Duke von Glogau of the famous Polish Piasten family. The Piastens encouraged settlement by selling pieces of land to German knights and monasteries. Most of this land passed to the German Markgraf von Brandenburg, who began settling the Neumark. The Markgraf von Brandenburg sent out messengers (similar to Helmold's chronicle), "... into all the regions ... to Flanders and Holland, to Utrecht, Westphalia and Frisia, proclaiming that all who were oppressed by want of land should go thither with their families; there they would receive the best of soils, rich in fruits and abounding in fish and flesh, and blessed with fine pastures ...". At the time the area was largely uninhabited forest except for a few Wendisch settlements. It was then settled by Dutch, Flemish, Frisian, and German settlers in the mid-1200's. In 1242, the Neumark was formally founded. The Wends in the area were "Germanized" and absorbed into the local German culture.
The dialects of the Neumark were influenced by the settlers, who came primarily from northwestern Germany – Westphalia (although some came from Hessen and Thüringen) – and the Netherlands. As a result the dialects had a certain "Platt" German or Dutch characteristic. Settlement occurred by building a series of towns surrounded by villages. Each city and village was planned using the then-current civic design, which could be duplicated. The towns offered a source of commerce and safety for the outlying villages in the dense forest of the Neumark. The capital required for settlement was substantial. An undertaker or locator ("Unternehmer") would personally undertake the founding of a town or village. Often these Unternehmer were wealthy bürger, but not uncommonly knights or Ministeriales (bureaucrats) of the Markgraf. In return they would receive large portions of land and the position of village magistrate or "Schultheiss". Settlers usually received 1 Hufe (hide), which was equivalent to 42-60 acres, to farm. Their holdings were freely alienable and heritable, and they farmed the land as they saw fit. To help colonists get on their feet, they were granted a number of tax-free years, and often the clergy remitted the tithe in whole or in part for a number of years. An extensive series of castles was also built to defend the Neumark from the Poles. Villages in the Neumark were planned along the Waldhufen design. This design was ideal for the dense Neumark forests. Usually the founding of a Waldhufen village began in a forest clearing. The Waldhufen village consisted of 50-60 Hufen (1 Hufe = 42-60 acres). Homes were arranged in a row along the village street, separated about a Hufe apart from each other. The fields were arranged in long narrow strips side by side that ended at the edge of the dwellings. Another, more traditional village design occasionally used was the Gewanndorf. This was used in open fields. Each Gewann encompassed 1 Hufe in furlong fashion.
However, the Neumark's dense forests hindered the establishment of the traditional Gewanndorf, instead favoring the Waldhufen. In some areas swamps and marshes were drained or diked to provide more arable land, utilizing the settlers' skills from their original homelands. The Wendisch settlements already in existence developed over time in a similar manner to the German village design. The primary motivation for settlers was the feudal system and the lack of land in the West. The East represented an abundance of land, self-determination, and freedom from feudal control. Local nobility did not interfere with civil matters and exercised no jurisdiction over the cities or villages. Often the noble was a peasant's neighbor rather than a landlord. The people and towns were directly responsible to the Markgraf von Brandenburg. To the Markgraf von Brandenburg, the Neumark represented a new land free of cumbersome feudal aristocracy. As a result of being unhindered by old traditions and constraints, the progressive society of the Neumark quickly prospered. Already by the late 1200s and early 1300s, corn from the Neumark was being sold in the markets of Flanders, Frisia, the Netherlands, and western Germany. The terror of the Bubonic Plague or Black Death reached the forests, villages, and towns of the Neumark in 1351 from the west (although it originated in the Russian Steppes). The population identified the plague with the "Pest Jungfrau", who flew through the air as a blue flame, who only had to raise her hand to infect a victim, and who was often seen emerging from dead victims' mouths in this guise. With the arrival of the Plague in the Neumark came the Brethren of the Cross or the Flagellant Movement. The Flagellants were a European religious movement that believed in placating "God's wrath" (the Plague) with self-inflicted whippings of penance.
They marched from city to city in a somber procession, each carrying a whip with metal studs and trying to outdo his neighbor in pious suffering with self-inflicted whippings. Most of the populace just watched in amazement as the Brethren chanted hymns and went about their display before moving on to the next city. Often locals would celebrate the arrival with fiddle and drink. People were not eager to join but hoped the Brethren's efforts would stop the Plague. Occasionally the arrival of the Brethren even stirred a spiritual reawakening in the locals. Local clergy wisely avoided confrontation with the Brethren. The Flagellants also became infamous for slaughtering Jews (who supposedly were agents of the Pest Jungfrau, poisoning wells with a powder from the Orient). Many of the few Jews in Brandenburg and the Neumark fled for safety to Poland. Rather quickly, local clergy, church officials, and nobility became alarmed at the Flagellants' practices and numbers. The movement was often not allowed entrance into cities or the use of churches. After being condemned by Pope Clement VI and various other church officials and nobility, the Flagellant Movement was violently extinguished in the Neumark and the rest of Europe. The Pest Jungfrau appeared again in 1356, exacting a particularly high toll among children. The population began to slowly recover in the 15th century, although hard-hit areas took 100 years to recover. Despite the terror and drastic population decline, the period after the Plague was quite prosperous in the Neumark, as vacant positions needed to be filled and people inherited dead relatives' fortunes and land. In 1402 King Sigismund (who was also the Elector of Brandenburg) sold the Neumark along with the Driesen region (1408) to the Deutsche Orden (Teutonic Knights). In February of 1454 the Teutonic Knights defeated Poland at the Battle of Könitz and sold the Neumark to the Markgraf and Elector of Brandenburg, Friedrich II der Eiserne (The Iron) von Hohenzollern.
Under Joachim I the Renaissance blossomed in Brandenburg and the Neumark. He encouraged and supported it financially, attracting many great lawyers, theologians, architects, smiths, and artisans from Meissen and Saxony. The designs and work of the great Italian masters could be seen in the cities of the Neumark and in particular in the Bürgerhäuser (wealthy citizens' homes) of Küstrin. The University of Frankfurt an der Oder was established in 1506 by Joachim I, who fulfilled his father's dream of creating a university. The university quickly became renowned under the direction of the Dominican Order. In 1535 Markgraf Joachim I died. His sons, Joachim II and Hans, divided his land between them. The youngest, Hans, received all of the land east of the Oder River (the smaller portion), the Neumark. The Neumark was now a separate state, standing on its own. Hans became known as Markgraf Hans von Küstrin and went about building up Küstrin as his capital city of the Neumark. In order to finance his building projects he raised taxes and created a beer tax or Biersteuer (very unpopular). Hans enjoyed traveling throughout the Neumark and getting to know his subjects. On one occasion he disguised himself as a Danish soldier. In his travels he encountered an innkeeper's wife in Ziebingen (Kreis Weststernberg). Upon entering the inn he proceeded to question the woman about her views of the Markgraf's government. She told him that she knew only what others were saying: that the greedy Markgraf, his building projects, and his taxes, in particular the Biersteuer, were very unpopular. The Danish soldier then called the Lords von Löben and von Ziebingen into the room, both of whom greeted the Markgraf. The innkeeper's wife realized immediately who the Danish soldier really was, and she fell prostrate to the floor before him. Hans laughed and gave her a friendly hand, saying that he rarely hears such truth as hers from his council.
The Markgraf then halted his building projects, and he proved to be a popular, benevolent ruler. Stories about his interactions with common people abound. The Neumark remained a separate entity until 1571, when it rejoined the rest of Brandenburg. The Holy Roman Empire became engulfed in religious and social turmoil during the Reformation of the 1500s. The Neumark, like the rest of Brandenburg, readily accepted "Wittenbergisch Christentum" (Lutheranism) over "Römisch Christentum" (Catholicism). Luther's teachings were officially recognized by the Holy Roman Empire at the Augsburg Conference of 1530, although Calvinism was not. However, problems and animosity existed between Catholics and Protestants, problems which the Peace of Augsburg in 1555 failed to solve. By 1608 Protestant lands had formed the Evangelical Union, and this was soon followed by the Catholic Holy League in 1609. In 1618, the von Hohenzollerns of Brandenburg inherited the Duchy of Prussia, linking Brandenburg, the Neumark, and Prussia politically, a bond that would not be broken for 329 years, until 1947 (when Prussia was abolished). In that same year events in Prague ignited perhaps the most devastating and shaping event of German history, the Thirty Years War (1618-48). Imperial and Protestant armies engulfed the empire in warfare and chaos. Brandenburg and her allies represented Protestant interests against Imperial Catholic interests. By 1625 the war had reached the Neumark as the desperate Protestant and Danish armies battled unsuccessfully against Imperial armies. At the same time the Neumark was menaced once again by the Bubonic Plague. Imperial armies, 50,000 strong, under General von Wallenstein trampled across the Neumark. Often on the heels of the Imperial armies came equal numbers of followers bent on looting and plunder.
Later, Swedish King Gustavus Adolphus (The Lion of the North) came to the defense of the Protestant armies and chased the Imperial armies out of the Neumark, freeing Landsberg and Küstrin. No village, no family, noble or peasant, was spared the absolute destruction of the war in the Neumark. Some historians believe that the Neumark and many other parts of Germany did not fully recover for well over 100 years. An estimated one third of the population was killed. The Peace of Westphalia in 1648 brought an end to the bloodshed and the destruction. In 1701 Friedrich I (originally Friedrich III until he became King), ruler of Brandenburg and the Duchy of Prussia, united the two into the Kingdom of Prussia. From then on, Brandenburg remained the seat of Prussian power, and the Neumark became part of Prussia. In 1722 the poet Anna Louisa Dürbach ("Die Karschin") was born in Hammer near Schwiebus. Anna became known as the "German Sappho" because of her antiquated style and criticism of the time in which she lived. "... So grün der Wald, so bunt die Wiesen, so klar und silberschön der Bach, die Lerche sang für Belloisen und Belloise sang ihr nach." After winning the Austrian War of Succession (1740-6), King Friedrich der Grosse (The Great) embarked on ambitious land reclamation projects in the Neumark along the Netze, Oder, and Warthe Rivers. Efforts began near Stettin an der Warthe, utilizing soldiers and military engineers (due to a labor shortage) to build dikes and drain marshes to obtain land for agriculture. Already by 1753 some 4,000 colonists had been settled on reclaimed land. The marshes near Küstrin were drained and a canal was dug, shortening the run of the Oder River by 18 miles and linking Küstrin with Friedewalde. The canals around Küstrin (today Kostrzyn, Poland) can still be seen on maps.
Similar projects were undertaken along the Netze River under the direction of Franz von Brenkenhoff, draining swamps and creating a route to the Baltic Sea via a series of canals linking the Bromberg Canal to the Vistula River. Amt Driesen attracted many Mennonites, who settled on the reclaimed land. In total, the land reclamation projects in the Neumark could support 11,200 settlers, and Friedrich der Grosse took great pride that his projects had established 122 new villages. Austria, still smarting from its loss to Prussia, began lining up allies sympathetic to its cause and planning for action. Friedrich, upon learning of these plans, made the first move by invading Saxony. This began the Seven Years War (1756-63), which had ruinous effects on Brandenburg and the Neumark as the four greatest armies of Europe (Austria, France, Russia, and Sweden) converged devastatingly on the Neumark. Friedrich did a remarkable job holding off the four armies. The famous Rococo poet and physicist Ewald von Kleist (great-uncle of Heinrich von Kleist), mortally wounded, died on the battlefield at Kunersdorf in 1759. His exploits and poems were remembered in Lessing's 1767 comedy "Minna von Barnhelm". On February 14, 1763, the Seven Years War came to an end with the signing of the Peace of Hubertusburg, leaving things as they had been prior to the war. Once again the Neumark lay in utter confusion and desolation. Whole regions were depopulated; it is estimated that a quarter of the population died. Küstrin was burned to the ground, as were many other cities and thousands of homes. The Neumark had not seen such devastation since the Thirty Years War. Friedrich der Grosse spent the rest of his life and huge sums of money rebuilding Prussia, and especially the Neumark. He not only financed thousands of homes but also supplied food, seed, building materials, and horse-and-wagon teams to the Neumark. Küstrin alone was rebuilt at a staggering cost of 700,000 Thalers.
In 1815 a portion of the province of Schlesien (Kreis Sorau) was incorporated into the Neumark. When the Kingdom of Prussia united with the other German states (except Prussia's competitor, Austria) to form the German Reich in 1871, the Neumark and Brandenburg became part of the new German nation. Berlin was the capital of Brandenburg until 1920, when it became a province in its own right and Potsdam became the capital of Brandenburg. The Neumark belonged to the Brandenburg district (Regierungsbezirk) of Frankfurt an der Oder. In August of 1928 the famous poet Gottfried Benn died in Crossen (born in Sellin). In his last words he quoted Alfred Henschke's (another Neumark poet, known as Klabund) "Ode an Crossen": "Oft/ Gedenk ich deiner/ Kleinen Stadt am blauen/ Rauhen Oderstrom,/ Nebelhaft in Tau und Au gebettet/ An der Grenze Schlesiens und der Mark,/ Wo der Bober in die Oder,/ Wo die Zeit/ Mündet in die Ewigkeit-". In 1938 Kreis Meseritz and Kreis Schwerin-Warthe became part of Brandenburg; both had been part of the province of Grenzmark and, prior to 1920, part of Posen. In a similar move, Kreis Arnswalde and Kreis Friedeberg were incorporated into the province of Pommern. In 1945, in the closing days of the Nazi Third Reich, the Neumark and Brandenburg were the sites of extremely fierce fighting as the Red Army advanced from the east and southeast toward Berlin, and millions of refugees attempted to flee the Red Army and the atrocities that followed. As of Jan. 11, 1945 the fighting along the Eastern Front was established deep in Poland along the Vistula River, just east of Warsaw, and north near the East Prussian-Lithuanian border. On Jan. 12, 1945 the Red Army launched a massive attack of 1,000,000 men and 7,000 tanks against 400,000 Germans and 1,000 tanks. Strong points able to withstand the attack were bypassed as the Red Army headed for Berlin. By the end of the month the Soviets were deep in the Neumark.
At this time the Neumark and other areas were subjected to a spree of unparalleled savagery, ethnic cleansing, and (as Stalin ordered) the deliberate, "systematic terrorization of German women" (including young girls). Structurally, the city of Landsberg survived largely unscathed because the mayor surrendered the city on Jan. 30, 1945. To the north, several battles were fought near Königsberg Nm. By Jan. 31, 1945 the Red Army had reached the eastern edge of Küstrin, and in most places it had reached the shores of the Oder River. In its path, however, lay the heavily defended cities of Frankfurt an der Oder and Küstrin. The Red Army needed to take both in order to establish bridgeheads across the Oder for a final drive toward Berlin. Küstrin was tenaciously defended under General Busch as wave after wave of Red Army attackers was slaughtered attempting to cross the drainage ditches and canals. Spring flooding also hampered Red Army efforts to take Küstrin. After months of turning back overwhelming Soviet armies, the defenders finally succumbed, and on March 28, 1945 Küstrin surrendered. Soviet Marshall Georgi Zhukov made Küstrin his headquarters for the final drive on Berlin. Across the Oder River on the Seelow Heights, German troops began to dig in and wait for the onslaught. In 1945, as a result of the July Potsdam Conference, the Neumark, Silesia, most of Pomerania, and West and East Prussia were given to Poland by the Soviet Union, after the inhabitants had fled, been expelled, or been killed during the so-called "Silent Genocide". The regions were then settled with Poles from the land lost by Poland to the Soviet Union in the east, chiefly the Ukraine. In many cases the beds were still warm. The Poles did their best to remove any traces of the Neumark's former inhabitants, and cemeteries and monuments were often destroyed.
The descendants of settlers who had cultivated the soil and built prosperous towns and idyllic villages now found themselves refugees without a home. After the war, the DDR (Deutsche Demokratische Republik, Communist East Germany) insisted the new Oder-Neisse line/border was provisional. But on a bridge over the Neisse River in July of 1950, in the divided former city of Görlitz, the prime ministers of the DDR and Poland recognized "perpetual borders of peace." The West German government did not recognize the new German-Polish border until Dec. 7, 1970, and even then not as a legally binding settlement. After the reunification of East and West Germany, the new German-Polish border was legally confirmed on Nov. 14, 1990 in Warsaw, Poland. This officially ended the history of the Neumark, and a chapter of German history was closed. Today the majority of what was the Neumark is the Polish province of Lubuskie (Lebus / Lubusz). Copyright © 2004 PrussianPoland.com
The diversity of reproductive strategies in nature is shaped by a plethora of factors including energy availability. For example, both low temperatures and limited food availability could increase larval exposure to predation by slowing development, selecting against pelagic and/or feeding larvae. The frequency of hermaphroditism could increase under low food availability as population density (and hence mate availability) decreases. We examine the relationship between reproductive/life-history traits and energy availability for 189 marine gastropod families. Only larval type was related to energy availability, with the odds of having planktotrophic larvae versus direct development decreasing by 1% with every one-unit increase in the square root of carbon flux. Simultaneous hermaphroditism also potentially increases with carbon flux, but this effect disappears when accounting for evolutionary relationships among taxa. Our findings are in contrast to some theory and empirical work demonstrating that hermaphroditism should increase and planktotrophic development should decrease with decreasing productivity. Instead, they suggest that some reproductive strategies are too energetically expensive at low food availabilities, or arise only when energy is available, and others serve to capitalize on opportunities for aggregation or increased energy availability. Organisms are biological machines that require energy to perform work, maintenance, growth and reproduction [1–3]. When energy is limited, competition increases, which in turn drives adaptation to increase energetic efficiency. Ultimately, individuals better adapted to secure, or use, energetic resources can convert this energy into offspring more efficiently. As such, energetics is a key component of modern evolutionary theory, deeply rooted in the principles of Darwin and Malthus.
Three distinct types of energy affect biological systems: solar radiation in the form of photons, thermal kinetic energy as indexed by temperature, and chemical potential energy stored in reduced carbon compounds. Both thermal kinetic energy and chemical potential energy are posited to influence reproductive strategies [8,9]. Thorson hypothesized that both poor food conditions and low temperatures should slow or postpone larval growth, thereby increasing larval life duration and ultimately predation exposure in the water column. Lower food and temperatures would thus decrease the frequency of feeding planktonic stages (i.e. planktotrophic larvae), and lower temperatures would decrease the frequency of non-feeding planktonic stages (i.e. lecithotrophic larvae), at higher latitudes and deeper depths. This finding was supported by later modelling efforts. More recent theoretical and empirical work also found that larval duration in the water column increases with decreasing temperature. Alternatively, lower food availability may promote adaptations for planktotrophic larvae because these larvae tend to disperse away from areas of low food concentration. Indeed, planktotrophic development may be more frequent at extreme low food availability, as populations in oligotrophic regions may represent sinks maintained only through continued recruitment requiring dispersal from distant population sources [14,15]. In contrast to these predictions of larval type with production, others have concluded that 'the presence or absence of a feeding larval stage is only weakly and indirectly related to allocation of energy or materials to production' [16, p. 339]. Pearse et al. concluded that in many taxonomic groups the geographical patterns suggested by Thorson do not exist and the processes generating these patterns may be taxon specific.
These two contrasting theories alternatively predict that planktotrophic larvae should be more prevalent at either high or low food availability. The results in support of these two different patterns are mixed. Thorson based his original hypothesis on the findings that planktotrophic larvae were under-represented among marine species near the poles, regions he equated with low temperatures and productivity. Based on the limited data available at the time, Thorson also suggested that the deep sea, a low food availability environment, was dominated by non-pelagic larval strategies. Support for Thorson's hypotheses can be found in recent studies involving more direct tests that explicitly quantify both temperature and productivity. Fernandez et al. tested for variation in species richness of planktotrophic and direct developers, i.e. pelagic feeding versus non-pelagic, in crustaceans and molluscs from the Chilean shelf as a function of both sea surface temperature and minimum chlorophyll concentration. Increases in temperature led to significant decreases in direct development but increases in planktotrophic development in both molluscs and crustaceans. Increases in chlorophyll a (chl-a) concentration led only to increases in direct-developing crustaceans and planktotrophic molluscs, but both relationships were weak. In a subsequent study of crustacean and mollusc species on the southeastern Pacific and southwestern Atlantic coasts, planktotrophic species richness was found to decrease polewards while that of direct developers increased, reflecting variation in temperature and not chl-a. Similarly, calyptraeid gastropods exhibit decreases in planktotrophic and increases in direct development with increasing latitude. Recently, Marshall et al. compiled information for 1500 species across five phyla, finding that planktonic larvae and smaller egg sizes were more common where temperature and/or productivity were high.
Yet other studies do not support Thorson's hypothesis, instead finding that planktotrophic larvae are more frequent in habitats inferred to be energy limited. In a transect from the northwest Atlantic Ocean, from depths of 478–4970 m, planktotrophic development in gastropods increased with increasing depth and a presumed reduction in food availability. Likewise, in both the eastern and western North Atlantic, planktotrophic development becomes the predominant strategy in gastropods inhabiting the abyssal plains, an environment associated with extremely low food availability. Bradbury et al. also found that planktonic larval duration increased with both increasing depth (up to 1000 m) and increasing latitude—opposite of the patterns predicted by Thorson. Antarctic taxa also exhibit levels of pelagic development equivalent to those of more temperate regions [8,17]. Hypotheses have also been put forth to predict shifts in other reproductive strategies over gradients of energy availability. The frequency of hermaphroditism may also increase under low food availability as population density, and hence mate availability, decreases [23,24]. Yet, to our knowledge, this hypothesis has never been quantitatively tested in marine organisms. Here, we assemble and analyse a database of reproductive traits, life-history attributes and energy availability, measured as particulate organic carbon (POC) flux and bottom temperature, for marine gastropod families of the western Atlantic Ocean. We include in our database all 189 families present in the western North Atlantic for which we have life-history data available. We specifically test whether food availability and temperature increase or decrease the presence of specific reproductive strategies with regard to gametes, embryos, larvae and hermaphroditism. We build upon prior work by including an additional suite of reproductive strategies, using an evolutionary framework and explicitly quantifying energy availability.
Additionally, we include multiple families that have been excluded from prior analyses, whose geographical ranges predominantly occur in deep oceans characterized by low productivity levels. The inclusion of these groups allows us to explore clines over a much greater range of energy availability.
2. Material and methods
For each gastropod family, as defined by Bouchet & Rocroi, we collected data about the dispersal of male and female gametes, the dispersal of fertilized embryos, the mode of larval development and the prevalence of hermaphroditism. Our level of analysis was chosen because family-wide information was readily available and because reproductive strategies are fairly conserved at this taxonomic level, i.e. most variation in the reproductive strategies we examine occurs between families. Information was collected from a literature review. For each family, character states were merged into the following binary categories: gametes (dispersing versus non-dispersing), embryos (dispersing versus non-dispersing), and hermaphroditism (present versus absent). Gametes were considered dispersing if either the female or male gametes are expelled (e.g. broadcast, spermatophore). As such, the absence of dispersing gametes implies that copulation is required. For embryos, we combined retained (e.g. brooded by the mother) and attached (i.e. deposited in an egg mass on the substrate) into a non-dispersing category; dispersed (embryos released into the water and dispersed into the pelagic zone) and mixed (multiple strategies exhibited among species within a family) were likewise combined into a dispersing category. Hermaphroditism was combined into present (100% or mixed, i.e. multiple strategies exhibited among species within a family) and absent (0%).
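The trait coercion described above can be sketched as a small lookup. This is a minimal illustration only: the family names and raw trait states below are invented, not records from the paper's dataset.

```python
# Hypothetical raw trait records for a few gastropod families, coerced
# into the binary categories used in the analysis.
RAW = {
    "FamilyA": {"embryo": "brooded",   "hermaphroditism": "0%"},
    "FamilyB": {"embryo": "dispersed", "hermaphroditism": "mixed"},
    "FamilyC": {"embryo": "mixed",     "hermaphroditism": "100%"},
}

def code_embryo(state):
    # retained/attached -> non-dispersing; dispersed/mixed -> dispersing
    return "dispersing" if state in ("dispersed", "mixed") else "non-dispersing"

def code_hermaphroditism(state):
    # 100% or mixed -> present; 0% -> absent
    return "present" if state in ("100%", "mixed") else "absent"

coded = {
    fam: (code_embryo(t["embryo"]), code_hermaphroditism(t["hermaphroditism"]))
    for fam, t in RAW.items()
}
print(coded["FamilyA"])  # ('non-dispersing', 'absent')
```

Collapsing the mixed states into the dispersing/present categories, as the paper does, keeps each trait binary at the cost of treating within-family variation as presence of the rarer strategy.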
We classified gastropod families into four different larval categories: direct (young develop directly into the adult form without a larval phase and typically have limited dispersal potential), planktotrophic (young feed in the plankton during their larval stage and are considered to have longer dispersal potential), lecithotrophic (larvae derive nourishment from yolk, are non-feeding, and are considered to have longer dispersal potential) and a mixed category including families that exhibited a combination of the three above states. Chemical energy available to the gastropods was estimated as POC flux (g of C m−2 yr−1) based on the Lutz et al. model. Temperature data were gathered from the National Oceanographic Data Center (NODC) database. For each family, we quantified the median and standard deviation of carbon flux and temperature over their known latitudinal and depth ranges. To obtain the energy values, each family's biogeographic range was overlaid upon the Lutz et al. model or NODC data. Depth and latitudinal ranges were pulled from Malacolog for the western Atlantic Ocean to estimate the biogeographic range of each family. Data for each family were manipulated using ArcGIS Workstation 10 (Environmental Systems Research Institute, Redlands, CA, USA). We created a geographic information system (GIS) layer for each family's north–south range extent. This was overlaid upon bathymetry data (General Bathymetric Chart of the Oceans 08, 30 arcsecond grid, September 2010 release, www.gebco.org) to limit each family's distribution to its recorded depth range. Binary and binomial regression models were implemented in R using the package MCMCglmm, with uninformative priors and uniformly low levels of belief. Model chains were run for 500 000 iterations with a burn-in of 200 000 iterations and thinning intervals of 100 iterations.
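The paper's models were fitted in R with MCMCglmm. As an illustration of the general run structure (a long chain, a burn-in discarded, every k-th draw retained), here is a minimal random-walk Metropolis sampler for a Bernoulli-logit model in Python. It is a sketch, not the authors' analysis: the data are simulated and the coefficients are invented, and the iteration counts are scaled down from the paper's 500 000/200 000/100.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: binary larval type (1 = planktotrophic, 0 = direct)
# as a function of sqrt-transformed POC flux. Coefficients are
# hypothetical, chosen only for illustration.
n = 200
sqrt_poc = rng.uniform(0, 10, n)
true_beta = np.array([1.0, -0.3])          # intercept, slope
logits = true_beta[0] + true_beta[1] * sqrt_poc
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

def log_post(beta):
    """Log-posterior: Bernoulli-logit likelihood plus a vague normal prior."""
    lp = beta[0] + beta[1] * sqrt_poc
    loglik = np.sum(y * lp - np.log1p(np.exp(lp)))
    logprior = -0.5 * np.sum(beta**2) / 100.0   # N(0, 10^2), weakly informative
    return loglik + logprior

# Random-walk Metropolis, mirroring the run structure at a smaller scale.
n_iter, burn_in, thin = 5000, 2000, 10
beta = np.zeros(2)
current = log_post(beta)
samples = []
for it in range(n_iter):
    prop = beta + rng.normal(0, 0.1, 2)
    cand = log_post(prop)
    if np.log(rng.uniform()) < cand - current:   # accept/reject step
        beta, current = prop, cand
    if it >= burn_in and (it - burn_in) % thin == 0:
        samples.append(beta.copy())
samples = np.array(samples)

print(samples.shape)         # (300, 2): (5000 - 2000) / 10 retained draws
print(samples[:, 1].mean())  # posterior mean slope, negative under these data
```

The same arithmetic applied to the paper's settings gives (500 000 − 200 000) / 100 = 3 000 retained posterior draws per model.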
To evaluate convergence, we assessed the mixing of Markov chain Monte Carlo (MCMC) chains visually and computed formal diagnostics from Geweke and Heidelberger & Welch via the R package 'coda'. For each parameter in our models, we report mean estimates from the posterior distribution along with the 95% credible interval (CI) and the corresponding MCMC p-value. Median energy flux values were square root-transformed prior to analysis to minimize skew and bimodality in the data. Taxonomic Order—from the most current taxonomy for Gastropoda—was included as a random effect to account for the possible effects of shared phylogenetic history in our model. A more explicit estimation of phylogenetic covariance was not possible owing to the current lack of a comprehensive molecular phylogeny for this clade. Some species may exhibit values for POC or temperature that differ from those of their family as a whole, i.e. the environmental values encountered over the entire range of a taxonomic family may not be representative of the environment experienced by a particular species within it. To evaluate the extent to which our results depend on taxonomic sampling level, we explored the distribution of carbon flux values for individual species within a family. Values of chemical energy availability (temperature data were not available) were taken from previous work on the same geographical region as our family-level samples. In this exploration, the carbon flux value for every GIS cell was taken for every individual species within a family. For all the families explored, median values for the family were near the median values based on individual species, and a high correlation (0.67, Spearman's ρ: p < 0.0001) exists between the median for the family and the median value based on individual species. Pairwise χ2 tests indicated that the probability distributions of gamete retention, egg dispersal and hermaphroditism are not independent of each other.
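A pairwise χ² test of independence of the kind described can be sketched as follows. The 2×2 counts are invented for illustration; for a table with one degree of freedom, the chi-square survival function reduces to erfc(√(x/2)), so the standard library suffices.

```python
import math

# Hypothetical 2x2 cross-tabulation of gastropod families: rows are
# gamete type (dispersing / non-dispersing), columns are embryo type
# (dispersing / non-dispersing). Counts are invented for illustration.
table = [[45, 15],
         [20, 60]]

def chi2_independence_2x2(t):
    """Pearson chi-square test of independence for a 2x2 table.
    Returns (chi2 statistic, p-value); with 1 degree of freedom the
    chi-square survival function is erfc(sqrt(x / 2))."""
    row = [sum(r) for r in t]
    col = [t[0][j] + t[1][j] for j in range(2)]
    n = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (t[i][j] - expected) ** 2 / expected
    return chi2, math.erfc(math.sqrt(chi2 / 2))

chi2, p = chi2_independence_2x2(table)
print(p < 0.05)  # True: the two traits are associated in this made-up table
```

A significant result of this kind is what motivates the paper's decision to analyse only hermaphroditism and larval category, since the other traits carry largely redundant information.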
Larval dispersal type is significantly associated with gamete retention (p = 0.03) and egg dispersal (p < 0.001) but not with the presence of hermaphroditism (p = 0.12). Thus, below we analyse only the presence of hermaphroditism and the larval category, as the latter was strongly correlated with gamete and egg type. Compared with direct development, the planktotrophic, lecithotrophic and mixed categories were all less frequent at high levels of carbon flux (figure 1). In all models, mean temperature was excluded (p > 0.05 and CIs include zero, table 1). In a non-taxonomic model, intercepts between larval categories were not significantly different (table 1). A significant change occurred in lecithotrophic versus direct development (p = 0.0433), with lecithotrophic development being less represented at lower levels of median POC flux (table 1 and figure 1). Models including the taxonomic relationships provided better fits than non-taxonomic models (deviance information criterion (DIC) = 274.43 versus 313.38). In the MCMC models that included taxonomic structure, intercepts were not significantly different (table 1). Lecithotrophic development was more frequent at higher productivities than direct development, but not significantly so (p = 0.075). However, planktotrophic development was less frequent at higher productivities than direct development (p = 0.05). In this model, the probability of a given family having planktotrophic larvae versus direct development decreases by 1% for every one-unit increase in the square root of median POC flux. In the non-taxonomic model, the presence of hermaphroditism increases with increasing median POC flux (p = 0.001); this model suggests that the probability of exhibiting hermaphroditism within a family increases by 1% for each one-unit increment in the square root of median POC flux (figure 2 and table 1). However, this model is poorly supported in comparison with a model that incorporates taxonomic relatedness (DIC 215.94 versus 69.75).
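The reported effect size can be read as a multiplicative change in odds per unit of the predictor. A small sketch of that reading, where the intercept and the ~1% odds decline are treated as assumptions rather than the paper's fitted coefficients:

```python
import math

# Hypothetical logistic coefficient for sqrt(POC flux); the value is
# invented, chosen so the odds of planktotrophy (versus direct
# development) fall by about 1% per unit, as the paper reports.
beta = math.log(0.99)  # log odds-ratio per unit of sqrt-flux

def prob_planktotrophic(sqrt_poc, intercept=0.0):
    """Probability under a simple logit model: logit(p) = a + beta * x.
    The intercept of 0 (even odds at zero flux) is an assumption."""
    z = intercept + beta * sqrt_poc
    return 1 / (1 + math.exp(-z))

odds_ratio = math.exp(beta)
print(round(odds_ratio, 2))  # 0.99: each unit multiplies the odds by ~0.99
# The implied probability declines slowly with increasing flux:
print(prob_planktotrophic(0.0) > prob_planktotrophic(25.0))  # True
```

Because the change compounds multiplicatively on the odds scale, a 1% per-unit decline only becomes a substantial difference in probability across a wide span of sqrt-transformed flux values, which is consistent with the paper's emphasis on the extended productivity range contributed by deep-sea families.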
In the latter case, POC flux is no longer significant. After accounting for taxonomic relatedness, energy availability appears to influence only one of the life-history variables considered in our analyses. Specifically, with every one-unit increment in the square root of median POC flux, the probability of having planktotrophic larvae over direct development in gastropods decreases by 1%. Likewise, although marginally non-significant, our results suggest lecithotrophic development may also decrease with increasing productivity. Our findings for larval dispersal contrast with some previous hypotheses and findings. Thorson posited that cold temperatures and limited food would increase larval duration and consequently larval mortality owing to predation, thereby selecting against planktotrophic phases. Our findings of planktotrophic development in colder and more food-limited regions, like the deep sea, do not support this. Similarly, our study is also in contrast to recent findings that planktotrophic development increases with increasing productivity [18,21], but matches those studies where depth was considered a proxy variable for productivity [14,15,22]. One potential reason for this discrepancy is that our study includes a greater range of productivity values owing to the inclusion of gastropod families with deep-sea ranges. Indeed, the greatest rates of change in the presence of dispersing strategies occur across the lowest median POC flux values. Another reason for the discrepancies in these findings may be that larval duration differs as a function of geography itself even when taxonomy and larval type are constant. Our study focuses on the northwest Atlantic, similar to Rex & Waren and Rex et al., whereas Fernandez et al. examine relationships in the southeast Pacific and Marshall et al.'s study is a global meta-analysis.
Worth noting is that Thorson based his hypothesis on findings of decreased presence of planktotrophic development in both the Arctic and Antarctic Oceans, areas he equated with low productivity. He suggested that despite high productivity in surface waters at the poles, little of that production arrived at the seafloor to be consumed by benthic invertebrates (p. 25). However, more recent studies of carbon flux to the seafloor suggest that the pattern is complex and does not follow systematic latitudinal trends. Nonetheless, regions of high latitude often experience elevated carbon fluxes to the seafloor [27,35,36]. This suggests that Thorson's own work may indicate that planktotrophic development occurs with greater frequency in higher productivity regions. Similarly, Thorson indicated that the presence of planktotrophic larvae decreased into the deep sea, a finding that has not been supported in either fishes or gastropods [14,22]. Our results may indicate that low energy availability favours planktotrophic larvae, and possibly lecithotrophic larvae, over direct development, potentially indicating selection for greater dispersal ability. Dispersing larvae may serve as a bet-hedging strategy which ensures that at least some juveniles will find and settle in highly productive patches. Previous work by Rex & Waren demonstrated that in prosobranch gastropods, planktotrophic development increases with depth and the associated decline in food availability. Our expanded analyses with a broader taxonomic scope support this finding. Indeed, the increase of dispersing larvae in the most food-poor (i.e. oligotrophic) regions may reflect source/sink dynamics. Many mollusc populations on the abyssal plains, which are some of the most oligotrophic areas of the oceans, are probably maintained through the continued recruitment of larvae from source populations in more productive ocean regions.
Alternatively, planktotrophic development may be an energetically less expensive strategy. Planktotrophic larvae require less parental investment, as the larvae vertically migrate into surface waters to feed. The ability of larvae to feed themselves allows lower energetic investments from parents during offspring production, and the larval tendency to exploit resources outside of the adult habitat decreases competition between life-stages. By contrast, direct development requires species in low food environments to convert limited energy into non-feeding offspring. This may translate into the caloric requirements of offspring development not being met. Increased energy availability also appears to favour the occurrence of hermaphroditism, in conjunction with gamete retention and egg dispersal, in one of our models. This finding is also counter to earlier suggestions that simultaneous hermaphroditism may allow any two individuals to engage in successful mating and could therefore increase the chances of reproduction when population densities are low, as expected for low energy environments [23,24]. Nevertheless, the possible effect of energy availability on hermaphroditism is poorly supported by the data and appears to be driven by phylogeny. In particular, this pattern mirrors the distribution of nudibranch and opisthobranch gastropods, two predominantly hermaphroditic clades whose density and richness decrease with depth and reduced food availability. In addition, hermaphrodites might be competitively excluded from the deep sea because the production of both sexes is energetically expensive and therefore possibly untenable when resource abundance is low. The increased representation of the nudibranch and opisthobranch orders in higher energy regions is not totally unexpected given their higher standard metabolic requirements compared with other gastropods.
Our findings may also be consistent with the view that reproduction in low energy environments is opportunistic, occurring at aggregations, and that selection in those habitats favours traits that improve an individual's ability to locate and colonize highly localized but spatio-temporally unpredictable resource patches. Highly localized food falls tend to attract large numbers of mollusc individuals [39,40] with highly specific energy requirements [41,42]. Under these conditions, selection for dealing with mate availability, i.e. hermaphroditism, is likely to be minimal because opposite-sex partners may be readily available wherever individuals congregate. Alternatively, hermaphroditism is associated with the brooding of young, found predominantly in small marine invertebrates. Hermaphroditism thus may show reverse predictions if either reduced dispersal or reduced body size is being selected for at higher productivities. Given that size in gastropods increases considerably in areas with greater productivity, it is likely that selection would favour limited dispersal as opposed to reduced size. Overall, our results do not support two hypotheses that predict variance in reproductive strategies along gradients in energy availability. Gastropods may reflect a unique case among invertebrates, as they exhibit certain reproductive strategies more frequently, e.g. simultaneous hermaphroditism and encapsulated brood protection, than is typical of other molluscs or invertebrate phyla. Future studies are needed to test the generality of the patterns we report here for gastropods. Our results indicate that for gastropods, with decreasing productivity, hermaphroditism decreases or shows no pattern when accounting for shared evolutionary history, and the frequency of planktotrophic larvae increases.
Future studies should benefit from a more nuanced phylogenetic hypothesis that allows a better estimate of the effects of differential levels of relatedness within and among Orders and from sampling at a lower taxonomic level (e.g. at the level of species) as more data become available. The full dataset is available on www.datadryad.org. This work was supported by the National Evolutionary Synthesis Center, NESCent (NSF no. EF-0905606). - Received February 17, 2014. - Accepted June 10, 2014. - © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Multiple sclerosis is the most common potential cause of neurological disability in young adults. The disease has two distinct clinical phases, each reflecting a dominant role for separate pathological processes: inflammation drives activity during the relapsing–remitting stage and axon degeneration represents the principal substrate of progressive disability. Recent advances in disease-modifying treatments target only the inflammatory process. They are ineffective in the progressive stage, leaving the science of disease progression unsolved. Here, the requirement is for strategies that promote remyelination and prevent axonal loss. Pathological and experimental studies suggest that these processes are tightly linked, and that remyelination or myelin repair will both restore structure and protect axons. This review considers the basic and clinical biology of remyelination and the potential contribution of stem and precursor cells to enhance and supplement spontaneous remyelination. Multiple sclerosis (MS) is the commonest cause of neurological disability in young adults with a prevalence of approximately 120 per 100 000 and a lifetime risk of 1 in 400 (Compston & Coles 2002). Although the cause of MS is unknown, it is well established that an interplay of genetic and environmental factors results in a multifocal and multiphasic disease defined histologically by inflammatory demyelination, axonal injury, astrocytosis and varying degrees of remyelination. The clinical course in the majority of patients is initially characterized by episodes with complete recovery, followed by relapse with persistent deficits and finally secondary progression, a stage characterized by few, if any, discrete exacerbations. 
The natural history of MS reflects a dominant role for distinct but related pathological processes; thus, inflammation drives activity during the relapsing–remitting stage and axon degeneration becomes more prominent as disability accumulates and the disease starts slowly to progress. The occurrence of spontaneous remyelination has long been recognized, but is limited and seems ultimately to fail, resulting in progressive disability (Prineas & Connell 1979; Confavreux & Vukusic 2006a). Comprehensive treatment strategies must therefore seek both to limit and repair the damage. At present, the modest achievements of disease-modifying treatments target only the inflammatory process and are ineffective in the progressive stage. While further advances in reducing relapse rates are expected, these will leave unsolved the clinical science of disease progression. Here, the requirement is for strategies that promote remyelination and prevent axonal loss. Pathological analyses and experimental studies suggest that these processes are tightly linked, and that remyelination will not only restore structure and nerve conduction, but also prove to be axon protective (Raine & Cross 1989; Kornek et al. 2000; Rodriguez 2003). Remyelination is the reinvesting of new myelin around demyelinated axons. It will be argued that the fundamental importance of this process is due less to the restoration of saltatory conduction—welcome though it is—than to its being the most rational method for protecting axons and thus limiting clinical disability. There is thus a great need for strategies to promote remyelination. Debate around myelin repair can sometimes be reduced to an argument between endogenous and exogenous repair. However, this represents an oversimplification of processes that are not of themselves mutually exclusive.
In this review, which addresses the basic and clinical biology of remyelination or myelin repair, we argue that stem cell-based insights can also contribute to the development of strategies that will both supplement and enhance spontaneous remyelination.

2. Axonal injury and irreversible disability in multiple sclerosis

Over the last decade, the emergence of increasingly sophisticated imaging techniques allied to detailed histological studies has catalysed a resurgence of interest in the contribution of axonal injury to MS. In some ways, this is a revisiting of an old story, given the reports of axonal pathology in early pathological descriptions of the disease (Compston 2006). The focus on inflammation and demyelination had obscured the importance of axonal pathology as a key determinant of disability (De Stefano et al. 1998; Bjartmar et al. 2000). Most patients develop progressive irreversible disability within 10–15 years of disease onset. Clinical, radiological and pathological evidence suggests that irreversible impairment and progressive disability result from exhausting a finite axonal reserve (Davie et al. 1995; De Stefano et al. 1998; Stevenson et al. 1998). Ferguson et al. (1997) and, subsequently, Trapp et al. (1998) provided quantitative evidence for axonal injury in both acute and chronic lesions. Together, these studies strongly suggest that axonal loss is the pathological substrate of progressive disability (figure 1a). The precise mechanism of axonal injury, however, remains largely unknown. Acute axonal injury correlates with active inflammation (Trapp et al. 1998; Kornek et al. 2000; Kuhlmann et al. 2002). These observations have been extended by the demonstration of axonal injury in both non-inflammatory chronic lesions and normal-appearing white matter (Ferguson et al. 1997; Trapp et al. 1998; Bjartmar et al. 2000; Evangelou et al. 2000; Kornek et al. 2000; Lovas et al. 2000).
These (and other) observations suggest that mechanisms that account for chronic axonal loss in MS are independent of inflammation. In turn, this raises the question of what is the interplay between the cardinal clinical features of MS, namely relapses and progression, and their pathological correlates, inflammation and axonal loss. The question remains unanswered despite considerable basic and clinical research activities. The answer(s) will also need to reconcile the weight of epidemiological evidence that argues for redundancy of the phenotypic distinction between relapsing–remitting and secondary progressive MS on reaching the stage of fixed moderate disability (Coles et al. 1999; Kremenchutzky et al. 1999; Confavreux et al. 2000; Confavreux et al. 2003; Confavreux & Vukusic 2006b; Kremenchutzky et al. 2006). Further discussion on the aetiopathogenesis of axonal injury in MS is outside the scope of this review, and the reader is referred to Compston et al. (2006).

3. Remyelination matters: oligodendrocyte signals maintain axonal integrity

Experimental and pathological evidence supports the idea that myelin, in addition to enabling saltatory conduction, has a dynamic and vital role in maintaining axonal homeostasis and integrity; thus, chronically demyelinated axons—devoid of myelin-derived support, consequent on inflammation perhaps—are vulnerable to degeneration (figure 1b; Griffiths et al. 1998; Scherer 1999; Kornek et al. 2000). The influence of oligodendrocytes on axonal calibre and function is well described; oligodendrocytes myelinate axons, increase axonal stability and induce local accumulation and phosphorylation of neurofilaments within the axon (Colello et al. 1994; Sanchez et al. 1996; Witt & Brady 2000).
Neuronal function is further influenced by oligodendrocyte-derived soluble factors that induce sodium channel clustering, necessary for saltatory conduction, along axons and maintain this clustering even in the absence of direct axon–glial contact (Kaplan et al. 1997, 2001; Waxman 2001). Defined factors produced by cultured cells of the oligodendrocyte lineage also support neuronal survival and modulate axonal length by distinct intracellular mechanisms (Wilkins & Compston 2005; Wilkins et al. 2001, 2003). Recent studies on experimental myelin mutant animals have further suggested a role for structural myelin proteins in the maintenance of axonal integrity. Griffiths and co-workers demonstrated a late axonopathy in the absence of inflammation and demyelination in proteolipid (PLP) mutant mice associated with a progressive motor disability. These observations have been extended by the finding in 2′,3′-cyclic nucleotide 3′-phosphodiesterase (CNPase) deficient mice of uncoupling between myelin assembly and axonal support compatible with the idea that oligodendrocyte dysfunction can result in a primary axonopathy (Lappe-Siefke et al. 2003). Along with the finding of axonal pathology in patients with a null mutation for PLP, these data suggest that myelin-derived signalling is necessary for the maintenance of axonal structure (Garbern et al. 2002). The function of such signals is unclear, although recent evidence implicates a role in enabling fast axonal transport (Pera et al. 2003; Edgar et al. 2004). Recognition of the importance of oligodendrocyte signals in maintaining axonal health and the awareness that axonal degeneration occurs by regulated processes independent of apoptosis provide a compelling biological argument that remyelination is an essential part of any therapeutic strategy for MS (Raff et al. 2002).

4. What cells give rise to remyelinating oligodendrocytes?
It is now generally agreed that most, if not all, remyelinating oligodendrocytes arise from a population of adult precursor cells. This view is based on several lines of experimental evidence. First, proliferating cells that are likely to be NG2+ precursors can be labelled, either by injecting a LacZ-expressing retrovirus into normal white matter or with tritiated thymidine or BrdU, and these labelled cells give rise to remyelinating oligodendrocytes following induction of demyelination (Carroll & Jennings 1994; Gensert & Goldman 1997; Horner et al. 2000; Watanabe et al. 2002). Second, precursors isolated from the adult central nervous system (CNS) can remyelinate areas of demyelination following transplantation (Zhang et al. 1999; Windrem et al. 2002). Third, oligodendrocyte precursor cells (OPCs) identified with a range of markers that include the growth factor receptor PDGFRα (Redwine & Armstrong 1998; Di Bello et al. 1999; Sim et al. 2002a; Fancy et al. 2004), the proteoglycan NG2 (Levine & Reynolds 1999; Mason et al. 2000a,b; Watanabe et al. 2002) and the transcription factors MyT1, Nkx2.2, Olig1 and Olig2 (Sim et al. 2002a,b; Fancy et al. 2004; Watanabe et al. 2004) have patterns of expression that are consistent with being the source of new oligodendrocytes, although for few of these markers is there unequivocal evidence that the cells they label become the remyelinating oligodendrocytes. In most situations, these cells are a distinctive phenotype widely referred to as adult OPCs. These cells are the adult descendants of an extensively studied developmental precursor. In adult tissue, these cells have a characteristic multipolar morphology and express several markers, of which NG2 and PDGFRα are the most commonly used (Nishiyama et al. 1996; Dawson et al. 2000, 2003). Apart from their ability to contribute to repair processes, it seems probable that they fulfil a range of normal physiological functions.
Whether OPCs express both markers in all circumstances and in all regions of the adult CNS is uncertain (Hampton et al. 2004); indeed, the extent to which this is a homogeneous population of cells throughout the adult neuraxis is also unresolved. There is now clear evidence that oligodendrocytes can be generated via several distinct lineage pathways and therefore, from a developmental perspective, the progenitor phenotypes are diverse (Mallon et al. 2002; Liu & Rao 2004; Cai et al. 2005; Vallstedt et al. 2005). For example, two distinct populations can be described on the basis of expression of PDGFRα or DM20, an alternatively spliced isoform of the proteolipid protein gene (Spassky et al. 1998, 2000). The extent to which OPCs in the adult CNS retain an imprint of their developmental origin remains to be unequivocally determined. One possibility is that adult OPCs are a homogeneous population of cells having a similar phenotype and responsiveness to environmental signals, despite their varied ontogeny. Alternatively, distinctive types of OPC may exist, either coexisting or being specific to a particular anatomical region. There is some evidence to suggest that this may be the case: in tissue culture, the markers O4 and A2B5 appear to identify distinct populations of adult forebrain OPCs that respond differently to a range and combination of growth factors (Mason & Goldman 2002). This is clearly an important issue to resolve, especially in adult human tissue, if growth factor-based strategies are to be used therapeutically to enhance endogenous remyelination in clinical disease. The evidence that cells other than OPCs contribute to remyelination is limited.
Two studies have demonstrated that when demyelinating lesions are induced in the corpus callosum close to the subventricular zone (SVZ), then neural progenitor cells can be deflected away from their normal path towards the olfactory bulb and into the lesion, where they can contribute to the generation of new oligodendrocytes during remyelination (Nait-Oumesmar et al. 1999; Picard-Riera et al. 2002; Menn et al. 2006). The component of the total remyelination attributable to SVZ-derived cells is uncertain but is likely to be small given the abundance and responsiveness of locally derived OPCs. A further uncertain issue is how close an area of demyelination must be in order for SVZ progenitors to respond. While it is clear that lesions within the adjacent corpus callosum can induce this response, it is improbable that white matter lesions remote from the SVZ in, for example, the spinal cord or brain stem white matter will do so, given that most remyelinating cells are recruited from a narrow region surrounding a lesion (Franklin et al. 1997). In white matter regions remote from the SVZ, there is no clear evidence at present that cells other than OPCs contribute to remyelination.

5. Do endogenous CNS stem cells contribute to remyelination?

If one applies strict criteria to the definition of a stem cell (a multipotent cell, generally attached to a basal lamina, that divides slowly and is both self-renewing and able to give rise to rapidly proliferating progenitor cells by asymmetric division), then true stem cells within the adult mammalian CNS are rare, comprising the GFAP-expressing B cells of the SVZ and, perhaps, their hippocampal equivalents (Doetsch et al. 1999a,b; Sanai et al. 2004; Seri et al. 2004). Recent experimental evidence suggests that SVZ type B cells may contribute to remyelination (Menn et al. 2006). Thus, adult CNS stem cells make a small and anatomically restricted contribution to endogenous remyelination in the adult.
This is similar to other regenerating tissues, where proliferation of the stem cell population is scarcely affected by the sudden demand for new differentiated cells following injury. Instead, this demand is taken up by the transit-amplifying population of progenitors, which, unlike the stem cells from which they are generated, have the proliferative responsiveness to generate the new cells required to repair damaged tissue. Should one regard the OPCs of the adult brain as being stem cells or progenitor cells? OPCs certainly exhibit some stem cell properties: they show multipotency, giving rise to oligodendrocytes, neurons and, at least in vitro, astrocytes (Ffrench-Constant & Raff 1986; Belachew et al. 2003; Kondo & Raff 2000; Nunes et al. 2003; Gaughwin et al. 2006), and have very high levels of telomerase activity allowing them to undergo many rounds of proliferation before becoming senescent (Tang et al. 2001). However, their rapid proliferation, symmetrical division (the daughter cells of OPC proliferation are still OPCs, regardless of whether they subsequently differentiate into oligodendrocytes or not) and absence of a distinct anatomical relationship with a basal lamina are more consistent with their being a transit-amplifying population and, in our view, they are more accurately designated as progenitors rather than stem cells. Indeed, a pertinent question to consider is how similar OPCs are to other multipotent neural progenitor cells within the adult CNS, and whether, perhaps, a generic term of neural progenitor should be more widely applied (Goldman 2003).

6. Why does remyelination fail?

Since the cells responsible for generating new oligodendrocytes are transit-amplifying progenitor cells, can their capacity to proliferate in response to injury become exhausted if repeatedly tested? This question has important implications for understanding why remyelination often fails and how easy it will be to mobilize OPCs therapeutically.
The ability of adult OPCs to repopulate areas from which they are deficient appears to be very robust (Chari & Blakemore 2002). When the same area of CNS is exposed to several rounds of demyelination/remyelination, the number of OPCs is not reduced and the efficiency of remyelination is unimpaired by previous rounds of remyelination (Penderis et al. 2003a). This implies that a failure of remyelination is not due to an exhaustion of OPCs available to repopulate the demyelinated area and give rise to new oligodendrocytes. However, this appears only to be the case if sufficient time is left between demyelinating episodes to allow the OPC numbers to be replenished. OPC numbers do gradually diminish if an area of demyelination is exposed to a continuing demyelinating insult (Ludwin 1980; Mason et al. 2004). However, the interpretation of these long-term experiments in rodents is confounded by ageing, since this process alone can significantly impair the responsiveness of OPCs to demyelination (Sim et al. 2002b), partly due to changes in the signalling environment with ageing and, possibly, also due to intrinsic changes in the responsiveness of aged OPCs (Hinks & Franklin 2000; Decker et al. 2002; Chari et al. 2003). For a more detailed description of the many environmental factors regulating remyelination and how disturbances in their patterns of expression might contribute to remyelination failure, see Franklin (2002).

7. Promoting endogenous remyelination

Since endogenous remyelination occurs spontaneously in MS, sometimes partially but on occasions completely, an obvious therapeutic approach is to promote this naturally occurring repair process in situations where it is inefficient or has failed (Dubois-Dalcq et al. 2005; Lubetzki et al. 2005). Improved understanding of the mechanism of endogenous remyelination and why it fails will enable the development of strategies to promote spontaneous remyelination, as outlined in figure 2.
In truth, while this approach is generally regarded as the preferred long-term means of promoting remyelination, it is currently further away from clinical implementation than the exogenous or transplantation approach, and experimental proofs-of-principle are few. The reasons for this are many, including the difficulty of matching the information gained from experimental models to the clinical analyses. A commonly used inflammation-mediated animal model of MS is experimental autoimmune encephalomyelitis (EAE), created by immunization against specific oligodendrocyte/myelin components. Several studies have reported enhancement of remyelination in EAE models following administration (often systemic) of specific compounds (Komoly et al. 1992; Cannella et al. 1998; Fernandez et al. 2004). However, the significance of these studies and their interpretation are unclear, for reasons including the difficulty of separating effects on reduction of demyelination versus promotion of remyelination, and their relevance to chronically demyelinated MS lesions (which often contain abundant quiescent oligodendrocyte lineage cells that fail to fully differentiate into remyelinating oligodendrocytes) for which remyelination-enhancing therapies are required (Scolding et al. 1998; Wolswijk 1998; Chang et al. 2002). Some of these difficulties can be overcome by using toxin models of demyelination. However, using these models, interventions that either inhibit remyelination or have no effect have proven much easier to achieve than those that promote remyelination (O'Leary et al. 2002; Penderis et al. 2003b; Ibanez et al. 2004; Back et al. 2005). Moreover, systemically delivered agents, whose site and mode of action are generally unknown, have generally proved more effective than those delivered directly into areas of demyelination (O'Leary et al. 2002; Penderis et al. 2003b; Ibanez et al. 2004).
Together, these various data point to a mechanism of remyelination that is both complex and highly redundant, whose failure results from perturbation of a network of factors (the ‘dysregulation hypothesis’; Franklin 2002), and in which single factors often fail to tip a multi-component process towards more efficient working. For example, growth factors, which are potent regulators of OPC biology, often function best in combination with other growth factors or by interaction with integrin-mediated signalling (Baron et al. 2002; Colognato et al. 2002). It is for this reason that single growth factor interventions are unlikely to work. Instead, effective pro-remyelinating factors are likely to be those that trigger cascades of signalling events leading to the creation of a multifaceted pro-remyelination environment (and may not themselves be directly active on oligodendrocyte lineage cells) or are as yet unidentified non-redundant mediators of remyelination. In this regard, the role of ‘stem cells’ as potential cellular vehicles of ‘factors’ (see below) may be of interest. An alternative strategy for promoting remyelination may be to overcome inhibitory factors in lesions preventing it from occurring (Charles et al. 2002; Back et al. 2005). Additional approaches, aside from the identification of non-redundant mediators of remyelination, are to explore empirical approaches, such as human monoclonal antibodies where binding to the oligodendrocyte surface enhances remyelination in several demyelination models (Pavelko et al. 1998; Warrington et al. 2000; Pirko et al. 2004). Another is to bypass redundant extrinsic signalling events and target transcription factor genes critical for the developmental differentiation of multipotent precursors into oligodendrocytes such as Olig1 (Arnett et al. 2004). Olig2 and Nkx2.2 are promising candidates since their expression increases in precursors responding to demyelination (Fancy et al. 2004).

8. Remyelination by exogenous stem/precursor cells

Considerable hope has been invested in the potential of stem cells as vehicles for neurological repair. In MS, expectation has been largely predicated on the potential of stem cells to yield unlimited numbers of defined myelinating cells. The emergence of methods to isolate and neuralize human embryonic stem cells in the last decade has fuelled that expectation. Successful remyelination has been achieved in a wide range of demyelinating models, using a variety of cell types. These have included embryonic- and adult-derived cells of the oligodendrocyte lineage, Schwann cells, olfactory ensheathing cells, and neural precursors (NPCs) and non-NPCs (figure 3; Blakemore & Crang 1988; Franklin & Blakemore 1997; Imaizumi et al. 1998a; Brustle et al. 1999; Keirstead et al. 1999; Zhang et al. 1999; Barnett et al. 2000; Kohama et al. 2001; Mitome et al. 2001; Akiyama et al. 2002). Transplantation-mediated remyelination is effective. The question, however, is less ‘does it work?’ (it does), but rather ‘are experimental, predominantly rodent-based, observations relevant to a multiphasic, multifocal disease with a variable natural history?’ In order to address this question, it is helpful to consider the challenges and thus requirements of any putative pro-myelinating cell. At a minimum, the cell must survive and navigate the pathological host environment to encounter and successfully reinvest demyelinated axons with new myelin. The ability to migrate between lesions would be a welcome addition to the curriculum vitae of such a cell given the multifocal nature of the disease. Other requirements such as sufficient numbers, resistance to endogenous disease and immune rejection are discussed later. The pathological environment, however, is not a fixed target, but variable and determined in part by the clinical phenotype and natural history of the disease.
Recognition of four different patterns of demyelination in MS suggests that subtly different reparative treatments tailored for the distinct pathologies may be necessary (Lassmann et al. 2001). For example, primary progressive MS, characterized in general by less inflammation and earlier and more sustained axonal loss, may require a different approach to the more common relapsing–remitting (RR) phenotype (Lucchinetti & Bruck 2004). Furthermore, the three distinct stages in the evolution of tissue injury attributable to MS—RR, relapsing with persistent deficits and secondary progressive—require different treatment strategies. Traditionally, it has been considered that remyelination strategies are best focussed upon chronic plaques—which are pathologically similar regardless of disease onset—characterized by demyelination, variable amounts of inflammation and gliosis (Lassmann et al. 1998). However, insights from animal models and pathological studies of MS lesions suggest that either earlier, and hence more acute, lesions represent better targets for transplantation-mediated myelin repair, or those chronic ones in which the environment has been altered to more closely resemble that of an acute remyelinating lesion are to be preferred (Hammarberg et al. 2000; Foote & Blakemore 2005; Kotter et al. 2005, 2006; Setzu et al. 2006). These studies provide evidence that inflammation can be beneficial to remyelination.

9. Choice of exogenous stem/precursor cell

Although it is highly improbable that xenografts will ever be contemplated in clinical practice, comparatively few studies have examined the remyelination potential of human-derived cells. Recognition of inter-species differences in the behaviour of NPCs with respect to their capacity to generate oligodendrocytes counsels caution in extrapolating from other systems to human disease; this emphasizes the need for improved understanding of human precursor cell biology (Chandran et al. 2004).
Olfactory ensheathing cells and Schwann cells are presently the only readily accessible and potentially autologous adult human cell populations with well-characterized myelinating potential. Olfactory ensheathing cells are specialized glial cells that are entering phase I clinical studies in spinal cord injury on account of their ability to promote and augment axon growth (Feron et al. 2005). These cells may also have a role in myelin repair, given their ability to remyelinate central demyelinated axons with a Schwann cell-like phenotype (Franklin et al. 1996; Imaizumi et al. 1998b; Barnett et al. 2000; Sasaki et al. 2004). Moreover, unlike Schwann cells, olfactory ensheathing cells appear to be able to migrate and more readily integrate into an astrocytic environment (Lakatos et al. 2000; Lakatos et al. 2003). This is a distinct advantage, given that astrocytic gliosis is prevalent in both acute and chronic MS lesions. In view of the need for scale, human NPCs represent the most plausible source of exogenous central myelinating cells. NPCs can be readily derived from embryonic stem cells and the foetal and adult CNS (figure 3). There are merits and disadvantages associated with selecting any human-derived material. NPCs and cells of more restricted glial and neuronal potential are found in the adult brain. Aside from the degree of phenotypic potential of adult glial progenitors, contingent on environment, it is clear that (viewed collectively) the adult human brain contains a range of NPCs (Kukekov et al. 1999; Arsenijevic et al. 2001; Palmer et al. 2001; Nunes et al. 2003; Sanai et al. 2004). Regardless of origin, the requirement to direct human precursors to the early oligodendrocyte lineage—necessary for remyelination—ex vivo presents a considerable challenge (Smith & Blakemore 2000). For example, foetal NPCs, although readily propagated, cannot be systematically directed to a myelinating phenotype (Murray & Dubois-Dalcq 1997; Quinn et al. 1999; Zhang et al. 
2000; Chandran et al. 2004). Cell surface-based selection methods provide one approach directly to isolate human white matter precursor(s) that possess remyelinating potential (Roy et al. 1999; Windrem et al. 2002, 2004). Although valuable as an experimental resource, the limited availability of foetal and adult human material constrains any widespread clinical application. Furthermore, in addition to there being a limited supply stream, the inability to standardize and predict or define sample(s) in terms of age, region and co-morbidity (for adult biopsy-derived specimens) precludes ready comparison between samples and increases the ‘noise’ of any resulting data. By contrast, human embryonic stem cells offer an alternative means to generate potentially unlimited numbers of defined NPCs and enriched populations of functional oligodendrocytes (Carpenter et al. 2001; Reubinoff et al. 2001; Zhang et al. 2001; Keirstead et al. 2005; Nistor et al. 2005). Notwithstanding advances in the isolation and propagation of human embryonic stem (hES) cells and in neuralization protocols that bring closer the prospect of clinical grade hES-derived NPCs, much remains to be determined with respect to the long-term efficacy of hES-derived precursors (Klimanskaya et al. 2005; Joannides et al. 2006; Ludwig et al. 2006). To date, no long-term studies have demonstrated stable functional integration of hES-derived neural derivatives. Indeed, concerns regarding potential oncogenic complications of hES-derived material have catalysed the study of alternative sources of human NPCs.

10. Non-neural stem cells

Until recently, somatic stem cells were regarded as restricted to regeneration of their tissue of origin. Recent observations have raised the concept of more widespread phenotypic potential of adult somatic stem cells, despite additional explanations for some of the earlier experimental observations (Terada et al. 2002; Ying et al. 2002).
Several lines of evidence focused largely around bone marrow and mesenchymal derivatives (figure 3) suggest that somatic stem cells derived from non-neural tissue may be capable of generating NPCs (Jiang et al. 2002, 2003; Clarke & Frisen 2001; Toma et al. 2001; Joannides et al. 2004). These studies find some support from opportunistic observations on the brains of females receiving bone marrow transplants from males, indicating transdifferentiation rather than fusion as a potential explanation (Weimann et al. 2003; Cogle et al. 2004; Crain et al. 2005). The prospect of an ethically acceptable, readily accessible and potentially autologous source of NPCs is clearly very attractive. However, despite the evidence of remyelination by adult bone marrow-derived cells, there remains a large amount of basic characterization and improved understanding of the mechanism of ‘somatic stem cell plasticity’ to be gained before adult non-neural stem cells can be reasonably contemplated as a reliable source of myelinating cells (Akiyama et al. 2002; Takahashi & Yamanaka 2006). Additional properties of adult stem cells may, however, offer a more immediate and plausible role in remyelination and neuroprotective therapies.

11. Can cellular therapies promote neuroprotection independent of differentiation?

Cellular therapies for MS have until recently been viewed as a cell-replacement strategy to be targeted at site-specific repair. However, recognition of the potential utility of NPCs outside of directed differentiation offers additional therapeutic opportunities. The demonstration that intravenous administration of stem cells leads to delivery throughout the inflammatory neuraxis resulting in axonal protection and functional improvement is of considerable interest (Pluchino et al. 2003, 2005; Zappia et al. 2005).
Specifically, increasing evidence suggests that stem cell-based therapies may be neuroprotective in models of multifocal inflammatory disease, independent of directed differentiation. These studies highlight the immune-modulatory effects of stem cells. Inflammation is not only central to disease pathogenesis, but is also likely to be necessary for optimum remyelination (see below), and the idea that cellular immune-modulation may blunt the former and promote the latter is intriguing. Two cell types have demonstrated efficacy: rodent NPCs (neonatal and adult) and adult mesenchymal stem cells (MSCs; Einstein et al. 2003; Pluchino et al. 2005; Zappia et al. 2005). Pluchino et al. (2003, 2005) have shown that undifferentiated adult NPCs delivered systemically into EAE mice can, contingent on the microenvironment, either differentiate into myelinating cells or, where inflammation predominates, exert a neuroprotective effect by inducing selective apoptotic death of Th1 cells. Expression of integrin and G-protein-coupled receptors permits circulating NPCs to be recruited to the CNS using adhesion and chemokine-mediated homing mechanisms analogous to those used by activated lymphocytes. In this regard, the expression of chemokine receptors by NPCs and MSCs is of interest (Tran et al. 2004; Dar et al. 2005; Honczarenko et al. 2005; Pluchino et al. 2005; Sordi et al. 2005). The mechanism of neuroprotection is uncertain, but evidence suggests that it is mediated, at least in part, by selective NPC-mediated T-cell death. Importantly, in the chronic recurrent EAE model, NPCs resulted in functional improvement and reduced axonal loss (Pluchino et al. 2005). A further study has demonstrated that peripheral delivery of human bone marrow-derived MSCs also promotes functional recovery in EAE mice (Zhang et al. 2005).
It remains to be determined whether MSCs also produce trophic factors that have previously been shown to promote axonal health, which would offer a further mechanism of neuroprotection (Wilkins et al. 2003). The multifocal nature of MS has long been a conceptual barrier to stem cell-based therapies. However, the ability of precursors, delivered systemically or by the intrathecal route, to cross the blood–brain barrier and then exhibit migratory behaviour within the pathological brain suggests that the question of how best to deliver and distribute a novel cellular therapy may be less of an obstacle than once thought (Ben Hur et al. 2003; Pluchino et al. 2003). In addition to the idea of stem cells behaving as cellular immune-modulators, their demonstrated migratory and ‘homing’ behaviour raises the prospect of using neural stem cells as cellular delivery vehicles. Proof-of-concept studies in animal models have exploited the innate tropism of stem cells to target therapy to pathological lesions (Aboody et al. 2000; Benedetti et al. 2000). The molecular basis of homing is uncertain; however, inflammation appears to regulate tropism through the action of various cytokines acting on receptors, including CCR2, CCR3, CCR5 and CXCR4, that are expressed in EAE and MS brains (Kennedy et al. 1998; Simpson et al. 2000; Tran et al. 2004; Honczarenko et al. 2005; Pluchino et al. 2005).
The Holy Grail in MS research is to deliver therapies that limit and repair damage. Despite significant advances in disease-modifying treatments, strategies that enable repair remain elusive. This in part reflects the absence of consensus on the cause and mechanism of disease progression. At the heart of this question lies the role and interrelationship of inflammation and axonal loss. A parallel, though ultimately convergent, question is how axonal loss, the pathological correlate of disability, can be limited.
The intuitive view that remyelination is protective of axons in MS is supported by considerable although largely indirect evidence. Remyelination as a neuroprotective therapy thus appears a reasonable hypothesis. Although therapeutic success will most probably require details of the pathological process that limit remyelination to be further understood, an increasing body of evidence provides grounds for cautious optimism that stem cell-based therapies offer realistic prospects for myelin repair over the next decade. One contribution of 14 to a Theme Issue ‘Stem cells and brain repair’. - © 2007 The Royal Society
German occupation of Estonia during World War II
After Nazi Germany invaded the Soviet Union on June 22, 1941, Army Group North reached Estonia in July. Initially the Germans were perceived by most Estonians as liberators from the USSR and its repressions, having arrived only a week after the first mass deportations from the Baltics. Although hopes were raised for the restoration of the country's independence, it was soon realized that the Germans were but another occupying power. The Germans pillaged the country for their war effort and unleashed the Holocaust in Estonia, during which they and their collaborators murdered tens of thousands of people (including ethnic Estonians, Estonian Jews, Estonian Gypsies, Estonian Russians, Soviet prisoners, Jews from other countries and others). For the duration of the occupation, Estonia was incorporated into the German province of Ostland.
- 1 Occupation
- 2 Political resistance
- 3 Estonians in Nazi German military units
- 4 German administrators
- 5 Collaboration
- 6 Controversies
- 7 See also
- 8 References
- 9 External links
Nazi Germany invaded the Soviet Union on June 22, 1941. Three days later, on June 25, Finland declared herself to once again be in a state of war with the USSR, starting the Continuation War. On July 3, Joseph Stalin made a public statement over the radio calling for a scorched-earth policy in the areas to be abandoned. Because the northernmost areas of the Baltic states were the last to be reached by the Germans, it was here that the Soviet destruction battalions had their most extreme effects. The Estonian forest brothers, numbering about 50,000, inflicted heavy casualties on the remaining Soviets; as many as 4,800 were killed and 14,000 captured. Even though the Germans did not cross the Estonian southern border until July 7–9, Estonian soldiers, who had deserted from Soviet units in large numbers, opened fire on the Red Army as early as June 22.
On that day, a group of forest brothers attacked Soviet trucks on a road in the district of Harju. The Soviet 22nd Rifle Corps was the unit that lost the most men, as a large group of Estonian soldiers and officers deserted from it. Furthermore, the border guards of Soviet Estonia were mostly people who had previously served independent Estonia, and they also escaped to the forests, becoming one of the best groups of Estonian fighters. The Estonian writer Juhan Jaik wrote in 1941: "These days bogs and forests are more populated than farms and fields. The forests and bogs are our territory while the fields and farms are occupied by the enemy [i.e. the Soviets]". The Soviet 8th Army (Major General Ljubovtsev) retreated before the 2nd Corps of the German Army behind the Pärnu River – Emajõgi River line on July 12. As German troops approached Tartu on July 10 and prepared for another battle with the Soviets, they realized that Estonian partisans were already fighting the Soviet troops. The Wehrmacht halted its advance and hung back, leaving the Estonians to do the fighting. The battle of Tartu lasted two weeks and destroyed most of the city. Under the leadership of Friedrich Kurg, the Estonian partisans drove the Soviets out of Tartu on their own. In the meantime, the Soviets had been murdering citizens held in Tartu Prison, killing 192 before the Estonians captured the city. At the end of July the Germans resumed their advance in Estonia, working in tandem with the Estonian forest brothers. Both German troops and Estonian partisans took Narva on August 17 and the Estonian capital Tallinn on August 28. On that day, the Soviet flag shot down earlier on Pikk Hermann was replaced with the Flag of Estonia by Fred Ise. After the Soviets were driven out of Estonia, German troops disarmed all the partisan groups.
The Estonian flag was soon replaced with the flag of Nazi Germany, and the 2,000 Estonian soldiers who took part in the parade in Tartu (July 29) were disbanded. Most Estonians greeted the Germans with relatively open arms and hoped for the restoration of independence. Estonia set up an administration, led by Jüri Uluots, as soon as the Soviet regime retreated and before German troops arrived; the Estonian partisans who drove the Red Army from Tartu made this possible. It was all for nothing, since the Germans had made their own plans, as set out in Generalplan Ost: they disbanded the provisional government, and Estonia became part of the German-occupied "Ostland". A Sicherheitspolizei was established for internal security under the leadership of Ain-Ervin Mere. In April 1941, on the eve of the German invasion, Alfred Rosenberg, Reich Minister for the Occupied Eastern Territories, a Baltic German born and raised in Tallinn, Estonia, laid out his plans for the East. Rosenberg's policy for the future comprised:
- Germanization (Eindeutschung) of the "racially suitable" elements.
- Colonization by Germanic peoples.
- Exile and deportation of undesirable elements.
Rosenberg felt that the "Estonians were the most Germanic out of the people living in the Baltic area, having already reached 50 percent of Germanization through Danish, Swedish and German influence". Estonians deemed unsuitable were to be moved to a region that Rosenberg called "Peipusland" to make room for German colonists. The initial enthusiasm that accompanied the liberation from Soviet occupation quickly waned as a result, and the Germans had limited success in recruiting volunteers. The draft was introduced in 1942, resulting in some 3,400 men fleeing to Finland to fight in the Finnish Army rather than join the Germans. Finnish Infantry Regiment 200 (Estonian: soomepoisid, 'Finnish boys') was formed out of Estonian volunteers in Finland.
With the Allied victory over Germany becoming certain in 1944, the only option for saving Estonia's independence was to stave off a new Soviet invasion until Germany's capitulation. In June 1942, political leaders of Estonia who had survived Soviet repressions held a meeting in Estonia, hidden from the occupying powers, at which the formation of an underground Estonian government and the options for preserving the continuity of the republic were discussed. On January 6, 1943 a meeting was held at the Estonian foreign delegation in Stockholm. It was decided that, in order to preserve the legal continuity of the Republic of Estonia, the last constitutional prime minister, Jüri Uluots, must continue to fulfill his responsibilities as prime minister. In June 1944 the electoral assembly of the Republic of Estonia gathered, in secrecy from the occupying powers, in Tallinn and appointed Jüri Uluots prime minister with the responsibilities of the President. On June 21 Jüri Uluots appointed Otto Tief as deputy prime minister. As the Germans retreated, on September 18, 1944, Jüri Uluots formed a government led by the deputy prime minister, Otto Tief. On September 20 the Nazi German flag on Pikk Hermann was replaced with the tricolour flag of Estonia. On September 22 the Red Army took Tallinn, and the Estonian flag on Pikk Hermann was replaced with the Soviet flag. The Estonian underground government, not officially recognized by either Nazi Germany or the Soviet Union, fled to Stockholm, Sweden, and operated in exile until 1992, when Heinrich Mark, the Prime Minister of the Republic of Estonia in the duties of the President in exile, presented his credentials to the newly elected President of Estonia, Lennart Meri. On February 23, 1989 the flag of the Estonian SSR was lowered on Pikk Hermann; it was replaced with the flag of Estonia to mark Estonian Independence Day on February 24, 1989.
Estonians in Nazi German military units
The annexation of Estonia by the USSR in 1940 was complete but never recognized internationally, except by the Eastern Bloc countries. After the annexation, Estonians were subject to conscription into the Red Army, which is illegal under international law unless Estonia is considered to have been part of the USSR. When the Soviets retreated from Estonia and Germany fully occupied it in the summer of 1941, the Germans continued the practice of dragooning Estonian men, although the majority joined the German Army voluntarily, often out of the desire to fight the USSR, which had made enemies of many groups in Estonian society after introducing its socialist economic system. Until March 1942, drafted Estonians served mostly in rear security duties for Army Group North. On August 28, 1942 the German authorities announced the legal formation of the so-called "Estonian Legion" within the Waffen-SS. Oberführer Franz Augsberger was appointed commander of the legion. By the end of 1942 about 1,280 men had volunteered and entered the training camp. Bataillon Narwa was formed from the first 800 men of the Legion to finish their training at Heidelager, and was sent in April 1943 to join the Division Wiking in Ukraine. They replaced the Finnish Volunteer Battalion, recalled to Finland for political reasons. In March 1943, a partial mobilization was carried out in Estonia, during which 12,000 men were conscripted into the SS. On May 5, 1943 the 3rd Waffen-SS Brigade (Estonian), another fully Estonian unit, was formed and sent to the front near Nevel. By January 1944, the front had been pushed back by the Red Army almost all the way to the former Estonian border.
Jüri Uluots, the last constitutional Prime Minister of the Republic of Estonia and the leader of the Estonian underground government, delivered a radio address on February 7 imploring all able-bodied men born from 1904 through 1923 to report for military service in the SS (before this, Uluots had opposed any German mobilization of Estonians). Following Uluots' address, 38,000 conscripts jammed registration centers. Several thousand Estonians who had volunteered to join the Finnish army were transferred back across the Gulf of Finland to join the newly formed Territorial Defense Force, assigned to defend Estonia against the Soviet advance. The maximum number of Estonians enrolled in Nazi German military units was 70,000. The initial formation of the volunteer Estonian Legion created in 1942 was eventually expanded to become a full-sized conscript division of the Waffen-SS in 1944, the 20th Waffen Grenadier Division of the SS (1st Estonian). Units consisting largely of Estonians, often under German officers, saw action on the Narva line throughout 1944. Many Estonians hoped that by resisting the Soviet reoccupation of their country they would attract support from the Allies and, ultimately, a restoration of their interwar independence. In the end, there was no physical Allied support, largely because the Estonians were fighting under Nazi flags. On February 2, 1944 the advance guard units of the 2nd Shock Army reached the border of Estonia as part of the Kingisepp–Gdov Offensive, which had begun on February 1. Field Marshal Walter Model was appointed commander of the German Army Group North. The Soviet Narva Offensive (15–28 February 1944), led by Soviet General Leonid A. Govorov, commander of the Leningrad Front, commenced. On February 24, Estonian Independence Day, the counterattack of the so-called Estonian Division to break up the Soviet bridgeheads began. A battalion of Estonians led by Rudolf Bruus destroyed a Soviet bridgehead.
Another battalion of Estonians, led by Ain-Ervin Mere, was successful against another bridgehead, at Vaasa-Siivertsi-Vepsaküla. By March 6, this work was complete. The Leningrad Front concentrated nine corps at Narva against seven divisions and one brigade. On March 1, the Soviet Narva Offensive (1–4 March 1944) began in the direction of Auvere. The 658th Eastern Battalion, led by Alfons Rebane, and the 659th Eastern Battalion, commanded by Georg Sooden, were involved in defeating the operation. On March 17, twenty Soviet divisions again unsuccessfully attacked the three divisions at Auvere. On April 7, the leadership of the Red Army ordered its forces onto the defensive. In March the Soviets carried out bombing attacks on the towns of Estonia, including the bombing of Tallinn on March 9. On July 24 the Soviets began the new Narva Offensive (July 1944) in the direction of Auvere. The 1st Battalion (Stubaf Paul Maitla) of the 45th Regiment, led by Harald Riipalu, and the fusiliers (previously "Narva"), under the leadership of Hatuf Hando Ruus, were involved in repelling the attack. Finally, Narva was evacuated and a new front was settled on the Tannenberg Line in the Sinimäed Hills. On the first of August the Finnish government and President Ryti resigned. On the next day, Aleksander Warma, the Estonian Ambassador to Finland (1939–1940 (1944)), announced that the National Committee of the Estonian Republic had sent a telegram requesting that the Estonian volunteer regiment be returned to Estonia fully equipped. On the following day, the Finnish Government received a letter from the Estonians. It had been signed in the name of "all national organizations of Estonia" by Aleksander Warma, Karl Talpak and several others, seconding the request. It was then announced that the regiment would be disbanded and that the volunteers were free to return home.
An agreement had been reached with the Germans, and the Estonians were promised amnesty if they chose to return and fight in the SS. As soon as they landed, the regiment was sent to mount a counter-attack against the Soviet 3rd Baltic Front, which had managed a breakthrough on the Tartu front and was threatening the capital, Tallinn. After an attempt to break through the Tannenberg Line failed, the main struggle was carried to the south of Lake Peipus, where Petseri was taken on August 11 and Võru on August 13. Near Tartu, the 3rd Baltic Front was stopped by Kampfgruppe "Wagner", which comprised military groups sent from Narva under the command of Alfons Rebane and Paul Vent, and the 5th SS Volunteer Sturmbrigade Wallonien led by Léon Degrelle. On August 19, 1944 Jüri Uluots, in a radio broadcast, called for the Red Army to hold back and for a peace agreement to be reached. When Finland left the war on September 4, 1944, in accordance with its peace agreement with the USSR, the defence of the mainland became practically impossible and the German command decided to retreat from Estonia. Resistance against the Soviets continued in the Moonsund Archipelago until November 23, 1944, when the Germans evacuated the Sõrve Peninsula. According to Soviet data, the conquest of the territory of Estonia cost them 126,000 casualties. Some disregard the official figures and argue that a more realistic number is 480,000 for the Battle of Narva alone, considering the intensity of the fighting at the front. On the German side, their own data show 30,000 dead, which some have similarly seen as understated, preferring a minimum of 45,000.
German administrators
In 1941 Estonia was occupied by German troops, and after a brief period of military rule, dependent on the commanders of Army Group North (in the occupied U.S.S.R.), a German civilian administration was established. Estonia was organized as a Generalkommissariat, becoming soon afterwards part of the Reichskommissariat Ostland.
Generalkommissar (subordinated to the Reichskommissar Ostland)
- 1941–1944 SA-Obergruppenführer Karl Sigismund Litzmann (1893)
SS- und Polizeiführer (responsible for internal security and the war against the resistance; directly subordinated to the H.S.S.P.F. of Ostland, not to the Generalkommissar)
- 1941–1944 SS-Oberführer Hinrich Möller (1906–1974)
- 1944 SS-Brigadeführer Walter Schröder (1902–1973)
Responsible for the operation of all concentration camps within the Reichskommissariat Ostland
- SS-Hauptsturmführer Hans Aumeier (1906–1947)
The Estonian Self-Administration (Estonian: Eesti Omavalitsus), also known as the Directorate, was the puppet government set up in Estonia during the occupation of Estonia by Nazi Germany. According to the Estonian International Commission for the Investigation of Crimes Against Humanity:
- Although the Directorate did not have complete freedom of action, it exercised a significant measure of autonomy, within the framework of German policy, political, racial and economic. For example, the Directors exercised their powers pursuant to the laws and regulations of the Republic of Estonia, but only to the extent that these had not been repealed or amended by the German military command.
- 1941–1944 Hjalmar Mäe (1901–1978)
Director for Home Affairs
- 1941–1944 Oskar Angelus (1892–1979)
Directors for Justice
- 1941–1943 Hjalmar Mäe
- 1943–1944 Oskar Öpik
Director for Finance
- 1941–1944 Alfred Wendt (1902)
The process of Jewish settlement in Estonia began in the 19th century, when in 1865 Alexander II of Russia granted Jews the right to enter the region. The creation of the Republic of Estonia in 1918 marked the beginning of a new era for the Jews. Approximately 200 Jews fought in combat for the creation of the Republic of Estonia, and 70 of these men were volunteers. From the very first days of her existence as a state, Estonia showed tolerance towards all the peoples inhabiting her territories.
On 12 February 1925 the Estonian government passed a law pertaining to the cultural autonomy of minority peoples. The Jewish community quickly prepared its application for cultural autonomy. Statistics on Jewish citizens were compiled; they totaled 3,045, fulfilling the minimum requirement of 3,000 for cultural autonomy. In June 1926 the Jewish Cultural Council was elected and Jewish cultural autonomy was declared. Jewish cultural autonomy was of great interest to the global Jewish community. The Jewish National Endowment presented the Estonian government with a certificate of gratitude for this achievement. At the time of the Soviet occupation in 1940, there were approximately 4,000 Estonian Jews. The Jewish Cultural Autonomy was immediately abolished and Jewish cultural institutions were closed down. Many Jews were deported to Siberia along with other Estonians by the Soviets; it is estimated that 350–500 Jews suffered this fate. About three-fourths of Estonian Jewry managed to leave the country during this period. Of the approximately 4,300 Jews in Estonia prior to the war, almost 1,000 were entrapped by the Nazis. Round-ups and killings of Jews began immediately following the arrival of the first German troops in 1941, closely followed by the extermination squad Sonderkommando 1a under Martin Sandberger, part of Einsatzgruppe A led by Walter Stahlecker. Arrests and executions continued as the Germans, with the assistance of local collaborators, advanced through Estonia. Beyond the German forces themselves, some support for anti-Jewish actions apparently existed among an undefined segment of the local collaborators. The standard form used for the cleansing operations was arrest 'because of communist activity'. The equation of Jews with communists evoked a positive response among some Estonians. Estonians often argued that their Jewish colleagues and friends were not communists, and submitted proof of pro-Estonian conduct in the hope of getting them released.
Estonia was declared Judenfrei by the German occupation regime quite early, at the Wannsee Conference. The Jews who had remained in Estonia (921 according to Martin Sandberger, 929 according to Evgenia Goorin-Loov and 963 according to Walter Stahlecker) were killed. Fewer than a dozen Estonian Jews are known to have survived the war in Estonia. The Nazi regime also established 22 concentration and labor camps on occupied Estonian territory for foreign Jews. The largest, Vaivara concentration camp, housed 1,300 prisoners at a time. These prisoners were mainly Jews, with smaller groups of Russians, Dutch and Estonians. Several thousand foreign Jews were killed at the Kalevi-Liiva camp. The four Estonians most responsible for the murders at Kalevi-Liiva were accused at war crimes trials in 1961. Two were later executed, while the Soviet occupation authorities were unable to press charges against two who lived in exile. Seven ethnic Estonians, Ralf Gerrets, Ain-Ervin Mere, Jaan Viik, Juhan Jüriste, Karl Linnas, Aleksander Laak and Ervin Viks, are known to have faced trials for crimes against humanity. Since the re-establishment of Estonian independence, markers have been put in place for the 60th anniversary of the mass executions that were carried out at the Lagedi, Vaivara and Klooga (Kalevi-Liiva) camps in September 1944.
Estonian military units' involvement in crimes against humanity
The Estonian International Commission for the Investigation of Crimes Against Humanity has reviewed the participation of Estonian military units and police battalions in crimes against humanity during World War II. The conclusions of the Commission are available online.
It says that there is evidence of Estonian units' involvement in crimes against humanity and acts of genocide; however, the commission noted: "Given the frequency with which police units changed their personnel, the Commission does not believe that membership in the cited units, or in any specific unit, is, on its own, proof of involvement in crimes. However, those individuals who served in the units during the commission of crimes against humanity are to be held responsible for their own actions."
Views diverge on the history of Estonia during World War II and the occupation by Nazi Germany.
- According to the Estonian point of view, the occupation of Estonia by the Soviet Union lasted five decades, interrupted only by the Nazi invasion of 1941–1944. Estonian representatives at the European Parliament even made a motion for a resolution acknowledging the 48 years of occupation as a fact. The final version of the resolution of the European Parliament, however, only acknowledged Estonia's loss of independence lasting from 1940 to 1991, and that the annexation of Estonia by the Soviet Union was considered illegal by Western democracies.
- The position of the Russian government: Russia has denied that the Soviet Union illegally annexed the Baltic republics of Latvia, Lithuania and Estonia in 1940. The Kremlin's European affairs chief Sergei Yastrzhembsky: "There was no occupation." Russian state officials view the events in Estonia at the end of World War II as liberation from fascism by the Soviet Union.
- Ilmar Haaviste, an Estonian World War II veteran who fought on the German side: "Both regimes were equally evil — there was no difference between the two except that Stalin was more cunning".
- Arnold Meri, an Estonian World War II veteran who fought on the Soviet side: "Estonia's participation in World War II was inevitable. Every Estonian had only one decision to make: whose side to take in that bloody fight — the Nazis' or the anti-Hitler coalition's."
- Viktor Andreyev, a Russian World War II veteran who fought on the Soviet side in Estonia, answering the question "How do you feel being called an 'occupier'?": "Half believe one thing, half believe another. That's in the run of things."
In 2004, controversy regarding the events of World War II in Estonia surrounded the Monument of Lihula. In April 2007, the divergent views on the history of World War II in Estonia centered on the Bronze Soldier of Tallinn.
- 20th Waffen Grenadier Division of the SS (1st Estonian)
- Estonian resistance movement
- Klooga concentration camp
- Reichskommissariat Ostland
- "Conclusions of the Commission". Estonian International Commission for Investigation of Crimes Against Humanity. 1998. Archived from the original on June 29, 2008.
- Chris Bellamy. The Absolute War: Soviet Russia in the Second World War, p. 197. Vintage Books, New York, 2008. ISBN 978-0-375-72471-8
- Lande, Dave. Resistance! Occupied Europe and Its Defiance of Hitler, p. 188. ISBN 0-7603-0745-8
- Chris Bellamy. The Absolute War: Soviet Russia in the Second World War, p. 198. Vintage Books, New York, 2008. ISBN 978-0-375-72471-8
- Buttar, Prit. Between Giants. ISBN 9781780961637.
- Raun, Toivo U. Estonia and the Estonians (Studies of Nationalities). ISBN 0-8179-2852-9
- Chronology at the EIHC
- Mälksoo, Lauri (2000). Professor Uluots, the Estonian Government in Exile and the Continuity of the Republic of Estonia in International Law. Nordic Journal of International Law 69.3, 289–316.
- Mark, Heinrich. "Heinrich Mark" (in Estonian), president.ee, archived from the original on 2007-11-14, retrieved 12 July 2013.
- Estonian Vikings: Estnisches SS-Freiwilligen Bataillon Narwa and Subsequent Units, Eastern Front, 1943–1944.
- Uluots, Jüri. "Jüri Uluots", president.ee, archived from the original on 2007-09-27, retrieved 12 July 2013.
- Lande, Dave. Resistance! Occupied Europe and Its Defiance of Hitler, p.
200, ISBN 0-7603-0745-8
- Estonian State Commission on Examination of Policies of Repression (2005). The White Book: Losses Inflicted on the Estonian Nation by Occupation Regimes, 1940–1991 (PDF). Estonian Encyclopedia Publishers. Retrieved 2009-06-25.
- Smith, Graham. The Baltic States: The National Self-Determination of Estonia, Latvia and Lithuania, p. 91. ISBN 0-312-16192-1
- Aleksander Warma, president.ee
- Mart Laar (2006). Sinimäed 1944: II maailmasõja lahingud Kirde-Eestis (Sinimäed Hills 1944: Battles of World War II in Northeast Estonia) (in Estonian). Tallinn: Varrak.
- Hannes, Walter. "Estonia in World War II". Historical Text Archive. Retrieved 2008-10-21.
- Conclusions of the Estonian International Commission for the Investigation of Crimes Against Humanity, archived June 21, 2007, at the Wayback Machine — Phase II: The German occupation of Estonia in 1941–1944, archived June 29, 2007, at the Wayback Machine.
- "Estonia", The Virtual Jewish History Tour, retrieved 2009-03-11
- Weiss-Wendt, Anton (1998). The Soviet Occupation of Estonia in 1940–41 and the Jews. Holocaust and Genocide Studies 12.2, 308–25.
- Berg, Eiki (1994). The Peculiarities of Jewish Settlement in Estonia. GeoJournal 33.4, 465–70.
- The Holocaust in the Baltics
- Birn, Ruth Bettina (2001). Collaboration with Nazi Germany in Eastern Europe: the Case of the Estonian Security Police. Contemporary European History 10.2, 181–98.
- Museum of Tolerance Multimedia Learning Center, Wiesenthal, archived from the original on 2007-09-28
- Communism and Crimes against Humanity in the Baltic States
- Holocaust Markers, Estonia, Heritage Abroad
- Gilbert, Sir Martin. The Righteous: The Unsung Heroes of the Holocaust, p. 31. ISBN 0-8050-6260-2
- Conclusions of the Estonian International Commission for the Investigation of Crimes Against Humanity, archived June 21, 2007, at the Wayback Machine.
- Moscow celebrations, archived September 29, 2007, at the Wayback Machine, at newsfromrussia
- Motion for a resolution on the Situation in Estonia, 2007-05-21, retrieved 2010-03-05: "Estonia, as an independent Member State of the EU and NATO, has the sovereign right to assess its recent tragic past, starting with the loss of independence as a result of the Hitler-Stalin Pact of 1939 and including three years under Hitler's occupation and terror, as well as 48 years under Soviet occupation and terror"
- European Parliament resolution of 24 May 2007 on Estonia, 2007-05-24, retrieved 2010-03-05: "Estonia, as an independent Member State of the EU and NATO, has the sovereign right to assess its recent tragic past, starting with the loss of independence resulting from the Hitler-Stalin Pact of 1939 and ending only in 1991"; "the Soviet occupation and annexation of the Baltic States was never recognised as legal by the Western democracies"
- Russia denies Baltic 'occupation', BBC, May 5, 2005, retrieved May 20, 2010
- Booth, Jenny (April 27, 2007). Russia threatens Estonia over removal of Red Army statue. London: The Times. Retrieved May 20, 2010.
- When giants fought in Estonia, BBC, May 9, 2007, retrieved May 20, 2010
- Estonian SS-Legion (photographs)
- Hjalmar Mäe
- Hjalmar Mäe (photograph)
- Saksa okupatsioon Eestis (German occupation in Estonia)
- Weiss-Wendt, Anton (2003). Extermination of the Gypsies in Estonia during World War II: Popular Images and Official Policies. Holocaust and Genocide Studies 17.1, 31–61.
A Few of Our Biographies: Ask the Editors This is Part Four of a Five-Part Article "Ask the Editors" appeared in each issue of The History Channel Magazine from 2003 to 2007 with answers to readers' questions about history. A predecessor of "Ask the Editors" called "Fact or Fiction" ran in Biography magazine during its life as a mass-circulation monthly, 1997 to 2003. Q. I have a question about the inspiring effort to protect Indians conducted by Catholic friar Bartolome de Las Casas in the 1500s. This led to enlightened laws in Spanish America forbidding their enslavement. Slavery was of course widespread in America. Indians in other parts of the country lacked brave advocates such as Las Casas. Just from a cold-blooded economic point of view, why weren't these Indians enslaved? Wouldn't this have made sense to the whites, rather than their importing slaves from overseas? A. Many historians have sought new insights into slavery in recent years - it's a hot topic in college history departments. Ira Berlin of the University of Maryland, author of "Many Thousands Gone: The First Two Centuries of Slavery in North America" (1998), supplied the following answer in an interview. Many Native Americans were enslaved in the 1500s and 1600s during the early years of European settlement of this continent (as were black Africans). New Englanders were very active as Indian slavers. Out west, French Canadians snatched Indians from the foothills of the Rocky Mountains and exported them to Martinique. And so on. "A nasty business," notes Berlin. In the 1700s, America experienced explosive growth in a specific venue of slave labor: plantations. The "plantation revolution," as Berlin calls it, was one of the key economic stories in the American Colonies in these years. These large tracts of land grew tobacco and rice. (Big cotton plantations developed a bit later, after Eli Whitney's invention of the modern cotton gin in 1793.)
Plantation owners possessed a "nearly insatiable" appetite for slave labor and not enough Indians were available. Berlin cites the situation in the Chesapeake Bay region as an example: "Planters enslaved Indians where they could....(but) the Native-American population was dwindling fast at the end of the seventeenth century, so Africans became the object of the planters’ desire." The African trade exploded and became a dominant fact of American life. Q. I’ve been trying to recall what the Chrysler concept car looked like that was shown at, I believe, the Paris Auto Show, and then lost at sea in 1956 with the sinking of the Andrea Doria. Can you provide background information on the car and on the accident? A. Automakers have long built "concept cars," also known as "idea cars," "show cars," and "dream cars," to test new design and engineering ideas, stimulate creativity, and generate buzz. In the 1950s, Chrysler Corp. contracted with the Italian firm Ghia to build such vehicles, one of which was the Chrysler Norseman, which sank in 1956 with the Italian ocean liner Andrea Doria. The Norseman, built over the course of 15 months at Ghia's plant in Turin, featured a fastback design and an overhanging roof that didn't rest on pillars but was attached to the glass of the windshield. The car was not shown at the Paris auto show. Upon its completion in Italy, it was dispatched to the U.S. aboard the pride of the Italian fleet. The Andrea Doria was one of the largest, most luxurious, and most beautiful passenger ships in the world – indeed, it was one of the most gorgeous ocean liners ever built - 700 feet long with sweeping lines, lovely decor, and expensive artwork.
However, it had design problems affecting its stability and seaworthiness; these flaws contributed to severe listing after the collision of July 25, 1956. While steaming west in the foggy North Atlantic late that evening, the ship collided with the east-bound Swedish liner Stockholm about 50 miles southeast of Nantucket Island and 200 miles east of New York City. Fifty-one people were killed in the accident; hundreds needed rescue. Many crew members of the Andrea Doria were among the first to abandon ship (a contravention of proud maritime tradition); their cowardly absence contributed to widespread panic during the rescue. Many injuries resulted. Passengers dropped children into lifeboats, though this was not necessary. Norma Di Sandro, age four, died in a Boston hospital after being dropped. The Stockholm suffered a crumpled forward section and eventually limped into New York harbor, while the Andrea Doria slipped beneath the waves on the morning of July 26 after an 11-hour wallow. The ship rests today in 235 feet of cold, dark, swirling, shark-infested water. Life magazine covered the disaster, including a piece by Walter Lord, author of "A Night to Remember" about the Titanic. According to one rumor, the Chrysler Norseman was crated in a vacuum-sealed canister during the trans-Atlantic voyage, which would suggest interesting possibilities for salvage experts, but this story has never been verified, and seems "unlikely" to Bruce R. Thomas, historian for DaimlerChrysler. David W. Temple of Car Collector magazine writes, "No one is absolutely sure of the (transport) method employed" – special canister, plain box, or wooden pallet. Also, no one knows the exact spot where the vehicle was stored. Supposedly, quantities of cash and jewels also went down with the ship, but this, too, remains unverified. The wreck has attracted attention from divers searching for treasure and thrills; it's regarded as "the Mt.
Everest of diving," according to the book "Deep Descent: Adventure and Death Diving the Andrea Doria" by Kevin F. McMurray. Several explorers have been killed in the rubble-strewn confines of the vessel. One of the best examinations of the Andrea Doria disaster is the book "Collision Course" by Alvin Moscow. Q. What is the origin of the military expression "five-by-five" to indicate clear radio communications? For example, "I am reading you five-by-five, over." A. The phrase apparently dates from the early days of radio when operators wanted to indicate the quality of transmission on a scale of one to five. "Five-by-five" denotes "loud and clear" – the first "five" refers to the strength of the transmission ("loud") while the second refers to intelligibility ("clear"). "One-by-one" would indicate that the transmission is essentially inaudible. "Ask the Editors" has received a couple of other questions recently about the origins of words and phrases. One questioner asks about the phrase "the whole nine yards" and says he read on the Internet that the phrase comes from the length of machine gun ammunition belts during World War II. This explanation is probably not true, says David Wilton in "Word Myths" (2004) because quantities of ammo are not counted by belt length but by number of rounds or by weight. The phrase "the whole nine yards" may have arisen in the 1960s and is definitely American, Wilton says, but its exact origin is a mystery. According to one theory, the phrase refers to the amount of liquid concrete that can be carried by a typical concrete truck. Another idea is that it connects to the amount of dirt in a burial plot. Still another theory says it's connected to football - perhaps some football coach wanted ten yards for a first down, got nine, and made a joke about it.
Another questioner asks about the origin of the word "doughboy," a popular term for American troops in Europe during World War I. The word’s derivation is murky but it seems to date to the 19th century. One possible source is the buttons on uniforms worn during the American Civil War and earlier; these resembled dumplings that were known as doughboys. Q. Why is the Battle of Breed’s Hill called the Battle of Bunker Hill? At which locale is the monument to the battle located? A. Ferocious fighting erupted in Charlestown, across the river from Boston, on June 17, 1775, during the American Revolution, with most of the shooting taking place on Breed’s Hill, located about one-half mile from Bunker Hill. But the battle took the name of the latter location. So what’s the deal? The original intent of American officers, as they prepared for combat that day, was to fortify Bunker Hill and make a stand there. However, the plan changed in the heat of events, and the focal point of activity suddenly shifted to Breed's Hill. The Bunker Hill name stuck as a designation for the day’s conflict, maybe because "Bunker Hill" was written on planning documents and marching orders. A 221-foot obelisk commemorating the Battle of Bunker Hill is located on Breed’s Hill. Americans lost the Battle of Bunker Hill but proved themselves worthy fighters, surprising the British in this regard. Among the American dead was Joseph Warren, a doctor and activist who held the rank of general but fought this battle as a volunteer private. Among his last words were, “Tell me where the assault will be most furious.” Informed of the spot, he went there, and died for American independence. The most famous quote to emerge from the battle is, of course, “Don’t fire until you see the whites of their eyes!”, which was supposedly said by either William Prescott or Israel Putnam. In fact, it may not have been said by either man.
If it was said, it derives not so much from an eagerness to engage in close combat as from the fact that muskets of the day were quite inaccurate. Q. I know Peyton Randolph was our first true president. I also know that other men held the job prior to George Washington. Can you provide a list of names and dates? A. “First true president” is a debatable phrase. Peyton Randolph served as President of the Continental Congress of the United Colonies during two separate periods in 1774 and 1775. A number of other men, including John Hancock, held the post in the years leading up to the writing of the U.S. Constitution in 1787. Historian Stanley L. Klos provides a full list of names and dates in his book “President Who? Forgotten Founders.” Klos says that Samuel Huntington is the “first true U.S. President.” Huntington was President of the United States in Congress Assembled, under the Articles of Confederation, from March 1, 1781, to July 6, 1781. His powers were limited. The first President of the United States of America, as the term “United States of America” is best understood, was none other than George Washington. Q. Are Liberty ships the same as Victory ships? A. No. The two types share certain characteristics, but their engines are distinctly different. Both were active during the Second World War as cargo freighters, running deadly gauntlets of German submarines to deliver war materiel and foodstuffs to Europe. Both are about 440 feet long, and their carrying capacities are roughly the same. Liberty ships use triple-expansion reciprocating engines – rugged, simple, not terribly powerful, and capable of being manufactured by any good-sized foundry. Victory ships use steam turbine engines, which are more complex, capable of generating more power, and harder to build. Top speed for Liberty ships was 10 to 12 knots, and for Victory ships 16 to 20 knots.
"You could take an Iowa farmhand and teach him in three or four days how to run a triple-expansion engine," says Chet Robbins, administrative director for the National Liberty Ship Memorial, based in San Francisco. "Teaching him to run a steam turbine took months." A total of 2,710 Liberty ships and 537 Victory ships were built, many by construction wizard Henry J. Kaiser. Most of the vessels were operated by the sailors of the U.S. Merchant Marine, not by the U.S. Navy (a few Liberty ships were sailed by the Navy). According to historian Douglas Botting, the two styles of ship "proved to be the answer to Germany’s U-boats" because they were assembled "faster than the submarines could sink them." Two Liberty ships are available today for public viewing, in San Francisco and Baltimore. Two Victory ships are in operational condition, in San Pedro, Calif., and Tampa, while another Victory ship is being restored in Richmond, Calif. By the way, the engine room scenes in the film "Titanic" (1997) were filmed on the SS Jeremiah O’Brien, the Liberty ship in San Francisco. Q. I remember seeing photos of Lyndon Johnson with very long hair in the years after he left office. Is my memory correct? A. Your memory is correct. Johnson, who left the White House in 1969, died in 1973; in the last years of his life, he let his white hair grow, as photos from 1972 attest. The late ’60s and early ’70s were the height of the hippie period in American culture, when a fair number of young men had extremely long hair. Perhaps LBJ was trying to show a certain empathy for long-hairs, even though they voiced some of the harshest criticism of his Vietnam policies. He may have been saying, “I, too, am a rebel at heart, and an idealist - I know where you’re coming from.” Q. Do you know of any historic sites that welcome "vacationing volunteers,” where we can live in that moment of history, even if just for a short while? A. We can suggest a few resources and ideas for your search.
One of the best books on this topic is “The Back Door Guide to Short-Term Job Adventures: Internships, Summer Jobs, Seasonal Work, Volunteer Vacations, and Transitions Abroad” by Michael Landes. In an interview, Landes suggests calling a historic site that you’re interested in and seeing if they would be willing to take you on for a few days, if only to rake leaves and trim the hedges. He comments, “Direct communication and idea brainstorming with a specific site may prove to be the best option!” Two additional good books are "The International Directory of Voluntary Work" by Louise Whetter and Victoria Pybus and “Volunteer Vacations: Short-Term Adventures That Will Benefit You and Others” by Bill McMillon, Doug Cutchins, and Anne Geissinger. The National Park Service, which oversees many battlefields and other historic sites, has an outstanding Volunteers-in-Parks program; see nps.gov/volunteer. The American Hiking Society has one-week volunteer vacations; information is available at americanhiking.org/get-involved/. The Appalachian Trail Conservancy has one- to six-week volunteer programs; consult appalachiantrail.org. As an alternative, you might be interested in a field seminar or educational program such as the ones conducted by the Yellowstone Association (and other groups). Consult the Website yellowstoneassociation.org. Q. I recently traveled by cruise ship to Bermuda, the British crown colony located off North Carolina. We heard the following story. Gen. George Washington sent a letter in 1775 to Bermuda requesting gunpowder for the American army. Bermudians sent the powder, and Washington promised that if America could ever return the favor, Bermudians only needed to ask.
In the 1980s, Bermuda asked President Ronald Reagan to grant special privileges to certain financial companies based there, in the hopes of smoothing their business dealings in the U.S. This request was instantly granted because of Washington’s promise. True? A. Here are the facts, supplied by Andrew P. Bermingham, president of the Bermuda Historical Society: Washington sent a letter to Bermuda in 1775 seeking gunpowder, addressed to supporters of the American Revolution. In August of 1775, while Washington’s letter was en route, a group of Bermudians raided a powder magazine, stole kegs of the precious material, and delivered the goods to American ships. They may have been motivated, in part, by an interest in swapping powder for food – their homeland was blockaded by the Americans. By the time Washington’s letter arrived in Bermuda, powder was already on its way to the colonies. No business dispensation was granted Bermuda in the 1980s by the American government based on a 200-year-old promise by George Washington. That said, Bermuda has, in recent decades, developed a high-end financial industry with ties to the U.S. Q. My great-grandfather served as an “artificer” during the Civil War. Can you describe this rank? A. “Artificer” means “skilled worker”; during the Civil War, this designation was generally given to blacksmiths who repaired cannons and other items for artillery units. Artificers also did small-scale manufacturing of equipment. These men are “overlooked by the annals of history” according to one author. (Until now!) The Civil War artificer typically held the rank of private and earned $15 per month, compared to a wage of $50 a month for a first lieutenant and $95 for a colonel. The artificer’s base of operations was his “battery forge,” a mobile workshop located well behind the lines – a special wagon holding an anvil, bellows, forge, and other gear. Artificers frequently had skill with wood, leather, and other materials. Q. 
I have read many stories about feats of courage and patriotism during World War II but I’ve never seen anything about the actions of supply units in the various services. Can you suggest any books about “beyond the call of duty” actions by these people? I was a storekeeper during the war in the South Pacific and have a special interest in this topic. A. According to an old military adage, amateurs discuss battle tactics while professionals focus on logistics and supply, i.e., "the practical art of moving armies" (and navies) in the phrase of French warrior and historian Jomini - the demanding art of transporting combat troops and providing them with food, clothing, shelter, munitions, and health care. A reviewer at Amazon.com, Mike Baum, summarizes: "Waging war is never merely about raising an army and fighting an enemy; it's also about getting to the enemy without dying of dehydration and malnourishment along the way." Unfortunately, notes military author Jay Karamales, there’s a "sad dearth" of good history books on logistics - it's not a topic that thrills publishers of books for the general reading public. One solid volume is “The Road to Victory: The Untold Story of World War II’s Red Ball Express” by David P. Colley (2001), which describes the U.S. Army’s three-month trucking campaign to equip men racing across Europe in 1944-45, including Patton’s Third Army, which traveled so fast that it outdistanced its supply lines. (See the 1970 film “Patton” for a powerful re-creation of the general's grand push.) The 1952 movie “Red Ball Express” is based on this effort and is pretty good. A number of academic books have been published on the topic. A key work, using history as a foundation, is "Supplying War: Logistics From Wallenstein to Patton" by the excellent scholar Martin van Creveld (1977, second edition 2004). 
An important critique of "Supplying War" was published in 1994 by John Lynn: "Feeding Mars: Logistics in Western Warfare From the Middle Ages to the Present." For a good-humored look at Navy supply efforts during the Second World War, check out the 1955 film “Mister Roberts,” which has the extraordinary cast of Henry Fonda, James Cagney, William Powell, and Jack Lemmon. A recent “Ask the Editors” (Jan./Feb. ’07) discussed the efforts of the U.S. Merchant Marine during World War II. The May-June ’07 issue of this magazine offered a feature story on Liberty ships. Another chapter of the logistics story, from World War II’s Eastern front, is told by Harrison Salisbury in his book “The 900 Days.” See also the outstanding "Alexander the Great and the Logistics of the Macedonian Army" by Donald W. Engels (1980).
Deadly Flies, Deadly Methods: Fishing Chironomids Chironomid fishing may be a lake angler’s ultimate weapon of mass destruction. Think of a fearsome “dirty” bomb. A Dirty Bomb is one made of conventional explosives and radioactive isotopes. Upon detonation, the radioactivity is spread over a huge area, creating a massive kill zone. When it comes to cataclysmic fish-catching Armageddon, an angler who masters “fishing the noid” can rule a lake. As you might remember from the chapter on aquatic organisms, the chironomid is a member of class Insecta (insects), order Diptera (flies), and family Chironomidae (midges). So, “midge” is a scientific, taxonomical name, not the generic term used by many people when describing “a tiny little bug I saw near the water that I could not possibly identify.” Throughout this book, I will use the terms midge and chironomid interchangeably. If I use the word “noid”, this also refers to a chironomid. I have several fishing friends who stumble when trying to pronounce chironomid. It comes out as “chirominoid”, which my other companions and I laughingly shortened to “noid”, a simplified bastardization of the original word. The term has stuck with us over the years, and evokes a smile every time it is used. Perhaps the most deceptive thing about midges and the reason too many fly anglers have ignored them is the fact that many are quite small. Fishermen may assume that such a tiny morsel could not possibly interest a big fish. When I try to convey to my students why indeed midges do interest big fish, I ask them if they like popcorn. Of course, just about everyone does. A single piece of popcorn to a human might be the same size equivalent of a midge to a fish, even a large fish. Next, I enquire: “Do you sit down with a bowl of freshly-popped corn and only eat one or two kernels?” Heck no. The snacker will consume hundreds, if not a thousand, pieces of popcorn. 
It then follows that a fish will snack on many, many chironomids during a single feeding session. In order to be consistently effective, lake anglers cannot possibly ignore midges. Midges are one of the few food sources available to stillwater fish year-round. Through stomach content samples taken from lake and pond fish throughout the calendar year, there are some days that stomach samples contain only midge larvae and pupae. It is not much of a stretch to assume that without the presence and availability of midges, some fish might not survive in their environment. As you will recall from an earlier chapter, chironomids (midges) have a four-stage, or complete, life cycle. For the fly angler, three of the four stages are important to imitate: larva, pupa, and adult. And, because midges can be active and hatching year-round, even during the dead of winter, they are definitely one of the go-to flies any time you launch your boat on a lake or pond. As I write this chapter, February 2011, I took advantage of mild weather to do some pond fishing for trout. My plan was to check the fish activity on three different ponds. With a little luck, fish would be willing to bite in at least one of the three. When I arrived early afternoon at Pond #1, I found its surface like glass, not a hint of a breeze. A quick scan revealed no rising fish. Water clarity was reasonable, with the visibility into the depths at about two feet. The water temperature was typical for western Oregon in mid winter, about 47 degrees Fahrenheit. Examining the surface for insects or vacated midge pupae skins revealed nothing. To make good use of my time and wait for the fish and insects to awaken, I began to stage some of my many fly boxes and gear for photographs I needed. There was a high overcast with the sun feebly trying to break through. The light was very good for the shots I wanted. As I went about my sorting and set up, I occasionally surveyed the pond for insects or rises. 
The quiet allowed me to listen for rises when my eyes were busy elsewhere. Shortly, I heard the familiar splash of a trout on the surface. I looked up to see an expanding ring, just out of casting range. Within a few minutes, there was another. Hmmmmm. Since there were a few cruisers willing to come to the top, I readied my floating line rod. On the leader, I secured a strike indicator with two chironomid pupae flies, a dropper fly at three feet under the indicator, and the other twenty inches below the dropper. To add some visual attractiveness to my offerings, I selected flies with glitzy beads at the heads --- one pearl orange, the other red. These flies were part impressionistic (sized and shaped as chironomids) and part attractor (color and shine). In limited visibility situations, these had served me well in the past. I must admit that my focus was divided, which limited my fishing success. I made a few casts, then arranged my inanimate subjects and took photos. When my indicator went down on my first strike, I cleanly missed the hook up. When the second strike came twenty minutes later, I set the hook solidly into a nine-inch rainbow. The size of the fish was totally irrelevant to me. The important thing to me was actually fooling a fish. It took a silver-bodied fly with the orange bead. The rises at Pond #1 were sporadic and spread out over a large area, but more frequent by mid afternoon. This is logical since it was the warmest part of the day. Occasionally now I saw an airborne midge. It was these, I think, that encouraged a rise here and there. I was fascinated at how the rises were consistently beyond casting range. Since I was fishing on foot, no float tube, I could not get my flies to the Red Zone. I moved my casting positions to various points around the pond, managing to get one more strike, which I missed. Time to move on. The conditions at Pond #2, about a mile away from the first, were exactly the same. 
I stayed with the same set up, two flies not too far below an indicator. As I walked toward the little bay I wanted to fish, I scanned the water for risers and insects. Nothing. Standing on a shoreline tree stump, I was in a position to cast to the water that had been productive for me previous springs and summers. My expectations were low, so any action would be a bonus. Lucky for me, my wait was a short one. Within ten minutes of my arrival, my indicator went down hard. As I raised my rod tip, I felt the solid resistance of a worthy trout. After an admirable struggle by the fish, I slid the rainbow into the shallows for a look and a quick release. It had fallen for the red fly. Good choice. As I continued to fan my casts out in different directions, I found a scenic photo opportunity where the stumps in the lake and the trees were reflected in the pond’s mirror surface. Being right-handed, I switched my rod to my left hand, to hold the camera in the favored one. As I raised the camera to look through the viewfinder, I simultaneously saw my indicator twitch. Instinctively, I raised the rod tip to feel that familiar thrashing resistance every angler loves. After a fine accounting of itself, actually pulling line from my fly reel several times during the battle, I soon slid a solidly built fourteen-inch rainbow into the shallows for quick inspection and praise. After a few quick photos --- which did not steal its spirit --- I sent the fish back to his home. During the next twenty-five minutes, I got one more strike. Missed it. It was getting late into the afternoon, so it was time to go to my last destination. Pond #3 was low. It is fed by many springs, so there is no shortage of water. The apparent problem was that an excessive amount of water had breached part of the earthen berm that held water in the pond, adding to its depth. Wandering the shoreline, I located the area of deepest water . . . and some rising fish! 
The dimming light of evening was fast approaching as I made my first cast. As usual, the cruising risers were just out of reach, so I was never really able to put a cast onto the ring of a rise, but a few times I got close enough. During my brief stay, I got three strikes, landing one small rainbow on the red fly. As I headed to my car for my journey home, a few fish were still rising . . . just beyond casting range. In reflecting on my day to better insure fishing success on a future trip to these ponds, I wished I had experimented more with fly patterns and depths at which they were fished. I never cast either of the two rods I had with me with sinking lines on them. There were locations at each pond I wished I had had time to fish. Additionally, I wonder if pursuing the fish in a float tube would have translated into more strikes. Probably. One thing I noted was that strikes came within a minute of the cast. When I let the flies rest stationary for two to four minutes, the strike never came. With the in-water visibility limit of about two feet, I surmised that the flies had to luckily land near a cruising fish. There was a chance that the gentle splash down of the flies and indicator drew the attention of the fish. I have certainly seen this reaction on excellent visibility days when I can watch cruising trout as I cast to them. It logically followed that I did not let my flies rest for more than a minute after the cast during the latter half of my fishing day. Though I had some success, I know I can do better. I should have had two more floating line chironomid rods rigged and ready to cast before I went fishing. My float tube will be with me next time, as will my depth finder / fish locator. Rather than take fish photos, I will use my stomach pump on them. Lastly, I will start a bit earlier. Review. Analyze. Plan. Go fishing. If ever there was a commercial fly-tier’s “dream fly” as a money-maker, it has to be midge larvae patterns. 
A skilled tier may be able to knock out a fly in forty-five seconds. If such a fly wholesaled at $10 / dozen, that translates into $50 / hour. Not bad. If you are a beginning fly-tier, this is an easy fly to start with that will catch fish . . . assuming you know the basics of how to fish it. Chironomid larvae can be best described as “skinny little worms” that live in the mud, detritus, and vegetation on the lake bottom. Those residing in burrows or tubes in the substrate will leave these temporarily to feed on decomposing plant material or disintegrating organic matter. As they forage, fish will dine on the unprotected larvae. Other free-living, roaming sorts not residing in mud tubes are available to the fish on a continual basis. It is logical, then, to fish chironomid larval imitations along the bottom. Though the shallower waters along the edges of the pond or lake can be very good locations to fish larvae flies, there are times that the fish may have retreated to deeper water; midge larvae will be there. In fact, larvae may be found in water fifty feet deep. This is much deeper than most fly anglers are willing to fish. Count me among this group. Down-riggers and eight-ounce weights are too much for any fly rod I own, so I will max out my pursuits at about thirty feet. Though the larvae wriggle, it’s a stretch to call it swimming. So, when fishing a midge larva fly get it near the bottom, and fish it with little or no movement. You can troll it as slow as possible for a short distance, then stop to let the fly settle and sit. If you choose to cast and retrieve, I suggest that all you do is attempt to keep your line tight by occasionally taking in the slack with a deathly-slow strip, or a creeping hand-twist retrieve. An effective method for fishing midge larvae is to suspend them under an indicator. The nearer the bottom the better. If the bottom is friendly, having the fly sit on the substrate can be good. 
As the angler moves the fly occasionally, a little “dusty puff” disturbance of the mud or detritus may grab a fish’s attention. An electronic depth finder is indispensable to know precisely the length of leader and tippet needed under the indicator to put the flies near or on the bottom. If you do not have a depth finder, or don’t want to hassle with it today, secure your forceps to your line or hook. Drop the system into the water, allowing the forceps to sink to the bottom. Once the system hits the bottom, make note of how much leader or leader & line is required to hit bottom. I fish two larvae at a time, different colors. Brown, olive, amber, and red are my favorites. I separate the flies by about eighteen inches on my leader. If the fish should show a color preference, then I will tie on two of the same. It is amazing how subtle the fish strike can be. Even when the water’s surface is still, a quiet interception of the fly may move the indicator so slightly that you may ignore it or think it is your imagination. I cannot possibly recount how many times I have lifted my rod tip in response to an “imagined” strike to find a fish on the line. I try not to over react on the strike, quickly lifting my rod tip a short distance to come tight. No more. If it was truly my imagination, or I missed the strike, I merely drop the tip again, and let the flies settle toward the bottom to await another opportunity. To visually enhance my larvae patterns, I sometimes incorporate a glass bead of matching color at the head of the fly. I have a few experimental patterns where the entire pattern is nothing but glass beads. However, to be able to slip glass beads onto a hook, their diameters must be much greater than that of the ultra thin midge larvae. When I catch fish on such concoctions, I cannot help but wonder if fish really took the fly as a genuine larva, or took it because the fly merely looked interesting and edible. 
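As a side note before leaving larvae patterns: the commercial-tying arithmetic that opened this section (forty-five seconds per fly, wholesaling at $10 a dozen) can be checked in a few lines. The assumption of continuous, uninterrupted tying is mine, not the author's; at an ideal pace the gross rate actually comes out a bit above the quoted $50 an hour, which presumably allows for setup and material handling.

```python
# Sketch of the tying-economics arithmetic: 45 seconds per fly at
# $10 per dozen. Assumes continuous tying with no setup time.

def hourly_rate(seconds_per_fly: float, price_per_dozen: float) -> float:
    """Gross hourly earnings at a steady tying pace."""
    flies_per_hour = 3600 / seconds_per_fly   # 80 flies at 45 s each
    dozens_per_hour = flies_per_hour / 12     # roughly 6.7 dozen
    return dozens_per_hour * price_per_dozen

rate = hourly_rate(45, 10.0)
print(f"${rate:.2f} per hour")                # about $66.67 at full speed
```

Even with generous allowances for trips to the materials drawer, a fast tier clears the chapter's $50-an-hour figure on these simple flies.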
Fishing chironomid pupae under an indicator can make you feel guilty. Some fly anglers hesitate to do it, preferring instead to cast and slowly retrieve the fly. It is an interesting study in human behavior when I encounter those who turn up their noses at fishing a “bobber”. When their catch is minimal, those who disdain indicators can quickly lose their religion when anglers around them continue to sound off, “Yeehaw! I’ve got another one.” Pride will be damned. There is actually a simple solution, an alternative to standard strike indicators. In lieu of cork, foam, plastic, and yarn, use a dry fly. I often fish a Callibaetis dry fly as an indicator in late spring and summer. This is particularly effective when both mayflies and midges are available to the fish. I am sure the dry fly may be mistaken for either insect, even though there may be a size discrepancy. Fortunately for us, hungry fish are not always discriminating. Even a fly that has no living counterpart can catch fish. If you tie a huge fluffy dry fly onto the upper portion of your tippet so it floats high and is easy to see at great distance, a curious fish may very well impale itself on it. There are some days when fish will attack my indicator, trying to eat it. Go figure. I learned a long time ago that when this happens, attach a hook or fly to the indicator. On these special occasions, I typically land several fish on my indicator hook. Good chironomid pupae fishing can be killer even if no hatch of the adults is occurring. Prior to their final ascent to the surface, midge pupae may emerge from the tubes or cases in the substrate where they transformed from larva to pupal form. They may be suspended in the first couple of feet off the bottom for a day or two, exposed and vulnerable to cruising fish. A fly suspended in this zone can be stupidly effective. The fishing can be so easy, you may go home early to do your laundry and clean the bathroom. 
There are certainly those fishing days when lake fish do not go crazy for chironomids on your initial attempts. Fishing pupal flies near the bottom may only be the starting point for discovery. Because of nearby bottom contours, water temperatures at various depths, light intensity, water clarity, or the presence of flying predators, fish may choose a specific cruising and feeding level. If you get no action near the bottom, begin to experiment with suspending the flies at incrementally lesser depths. I gradually suspend my flies in one-foot upward adjustments, moving them closer and closer to the surface. Of course, it could be the fly size and color that need adjustments. Carrying multiple rods onboard allows me to quickly switch up. Time is not only money; time is fish. Even after experimenting with depth, presentation, size, and color, trout may ignore your chironomids. They may go off the bite for a time, or they may turn their focus to other food items. If the area I am fishing is confined, like a small bay, and shallow --- ten feet or less --- it is possible to spook the fish out of the area for a while. The flies, depth, and method may be correct, but the location to fish them must be changed. If good fishing action tapers off, my first tendency is to move to other locations before making adjustments to my gear or method. On waters I am familiar with, I make the rounds. I go from location A to B to C, and so on. Eventually I return to my starting point, to repeat my travels throughout the day. Fishing with companions can help you track the nomadic wanderings of the fish. With multiple companions, there are times I can accurately follow the movements of fish moving along a shoreline, as fishing pressure moves from one area to another. The movements of the fish can be easily observed if they are feeding on pupae in or near the surface. Their rise activity will move from one location to another in response to fishing activities that disturb them. 
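The one-foot upward adjustments described above amount to a simple checklist of depths to work through. A minimal sketch in Python of that routine (the function, parameter names, and defaults are my own illustrations, not anything prescribed in the text):

```python
def depth_schedule(bottom_ft, step_ft=1.0, shallowest_ft=1.0):
    """List the suspension depths to try under the indicator,
    starting near the bottom and rising in one-foot increments
    toward the surface. All names and defaults are illustrative."""
    depths = []
    depth = float(bottom_ft)
    while depth >= shallowest_ft:
        depths.append(round(depth, 1))
        depth -= step_ft
    return depths

# In ten feet of water: try 10 ft first, then 9, 8, ... down to 1.
```

The same idea applies whether you adjust a fixed indicator or shorten a countdown; only the step size changes.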
Sometimes, chironomid pupae must be fished in water deeper than ten feet. With a fixed indicator, this presents an obvious difficulty. If an angler is using a standard nine-foot fly rod, a fish that has taken a fly fifteen feet below an indicator that won’t slide on the line is hard to land. Even when the angler reaches as high as possible with arms fully extended, the fish may still be under the surface, out of reach of a hand or the net. Imagine if you are fishing at a depth of twenty-five feet! There are a couple of solutions. One possibility is a term I first heard coined in Canada: fishing naked. What is meant by this is fishing chironomid pupae on a l-o-n-g leader without the aid of an indicator. Fished with a floating fly line, a leader of up to forty feet (!) is cast into depths of twenty-five to thirty feet. After determining the depth, use a leader that is approximately 25% longer than the depth. As an example, if the water is twenty feet deep, use a leader that measures twenty-five feet. For leaders of twenty feet or longer, I start with a nine-foot tapered leader, with a tippet diameter of 2X (0.009”). Then, using a Double Surgeon’s knot, I add successive sections of 2X, 3X, and 5X, all at least four feet long, except the last, which is three feet. For leaders over twenty feet, lengthen the 2X and 3X sections, keeping the last (5X) at three feet. These are general guidelines, not rules. I always find it useful to make a paper sketch of my potential leader lengths. Casting such a long leader can put immediate fear into the heart of any fly caster. Getting the system airborne and executing a cast where the line and leader land fully extended on the water can be a daunting task. It all starts with leader construction. Secondly, a certain amount of fly line must be left beyond the rod tip as the cast begins. I suggest a length of fly line approximately equal to the length of the leader. 
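The leader arithmetic above --- total length roughly 25% longer than the water depth; a nine-foot taper to 2X; added 2X and 3X sections; a 5X tip held at three feet --- can be sketched as a quick calculator. This is only my reading of those general guidelines: splitting the added 2X and 3X sections evenly is an assumption of mine, and all names are illustrative.

```python
def leader_plan(depth_ft):
    """Rough 'fishing naked' leader plan: total length ~25% longer
    than the water depth; a 9 ft tapered butt ending in 2X; added
    2X and 3X sections (split evenly here --- an assumption); and a
    fixed 3 ft 5X tip. Section names follow the text; splits do not."""
    total = round(depth_ft * 1.25, 1)
    butt, tip = 9.0, 3.0
    remainder = max(total - butt - tip, 0.0)  # shared by the 2X and 3X sections
    return {"total": total, "taper_to_2X": butt,
            "2X": round(remainder / 2, 2), "3X": round(remainder / 2, 2),
            "5X_tip": tip}

# Twenty feet of water gives a twenty-five foot leader, as in the text.
```

Note that sixteen feet of water yields a twenty-foot leader with four-foot 2X and 3X sections, matching the minimum section lengths given above.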
Finally, the sunken fly must be coaxed to the surface in a specific manner just prior to the beginning of the cast. This is most easily accomplished with a series of roll casts that extend the line and raise the leader and fly to the surface. After two or three roll casts to lift and extend the line / leader system, send the line overhead and to your rear into an initial back cast. With one or two false casts, send the fly to its destination. A slightly weighted fly will make for a quicker descent. Experiment with a countdown before you commence the retrieve with a very slow strip or hand-twist retrieve. The descent to the bottom can take a couple of minutes. If your fly gets fouled with vegetation, shorten the countdown. If you know you are fishing very near the bottom, but not getting strikes, shorten the countdown systematically in ten-count increments. The fish may prefer intercepting chironomid pupae at a level higher up in the water column. As always, utilize the stomach pump on the first worthy fish you catch to make certain your fly closely approximates the naturals in size and color. If the water is extremely clear and the sun is high, it might be necessary to decrease the tippet size. As with anything else worth doing, preparation and practice are the foundations. For those who want to be more proactively involved in their chironomid fishing, not relying on the much more passive endeavor of staring at an indicator, “fishing naked” is the solution. Another nifty piece of chironomid fishing equipment I saw for the first time in Canada is a slip strike indicator. This is the answer to fishing long leaders and indicators. The striking movement of the rod tip, or the subsequent pull on the line by a fleeing fish, disengages the indicator from its fixed position on the leader. This, then, allows the indicator to slide freely up and down the leader while the fish is played. 
Once you see how it is set up on the leader, the mechanics of the slip indicator are ingeniously simple. The indicator has two parts: a body of cork or buoyant foam with a hole bored through its center, and a hollow peg that nestles into the hole. Before tying on a fly, slip the leader through the hollow peg, which is fitted into the hole in the indicator. Slide the pegged indicator to the desired location on the leader. Disengage the peg for a moment, pulling it up the leader a few inches towards the rod tip. Grab the leader immediately below the peg, and pull a small loop of line up to and alongside the peg. Slide the indicator up and onto the peg, wedging the small line loop between the indicator and peg. A tiny loop of line will be visible above the indicator. Experiment with how snugly the peg is pushed into the hole of the indicator. It must be tight enough that the indicator does not disengage while casting, but not so snug that a quick lift of the rod tip against resistance, or a fighting fish, cannot free the peg from the indicator, allowing it to slide freely. I have saved the easiest chironomid fishing method for last. For maximum sinkage, string up the fly line with the fastest descent rating. Locate your fishing craft directly over the deep water (fifteen feet or more) that you want to fish. No cast required. Having determined the exact depth of the water, drop your fly / leader / line system off the side of the boat and start stripping line from the reel. Let out enough leader and line that your point fly is suspended barely above the bottom, directly below you. Next, wait for a cruising fish to bite your fly. If nothing happens, slowly lift your rod tip a few feet. Then, slowly lower the tip again. Repeat as needed. Easy & cheesy, but this can be dirty effective. If you doze off, don’t drop your rod. A long time ago in a land far away (Idaho), my friends Jeff Hilden and Josh Cuperus and I were fishing Henry’s Lake. 
It was the first time for Josh and me; Jeff had fished it with a guide on a previous trip. By default, Jeff was now our guide. Fortunately, Jeff has an excellent job outside the world of fly fishing to pay the bills and provide for his family. After a fruitless morning of trying this and that, here and there, I pulled my float tube out of the water to wander the shoreline in search of hope. My plan was to watch for anglers who might offer me a clue about where the fish were and what fly they might eat. What I wanted to see more than anything was an angler with a bent rod. There was lots of chatter on the lake, but not the excited whooping associated with playing a fish. I had now wandered far away from Josh and Jeff, when I saw him. There was an elderly man fishing solo in a pram, a stone’s throw from the shore. He was anchored with his back to me. Perfect. I could watch him and not be watched. He eventually hooked and netted an excellent trout, then released it. In no obvious hurry, Pram Man lit a cigarette, settled himself and made a cast. By the color of the line and the orientation it quickly assumed, I was positive he was using a sinking one. Once the line settled on the water, he sat down, still with his back to me. For the next couple of minutes he did nothing except sit and puff, absorbed in his waiting and his cigarette. When he sensed the time was right, Pram Man made a couple of slow, short pulls on the fly line. By now, the line appeared to be almost straight down from his rod tip, almost below his boat. If he had chosen to, my man could have merely stripped line from his reel and let it sink straight down; it would have had exactly the same effect. He was obviously in deep water. Whereas we had been fishing all morning in water that was probably no more than four feet deep in most locations, this guy was parked over a hole. My bloodhound had sniffed out the spot I needed to know about. And to confirm it, he soon hooked another trout. 
Time for me to launch my tube for closer inspection . . . and some fishing. As stealthy and as nonchalant as an excited fisherman can be, I kicked my craft out of the shallows. Not wanting to be perceived immediately as an intruder, I did not head directly at Pram Man. I stayed well away from him, but always in position to watch him. Considerately, he was always fishing with his back to me. The fact he was always facing the same direction encouraged me to get a bit closer to his position. Approaching at a nonintrusive distance toward the rear of his boat, I made my first cast in his direction. I was fishing a sparsely dressed peacock nymph. I was prompted to use this fly by a story someone had told me about his fishing day at Henry’s Lake. Somewhere in the tale, a veteran angler had said the phrase “Thin is in!” making reference to his successful fly. Somehow, I had connected “Thin is in” to a fly called the Skinny Minnie, a pattern with a slender thread or peacock body. Thus, my modified Prince Nymph with the skinny body was the fly of choice, in lieu of a more standard chironomid pupa pattern. As I mirrored the cast, wait, and retrieve rhythm of Pram Man, hope was growing. The hole over which he and I were fishing had a circumference large enough that I could keep my distance. Since the angler was parked over the center of what must have been a large spring, I stayed on its edge, and continued to cast in his general direction. Just like in the movies, I was soon into my first Henry’s Lake cutthroat. With the aid of my stomach pump, I quickly extracted numerous chironomid pupae the fish had eaten recently. Though my fly was not a perfect match for what the trout were consuming, I stayed with the peacock imitation and caught more fish, just as Pram Man did. The most important thing to the fish --- besides this sweet location in deep water --- was the manner in which the fly was fished: low and slow. 
Strikes were most likely to occur as the fly was lifted vertically, straight up from the bottom. Eventually, the action slowed as the sun got higher and the day grew warmer. As I went in search of greener pastures, I watched Pram Man lift his anchor and head for shore, confirming that the bite was off for now. For a while, the fishing had been easy peasy. And, later that evening it was again. I enjoy tying simple flies. The simpler and faster the pattern, the more variations and experimental models I can tie. In a given fishing year, I tie more chironomid pupae than all other patterns combined. Fun colors such as red, purple, orange, blue, silver, gold, and kelly green find their way into my fly tying. I had a couple of clients in my retail days who swore that a purple midge pupa was the Holy Grail of stillwater fishing. The use of metal, plastic, and glass beads on some patterns can be very effective. Sometimes I wonder if the fish that bite flies with beads are biting the hook because they think they are eating an enticing mutant chironomid pupa, or because they are merely attracted to the bead that happens to be attached to the body of the imitation. Previously, I made mention of catching several trout one day when all that remained on the hook were some turns of thread and a glass bead. There are days when fly pattern is irrelevant. The fish are hungry and they just want something to eat. I remember to appreciate such days, especially after those days when fish are hard to fool, even when I am using my Best of the Best flies. The down side of having too many fish fall for my silly experimental flies is that I can lose a little respect for the fish. While it’s fun for a while to catch fish on anything and everything, it is the challenge of successfully wooing difficult fish that gives me most pleasure. Catching fish that others cannot is my personal ultimate achievement. I don’t always win, but “losing” only makes me more determined. 
And, fortunately, I will never have the Final Answer. This is as it should be for me. Always one more challenge, one more difficult fish to test me. Though I have caught a good number of fish on them, no, purple chironomids are not really the Holy Grail. It isn’t just fashion models who know it. Chironomid larvae and pupae are thin. Your flies should be, too. Almost anorexic. When you look at the full length of your larvae flies, and the abdominal portion of your pupal patterns, if you ask yourself, “Is this body too thick, too rotund?” the answer is probably “Yes!” I know some very good chironomid anglers who fashion the main body of the fly out of fly tying thread, or wrap the hook with a single strand of floss. Can’t get much thinner than this until you fish a bare hook. As I write this chapter, I am thinking of a new experimental pupa tied on a red anodized hook. I will secure some silver wire --- without using thread --- at the rear of the hook shank, and spiral it onto the bare hook. I will secure the wire near the small bead at the hook eye. So, the only items on the hook will be six or seven turns of fine silver wire, a tiny bead, and a few turns of thread to secure everything at the head. Then, of course, I must do the same thing with a black hook and a bronze hook. By using different hook finishes, a variety of hook sizes and multiples of each, a range of wire colors, and all sorts of beads --- glass, metal, plastic, lots of colors --- all of a sudden I will have a hundred new chironomid pupae flies to try! As midge pupae prepare for their ascent to the surface, they are assisted in their upward journeys by the generation and trapping of gas bubbles beneath the skin. This buoyancy aid complements their feeble swimming efforts as they slowly float to the surface, where the thorax will split and the adult will emerge. The shiny, silvery appearance of the gas bubbles is an obvious attribute of the pupae as hungry fish intercept them. 
It only seems logical that this silvery sheen should be incorporated into some of the pupal fly patterns in your fly box. Silver wire, silver and pearl tinsels, and clear glass, white and silver beads --- either separately, or in concert --- serve to enhance the gas bubble effect of attractive midge pupae patterns. I find that I tend to favor these flies, especially when the fish become more selective. And, of course, there are endless combinations of colors, finishes, and sizes that utilize these bubble-mimicking materials. My inclination at this very moment is to stop writing and start tying flies. I can’t wait! Besides being very slim, the abdomen of the midge pupa is very obviously segmented. To maximize the authenticity of the artificial imitation, ribbing material, usually wire, creates the segmented look. Red and silver are the two most favored colors. Sometimes I will use white thread. The head and thorax appear fused, and distinctly larger than the abdomen, roughly twice its diameter. The head / thorax unit comprises about one quarter of the insect’s total length, and is often darker in coloration than the abdomen. Most good pupae patterns suggest these features. Prominent white respiration filaments on the top of the head / thorax region are very obvious to fish that eat them. A little tuft of white Antron or polypropylene yarn or ostrich herl simulates this feature. Some patterns, like the Ice Cream Cone Chironomid, utilize a white bead at the eye of the hook to simulate the filaments. If my chironomid fishing efforts are sketchy, instead of fishing two midge patterns, I typically replace the point fly with a scud, Micro Leech, or damselfly nymph, additional members of my “A” Team. The point fly is easier and faster to switch than the dropper. Food preferences for stillwater fish can change hourly, it seems. Therefore, I experiment to discover what fish may want for the moment, the fly du jour. 
During this experimentation, the last fly to be replaced is the chironomid dropper. Because midges --- particularly the pupal stage --- are so important in a lake fish’s diet, this fly will always receive maximum playing time; it’s only fitting for the “A” Team captain. Copyright © 2003 Scarlet Ibis Fly Fishing Tours Inc
Despite increased awareness of suicide as a major public health problem, many health care professionals who have frequent contact with high-risk patients lack adequate training in specialized assessment techniques and treatment approaches.1 Primary care providers (PCPs) are positioned to lead important public health interventions to prevent youth suicide because their practice setting provides opportunities for early identification of and intervention for common mental health disorders among adolescents, as well as for counseling, guidance, and care coordination.2 However, a PCP's ability to provide appropriate care depends on his or her knowledge, comfort, and skills.3 Accordingly, the American Academy of Child and Adolescent Psychiatry states, “[P]rimary care physicians … should be trained to recognize risk factors for suicide and suicidal behavior and, when necessary, refer to a mental health clinician.”4(p28S) A goal of the U.S. government's National Strategy for Suicide Prevention involves increasing the proportion of residency programs that provide training in assessing and managing suicide risk and in identifying and promoting protective factors.1 Educating PCPs about the warning signs of adolescent suicide and equipping them with tools to identify and assess suicidal patients represent a promising approach to adolescent suicide prevention. In this article, we review the epidemiology of youth suicide, some risk and protective factors, and warning signs. We also present findings from prior research on physician education in this area, highlighting evidence of improved knowledge and skills among physicians following training. We conclude by offering recommendations for improving educational opportunities and suggestions for future research. 
Epidemiology of Youth Suicide Suicide is the third leading cause of death for Americans aged 15 to 24—after unintentional injuries/accidents and homicide—and accounts for more deaths among this age group than all natural causes combined.5 Suicide rates among this age group have decreased slightly, from 10.4 suicides per 100,000 individuals in 2004 to 9.7 per 100,000 in 2007. Still, in 2007, 4,225 Americans aged 5 to 24 died by suicide.5 Although adults older than age 65 have the highest suicide rates (14.3 per 100,000), suicide accounts for a much larger percentage of deaths among young people (12.2%) than among elderly adults (0.3%).6 The prevalence of nonlethal suicidal behavior heightens the public health significance of this problem: For every death by suicide, 100 to 200 adolescents attempted to take their own lives, compared with a rate of four attempts to one death among the elderly.6 In 2009, 13.8% of U.S. high school students seriously considered attempting suicide, 10.9% planned to attempt suicide, and 6.3% attempted suicide.7 Adolescent females contemplate and attempt suicide more often than males, but males are four times more likely to die by suicide than are females.8 Among youth, Native American and Hispanic females have the highest rates of suicide attempts, whereas Native American and white males have the highest rates of completed suicide.9 White youth have generally demonstrated higher suicide rates than their African American peers.10 However, a dramatic increase in the rate of suicide from 1981 to 1995 among African American male adolescents partially closed the gap in rates between African American and white males.9 From 1990 to 2004, females aged 10 to 24 and males aged 10 to 14 showed downward trends in firearm and poisoning suicides. By 2004, the most common method of suicide among these groups had become hanging/suffocation. 
Firearms remain the most common method among males aged 15 to 24.11 Risk factors, protective factors, and warning signs Adolescent suicide represents a complex behavior associated with myriad interrelated biopsychosocial factors. However, about 90% of adolescents who die by suicide have a psychiatric disorder, among which depressive disorders remain most prevalent.12 Other risk factors include a previous suicide attempt, interpersonal losses, legal or disciplinary problems, family history of suicidal behavior or psychopathology, problematic parent–child relationships, physical and sexual abuse, exposure to suicidal behavior of others (peers or via media), difficulties in school, homosexual or bisexual orientation, and access to lethal means (particularly firearms).13 Some protective factors that promote resilience and reduce the potential for suicide include coping, problem-solving, and conflict-resolution skills; school and parent/family connectedness; academic achievement; help-seeking behavior; good peer relationships; emotional well-being; positive self-worth; and social integration.14,15 Many adolescents do not readily verbalize their feelings; instead, their distress may manifest in vague somatic symptoms or atypical behavior. Certain warning signs involve expressed suicidal thoughts or threats, which a young person may communicate directly (e.g., “I'm going to kill myself”) or indirectly (e.g., “It's no use,” “I won't be a burden anymore”). 
Some other warning signs include changes in eating or sleeping habits; withdrawing from friends, family, or regular activities; violent actions, rebellious behavior, or running away; drug and alcohol use; persistent boredom, difficulty concentrating, or a decline in the quality of schoolwork; and frequent complaints about physical symptoms related to emotions such as stomachaches, headaches, or fatigue (see List 1 for additional signs).16,17 Implications for provider education Our brief review of the literature regarding factors associated with adolescent suicide and suicidal behavior identified some fundamental content areas that educational programs for PCPs should target. Changes in suicide methods demonstrate the potential mutability of youth suicidal behavior11 and suggest that PCPs need continuing education in adolescent suicide prevention. Knowing demographics associated with suicide among youth is also important: Research suggests that, compared to females, practitioners less often screen for emotional distress among adolescent males or talk with males about procuring help if they feel sad or depressed.18 Further, differences among ethnic groups extend beyond rates of suicidal behavior and may include the context in which suicidality occurs as well as clinical indicators of suicide risk.9,19 Associations between depressed mood and suicidal behavior among adolescents underscore the importance of training PCPs to identify and manage depression in this population. PCPs must become well versed in the warning signs of suicidality to ensure that they can recognize potentially high-risk patients who present with nonpsychological complaints. They need improved education initiatives across all levels of training and practice to gain the knowledge and skills they need to effectively elicit risk factors as well as reinforce and foster protective factors during clinical encounters. 
PCPs' Role in and Preparation for Preventing Adolescent Suicide Health care providers whose practice populations include young patients encounter distressed and suicidal adolescents and, thus, can play a major role in suicide prevention.14 One study found that 62% of persons aged 35 years and younger who died by suicide contacted a PCP in the year before their death, and 23% contacted a PCP in the month before their death.20 To our knowledge, no current data exist regarding the percentage of adolescents who visited their PCP before attempting or completing suicide. However, research shows that 20% to 41% of adolescents who present to PCPs have high levels of emotional distress and/or suicidal ideation, yet PCPs identify less than half (24%–45%) of these young people.18,21,22 This failure may reflect discomfort discussing sensitive issues,23 a focus on somatic complaints,24 or incomplete knowledge of relevant warning signs, risk factors, and demographics.25,26 We acknowledge that predicting and preventing youth suicide represent extremely difficult challenges for PCPs. Distressed adolescents often present with medical problems, not psychological symptoms,21,22 and they do not readily disclose their health-risk behaviors or psychosocial problems unless prompted.27 Yet adolescents and their parents want to discuss psychosocial problems with their PCPs,26,28 and adolescents will acknowledge suicidal thoughts when asked directly.3 Therefore, PCPs should consider all appointments with adolescents as opportunities to explore psychosocial issues beyond the presenting complaints.22 PCPs must become willing and able to inquire about adolescents' mental status, vigilantly screen for suicide risk factors, and proactively identify warning signs during routine medical and well visits.21,29,30 Unfortunately, PCPs may not receive adequate training to screen for suicide risk or mental health disorders. 
The Accreditation Council for Graduate Medical Education (ACGME) requires that pediatric residency programs include a one-month block rotation in developmental–behavioral pediatrics,31 which must involve training on internalizing behaviors such as suicidal behavior.32 However, a national survey of directors of pediatric residency training programs found that, on average, 64% did not consider instruction on suicide or depression in their program adequate or thorough.33 Boris and Fritz34 found that many pediatric residents receive clinical experience with suicidal patients, typically in emergency rooms, yet they do not feel competent to evaluate suicidal patients or assess a patient's state of mind. A recent survey of senior pediatric residents in a top-ranked training program supports this finding.35 Therefore, as expected, many PCPs working with adolescents report they need additional training in mental health care.18,26,34,36,37 Comprehensive training initiatives that address general competencies in mental health38 and specific competencies in suicide risk assessment and management should enhance PCPs' proficiency in identifying, evaluating, and assisting suicidal adolescents.39 After a systematic review of the literature, Mann et al40 concluded that physician education represents one of the most promising suicide prevention strategies. Research with adults demonstrated declines in suicide rates after PCPs participated in education programs targeting depression recognition and treatment. Kaplan et al41 found that residency training in assessing suicide risk was an especially important factor associated with PCPs' confidence in evaluating and managing suicidality. Similarly, Frankenfield et al26 found that physicians who felt sufficiently trained and knew how to screen for suicide risk factors among adolescents were more than three times as likely as others to screen for these risk factors. 
Screening practices in this study constituted a physician's clinical assessment, a physician's review of a questionnaire completed by a patient or parent, or both methods. Pfaff et al22 found dramatic increases in physicians' detection rates of psychological distress and suicidal ideation in young patients after one day of training. They identified the increased rates through a physician-completed summary sheet describing patients' psychological states (i.e., presence of psychological distress and suicidal ideation, and estimate of suicide risk). Even brief interventions may prove effective. A recent study found that a 90-minute training on youth suicide in primary care clinics resulted in a 219% increase in participating PCPs' rates of inquiry about suicide risk and a 392% increase in their case detection across three sites.42 Further, the rates of case detection remained elevated six months after the intervention. Trainers taught PCPs to screen for suicide risk by including two core questions in their standard psychosocial interview: “Have you ever felt that life is not worth living?” and “Have you ever felt like you wanted to kill yourself?” Patient endorsement of either question prompted PCPs to ask six additional questions regarding suicide planning, preparation, and attempts. Opportunities for Improvement We believe many opportunities exist to provide continuous and diverse learning experiences throughout medical school and residency, as well as in the practice setting. PCPs at all stages of their careers deserve opportunities to obtain requisite knowledge and hone their skills. Further, collaborative practice initiatives and other organizational changes that facilitate learning and support PCPs should be considered. 
Opportunities during medical school and residency The lack of adequate training in child and adolescent psychiatry during medical school demonstrates the devaluation of the field and minimization of mental health issues in medical education.43 U.S. medical students are only guaranteed exposure to psychiatry during the third-year clerkship.24 Yet, their psychiatry rotation may not include experience with adolescents, and some programs may not offer electives in child psychiatry. Medical schools should incorporate mental health education that takes a developmental approach and addresses mental health issues for people of all ages. Although the ACGME requires training on adolescent suicide during pediatric residency, the time and exact content that residency programs should devote to identifying, assessing, and managing suicidality remain unclear. Residency programs need explicit guidelines to help them become more deliberate in their approach to training on this issue. Pediatric residents would benefit from an authoritative syllabus, standard curricula, and case material comprising exercises in interviewing, accessing resources, demonstrating empathy, and managing distressed youth.33 Also, residents should have opportunities to discuss their feelings of anxiety about engaging suicidal patients.33 Programs should implement curricula within a structured program offered in consecutive years, possibly within rotations in developmental–behavioral pediatrics, adolescent medicine, and/or ambulatory medicine. Unfortunately, competing agendas and time constraints may reduce the likelihood that programs will develop or implement a comprehensive curriculum. Therefore, at a minimum, pediatric residency programs should provide trainees opportunities to participate in seminars and/or modules on identifying and assessing suicide risk during an adolescent medicine rotation or before their work in continuity clinics. 
For example, instructors could expand the Yale Primary Care Pediatrics Curriculum44 or related curricula to include a chapter on adolescent suicidality. Residents also would benefit from a collaborative training model in which mental health specialists coprecept in residency continuity clinics, partner with residents to conduct inpatient rounds, and codevelop educational programs with pediatric faculty.38

Ongoing training and collaborative practice

As noted above, PCPs require dynamic, ongoing education and training regarding adolescent suicide.25 In addition to workshops to enhance their knowledge, self-efficacy, and skills, PCPs would benefit from collaborating with mental health specialists during office rounds, comprehensive trainings focused on roles in collaborative practice, and quality improvement programs, as well as in assessing and managing youth in their mutual care.38 Further, PCPs report wanting self-instructional materials to increase their knowledge about pediatric mental health issues.37 Therefore, developing and implementing computerized tutorials45 or tool kits17 that address adolescent suicidality for PCPs may help close the gaps in their knowledge. Educators should identify and execute approaches that work within their organizations.

Incorporating interactive techniques

To reinforce learning, PCP training programs should incorporate interactive techniques such as role-playing with feedback, multiple sessions in a series, and tools that help PCPs implement knowledge and skills in the practice setting.46,47 Role-playing allows PCPs to practice identifying risk factors and warning signs, assessing suicide risk, strengthening protective factors, responding to reports of suicidality and self-injury, demonstrating empathy, facilitating access to mental health services, and using cognitive behavioral therapy techniques.
Fallucco et al35 found that pediatric residents who participated in suicide risk assessment training that incorporated a lecture and practice with standardized patients showed greater objective knowledge of risk factors and confidence in screening for risk factors and assessing suicidal adolescents compared with residents trained via other methods. Role-playing exercises could help PCPs learn to incorporate relevant tools from sources such as the Suicide Prevention Toolkit for Rural Primary Care17 into their interactions with adolescents in the practice setting (e.g., suicide assessment pocket guide, safety planning guide, crisis support plan, suicidality treatment and tracking log, and patient/parent education materials). Other studies of programs for PCPs support the value of role-playing to prepare for addressing diverse adolescent health issues.48–51 Successful strategies to reduce adolescent suicidal behavior and suicide rates will likely involve multifaceted interventions that integrate physician education with other, organizational approaches.52 According to a national survey of pediatricians, organizational barriers to identifying and managing psychosocial issues among adolescents include lack of time to treat mental health problems, long waiting periods to see mental health providers, and lack of providers to whom to refer patients with mental health problems.53 Other issues involve the social stigma associated with mental illness, poor public education around mental health issues, cultural and language barriers, and financial barriers such as inadequate reimbursement for mental health services provided by PCPs.2,26,54 Systems and resources must exist that enable PCPs to remain confident that they can identify and respond to young people found to have thoughts of ending their lives.54 Organizational strategies that could supplement physician education initiatives include customized adolescent screening and provider charting forms,55 30-minute adolescent 
well visits,56 access to a health educator55 or health education materials,56 nurse case management,52 improved integration between primary and secondary care,52 and continuous monitoring and improvement measures.57 In addition, colocating mental health specialists in primary care settings may encourage collaboration in a variety of ways and increase the likelihood of consultation and referral.58 Gardner et al59 described an effective approach to screening and triaging potentially suicidal adolescents by capitalizing on colocated services and a coordinated team that included psychiatric social workers. Similarly, Asarnow et al60 showed the benefits of psychotherapists serving as mental health care managers supporting PCPs in improving access to depression treatment for adolescents through primary care. Combining strategies and implementing a team approach will likely produce synergistic effects and help overcome barriers to caring for suicidal adolescents in primary care settings.

Future Directions for Research and Education

High rates of emotional distress and suicidal ideation among many adolescents presenting to PCPs justify more robust training designed to empower physicians to identify these issues in the primary care setting.22 PCPs should have opportunities to become confident and competent in addressing adolescent suicidality during medical school and residency as well as through continuing education programs. The literature provides support for the effectiveness of physician education to improve identification and assessment skills, thereby helping prevent adolescent suicide. However, researchers rarely describe their training programs or assess components of educational sessions in their published articles.
Further, limited evaluation data exist regarding the efficacy of established adolescent suicide prevention training programs for PCPs, such as the American Association of Suicidology's Recognizing and Responding to Suicide Risk in Primary Care.61 To advance the science in this area and assist educators, practitioners and researchers must rigorously evaluate their training programs, detail program components in published works, elucidate the most effective teaching methods and strategies, and ensure that evidence-based continuing medical education and other training programs become widely available at minimal cost. Researchers also should conduct surveillance studies that capture the frequency and timing of adolescents' visits to PCPs before they attempt or complete suicide. In addition, this area of research needs sound longitudinal and controlled studies of physician education interventions that examine adolescent suicide attempts and rates as outcomes and identify the specific knowledge and skills required to affect suicidal behavior. The relatively low rate of adolescent suicide and many methodological constraints make such research challenging. However, educators, clinicians, and researchers must collaborate to ensure that research supports and propels this potentially lifesaving agenda. Educators developing courses for medical students, residents, or PCPs in practice should include content on demographics, risk and protective factors, and warning signs of adolescent suicide. Courses should incorporate interactive techniques such as role-playing, provide opportunities for participants to discuss their feelings about engaging suicidal patients, and offer guidance on accessing resources and making referrals to specialists. Finally, individuals who develop educational interventions should evaluate their programs and share their findings. 
Until clearer guidelines, an authoritative syllabus, and comprehensive educational materials become available, a starting point may involve adapting the Suicide Prevention Toolkit for Rural Primary Care17 for PCPs working with adolescents. A tailored tool kit may provide a common information base from which educators could develop and expand medical education curricula and continuing education programs. Often, suicide prevention becomes “a matter of a caring person with the right knowledge being available at the right place at the right time.”62 PCPs are known as caring individuals, and they are often in the right place at the right time. Therefore, educators must ensure that physicians possess the knowledge, skills, and supports to help prevent many tragic deaths.

This work, done during Dr. Taliaferro's fellowship training at the University of Minnesota, was supported in part through funds from the Healthy Youth Development • Prevention Research Center, University of Minnesota (Cooperative Agreement No. U48-DP001939, Centers for Disease Control and Prevention). The findings and conclusions in this work are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

References

1. U.S. Department of Health and Human Services. National Strategy for Suicide Prevention: Goals and Objectives for Action. Rockville, Md: Public Health Service; 2001.
2. American Academy of Child and Adolescent Psychiatry. Improving mental health services in primary care: Reducing administrative and financial barriers to access and collaboration. Pediatrics. 2009;123:1248–1251.
3. Shain B; Committee on Adolescence. Suicide and suicide attempts in adolescents. Pediatrics. 2007;120:669–676.
4. Shaffer D, Pfeffer C; Work Group on Quality Issues. Practice parameter for the assessment and treatment of children and adolescents with suicidal behavior. J Am Acad Child Adolesc Psychiatry. 2001;40(7 suppl):24S–51S.
5. Xu J, Kochanek K, Tejada-Vera B.
Deaths: Preliminary Data for 2007. National Vital Statistics Report. Vol 58, no. 1. Hyattsville, Md: National Center for Health Statistics; 2009.
9. Langhinrichsen-Rohling J, Friend J, Powell A. Adolescent suicide, gender, and culture: A rate and risk factor analysis. Aggress Violent Behav. 2009;14:402–414.
10. Cash S, Bridge J. Epidemiology of youth suicide and suicidal behavior. Curr Opin Pediatr. 2009;21:613–619.
11. Centers for Disease Control and Prevention (CDC). Suicide trends among youths and young adults aged 10–24 years—United States, 1990–2004. MMWR Morb Mortal Wkly Rep. 2007;56:905–908. http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5635a2.htm. Accessed November 30, 2010.
12. Gould MS, Greenberg T, Velting DM, Shaffer D. Youth suicide: A review. Prev Res. September 2006;13:3–7.
13. Gould M, Kramer R. Youth suicide prevention. Suicide Life Threat Behav. 2001;31:6–31.
14. Borowsky I. The role of the pediatrician in preventing suicidal behavior. Minerva Pediatr. 2002;54:41–52.
15. Borowsky I, Ireland M, Resnick M. Adolescent suicide attempts: Risks and protectors. Pediatrics. 2001;107:485–493.
17. Western Interstate Commission on Higher Education and Suicide Prevention Resource Center. Suicide Prevention Toolkit for Rural Primary Care: A Primer for Primary Care Providers. Boulder, Colo: Western Interstate Commission on Higher Education; 2009.
18. Ozer E, Zahnd E, Adams S, et al. Are adolescents being screened for emotional distress in primary care? J Adolesc Health. 2009;44:520–527.
19. Goldson D, Molock S, Whitbeck L, Murakami J, Zayas L, Nagayama Hall G. Cultural considerations in adolescent suicide prevention and psychosocial treatment. Am Psychol. 2008;63:14–31.
20. Luoma J, Martin C, Pearson J. Contact with mental health and primary care providers before suicide: A review of the evidence. Am J Psychiatry. 2002;159:909–916.
21. McKelvey R, Davies L, Pfaff J, Acres J, Edwards S.
Psychological distress and suicidal ideation among 15–24-year-olds presenting to general practice: A pilot study. Aust N Z J Psychiatry. 1998;32:344–348.
22. Pfaff J, Acres J, McKelvey R. Training general practitioners to recognise and respond to psychological distress and suicidal ideation in young people. Med J Aust. 2001;174:222–226.
23. Stovall J, Domino F. Approaching the suicidal patient. Am Fam Physician. 2003;68:1814–1818.
24. Lake C. How academic psychiatry can better prepare students for their future patients. Part I: The failure to recognize depression and risk for suicide in primary care—Problem identification, responsibility, and solutions. Behav Med. 2008;34:95–100.
25. Waldvogel J, Rueter M, Oberg C. Adolescent suicide: Risk factors and prevention strategies. Curr Probl Pediatr Adolesc Health Care. 2008;28:110–125.
26. Frankenfield D, Keyl P, Gielen A, Wissow L, Werthamer L, Baker S. Adolescent patients—Healthy or hurting: Missed opportunities to screen for suicide risk in the primary care setting. Arch Pediatr Adolesc Med. 2000;154:162–168.
27. Kramer T, Garralda M. Psychiatric disorders in adolescents in primary care. Br J Psychiatry. 1998;173:303–309.
28. Cheung A, Dewa C, Levitt A, Zuckerbrot R. Pediatric depressive disorders: Management and priorities in primary care. Curr Opin Pediatr. 2008;20:551–559.
29. Horowitz L, Ballard E, Pao M. Suicide screening in schools, primary care and emergency departments. Curr Opin Pediatr. 2009;21:620–627.
30. Zametkin A, Alter M, Yemini T. Suicide in teenagers: Assessment, management, and prevention. JAMA. 2001;286:3120–3125.
33. Sudak D, Roy A, Sudak H, Lipschitz A, Maltsberger J, Hendin H. Deficiencies in suicide training in primary care specialties: A survey of training directors. Acad Psychiatry. 2007;31:345–349.
34. Boris N, Fritz G. Pediatric residents' experiences with suicidal patients: Implications for training. Acad Psychiatry. 1998;22:21–28.
35. Fallucco E, Hanson M, Glowinski A.
Teaching pediatric residents to assess adolescent suicide risk with a standardized patient module. Pediatrics. 2010;125:953–959.
36. Freed G, Dunham K, Switalski K, Jones M, McGuinness G; Research Advisory Committee of the American Board of Pediatrics. Recently trained general pediatricians: Perspectives on residency training and scope of practice. Pediatrics. 2009;123(suppl 1):S38–S43.
37. Steele M, Fisman S, Dickie G, Stretch N, Rourke J, Grindrod A. Assessing the need for and interest in a scholarship program in children's mental health for rural family physicians. Can J Rural Med. 2003;8:163–170.
38. Committee on Psychosocial Aspects of Child and Family Health and Task Force on Mental Health. Policy statement—The future of pediatrics: Mental health competencies for pediatric primary care. Pediatrics. 2009;124:410–421.
39. Rudd M, Cukrowicz K, Bryan C. Core competencies in suicide risk assessment and management: Implications for supervision. Train Educ Prof Psychol. 2008;2:219–228.
40. Mann J, Apter A, Bertolote J, et al. Suicide prevention strategies: A systematic review. JAMA. 2005;294:2064–2074.
41. Kaplan M, Adamek M, Martin J. Confidence of primary care physicians in assessing the suicidality of geriatric patients. Int J Geriatr Psychiatry. 2001;16:728–734.
42. Wintersteen M. Standardized screening for suicidal adolescents in primary care. Pediatrics. 2010;125:938–944.
43. Beresin E. Child and adolescent psychiatry residency training: Current issues and controversies. J Am Acad Child Adolesc Psychiatry. 1997;36:1339–1348.
45. Quinnet P, Baker A. Web-based suicide prevention education: Innovations in research, training, and practice. In: Sher L, Vilans A, eds. Internet and Suicide. Hauppauge, NY: Nova Science Publishers, Inc.; 2009.
46. Davis D, O'Brien M, Freemantle N, Wolf F, Mazmanian P, Taylor-Vaisey A.
Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867–874.
47. Satterlee W, Eggers R, Grimes D. Effective medical education: Insights from the Cochrane Library. Obstet Gynecol Surv. 2008;63:329–333.
48. Kokotailo P, Langhough R, Neary E, Matson S, Fleming M. Improving pediatric residents' alcohol and other drug use clinical skills: Use of an experiential curriculum. Pediatrics. 1995;96:99–104.
49. Lustig J, Ozer E, Adams S, et al. Improving the delivery of adolescent clinical preventive services through skills-based training. Pediatrics. 2001;107:1100–1107.
50. Ozer E, Adams S, Lustig J, et al. Increasing the screening and counseling of adolescents for risky health behaviors: A primary care intervention. Pediatrics. 2005;115:960–968.
51. Sanci L, Coffey C, Veit F, et al. Evaluation of the effectiveness of an educational intervention for general practitioners in adolescent health care: Randomised controlled trial. BMJ. 2000;320:224–230.
52. Gilbody S, Whitty P, Grimshaw J, Thomas R. Educational and organizational interventions to improve the management of depression in primary care: A systematic review. JAMA. 2003;289:3145–3151.
53. Horwitz S, Kelleher K, Stein R, et al. Barriers to the identification and management of psychosocial issues in children and maternal depression. Pediatrics. 2007;119:e208–e218.
54. Bajaj P, Borreani E, Ghosh P, Methuen C, Patel M, Crawford M. Screening for suicidal thoughts in primary care: The views of patients and general practitioners. Ment Health Fam Med. 2008;5:229–235.
55. Ozer E, Adams S, Lustig J, et al. Can it be done? Implementing adolescent clinical preventive services. Health Serv Res. 2001;36:150–165.
56. Klein J, Allan M, Elster A, et al. Improving adolescent preventive care in community health centers. Pediatrics. 2001;107:318–327.
57. Shafer M, Tebb K, Pantell R, et al.
Effect of a clinical practice improvement intervention on chlamydial screening among adolescent girls. JAMA. 2002;288:2846–2852.
58. Foy J, Kelleher K, Laraque D. Enhancing pediatric mental health care: Strategies for preparing a primary care practice. Pediatrics. 2010;125(suppl 3):S87–S108.
59. Gardner W, Klima J, Chisolm D, et al. Screening, triage, and referral of patients who report suicidal thought during a primary care visit. Pediatrics. 2010;125:945–952.
60. Asarnow J, Jaycox J, Duan N, et al. Effectiveness of quality improvement intervention for adolescent depression in primary care clinics: A randomized controlled trial. JAMA. 2005;293:311–319.
METHODS FOR ASSESSING BLOOD PRESSURE VARIABILITY

The different methods available for office and out-of-office BP measurement have been used to provide information on different aspects of BPV. Evaluation of BPV using each of these methods has been shown to independently predict cardiovascular risk [1–3,11–17]. However, each BPV component seems to reflect different mechanisms, is likely to provide different information on cardiovascular regulation and might have different clinical implications [12,18]. Very-short-term BPV can be assessed by continuous beat-to-beat intraarterial BP monitoring or noninvasive finger-cuff photoplethysmography. The use of the former is limited by its invasive nature and of the latter by questionable measurement accuracy. Short-term BPV is based on intermittent BP sampling at 15–30-min intervals over a routine 24-h period, obtained using noninvasive oscillometric ambulatory monitors. This appears to be an ideal method for routine BPV evaluation, providing information on the dispersion of BP values in different conditions of posture and activity. However, the independent prognostic value of ambulatory BPV has been questioned, and its application might not be well accepted by patients for repeated use in the long-term management of hypertension. In the Anglo-Scandinavian Cardiac Outcomes Trial (ASCOT), ambulatory BPV had less effect on vascular events than that assessed by office measurements. Moreover, analysis of the International Database on Ambulatory Blood Pressure in Relation to Cardiovascular Outcome (IDACO) showed that 24-h ambulatory BPV did not contribute much to risk stratification over and beyond the average ambulatory BP. It should be mentioned, however, that most of the published studies have been limited by infrequent ambulatory BP sampling, whereas it has been shown that measurements at 15-min intervals are required to provide an accurate assessment of BPV.
Mid-term day-by-day BPV based on self-home BP monitoring has also been shown to provide prognostic information independent of average home BP [1,12,13,17]. Home monitoring might be more appropriate for repeated BPV assessment in clinical practice, because it is widely available and well accepted by hypertensive patients for long-term use [12,13]. However, its application requires validated devices, measurement standardization and prevention of patients’ reporting bias through automated memory [12,13]. Long-term BPV can be assessed by repeated office BP measurement at successive visits. Recent outcome studies demonstrated the prognostic relevance of visit-to-visit BPV [2,3,16], which might be superior to that of short-term ambulatory BPV. In these studies, however, carefully standardized office measurements were obtained, which might not be feasible to replicate in routine clinical practice.

INDICES FOR BLOOD PRESSURE VARIABILITY QUANTIFICATION

Multiple mathematical approaches have been applied to quantify BPV evaluated by the different BP measurement methods. The SD is the classic index for BPV quantification, yet its major limitations are that it is proportional to the mean BP value and might have inferior prognostic ability compared with newer BPV indices. Several SD-based formulas have been introduced to eliminate the impact of mean BP (e.g. coefficient of variation) or of diurnal BP variation. More recent studies tested novel BPV indices, which appear to have superior prognostic ability to SD-based indices. Some indices have been developed to handle BP readings obtained using specific measurement methods (Table 2). While the SD merely reflects the BP excursions around the mean, the time rate of BP variation measures the speed of ambulatory BP fluctuations between successive readings and integrates the direction of these changes [5,20]. There is evidence that the time rate of BP variation is independently associated with target organ damage [5,20].
The average real variability (ARV) also overcomes deficiencies of the SD and adds unique prognostic value to short-term ambulatory BPV, being more sensitive to the order of individual BP measurements and less so to the low sampling frequency of ambulatory monitoring [6,15,21]. The variability independent of mean (VIM), introduced in the analysis of the prognostic ability of visit-to-visit BPV in the ASCOT and the Medical Research Council (MRC) Trial, transformed the SD into a new statistical tool uncorrelated with mean BP and with independent prognostic value. Asayama et al. explored the prognostic ability of ARV, VIM and the maximum–minimum BP difference of home measurements, and concluded that none of them incrementally predicts cardiovascular outcome over and beyond mean SBP. Finally, the residual BPV (RSD) reflects the erratic component of short-term BPV, and has been shown to be positively associated with left ventricular mass index and cardiovascular risk.

Interindividual versus intraindividual BPV has been an issue of debate in the analysis of outcome trials. In the ASCOT, interindividual BPV was lower with amlodipine than with atenolol, mainly because of lower intraindividual BPV, and in the MRC study, both interindividual and intraindividual BPV increased with atenolol compared with placebo or diuretic. Although there was a relatively good correlation between intraindividual and interindividual variability, they cannot be regarded as interchangeable indices but probably reflect different physiological or therapeutic phenomena [22,23]. Interindividual BPV during treatment may primarily depend on the individual's BP response to treatment rather than the variance of the BP response over time [22,23]. This view is supported by the European Lacidipine Study on Atherosclerosis (ELSA), which showed higher interindividual than intraindividual visit-to-visit office and ambulatory BPV, suggesting that only the latter can precisely reflect treatment-induced changes.
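To make the dispersion indices concrete, the following sketch (ours, not part of the editorial) computes the SD, coefficient of variation, ARV and time rate of BP variation for a single series of readings. The formulas follow their usual definitions in the BPV literature; all function names and data are illustrative.

```python
import numpy as np

def bpv_indices(bp, t_hours):
    """Common blood pressure variability indices for one recording.

    bp      : BP readings (mmHg), in measurement order
    t_hours : time of each reading (hours), same length as bp
    """
    bp = np.asarray(bp, dtype=float)
    t = np.asarray(t_hours, dtype=float)
    mean = bp.mean()
    sd = bp.std(ddof=1)                          # classic SD index
    cv = 100.0 * sd / mean                       # coefficient of variation (%)
    diffs = np.diff(bp)                          # successive reading-to-reading changes
    arv = np.mean(np.abs(diffs))                 # average real variability
    rate = np.mean(np.abs(diffs / np.diff(t)))   # time rate of BP variation (mmHg/h)
    return {"mean": mean, "SD": sd, "CV%": cv, "ARV": arv, "time_rate": rate}

# Two series containing the same values (hence identical SD) in a different
# reading order: ARV distinguishes them, the SD does not.
smooth = [120, 122, 124, 126, 128, 130]
erratic = [120, 130, 122, 128, 124, 126]
t = [0, 0.5, 1.0, 1.5, 2.0, 2.5]
print(bpv_indices(smooth, t)["ARV"])   # 2.0
print(bpv_indices(erratic, t)["ARV"])  # 6.0, despite the same SD
```

The example illustrates why ARV is described as sensitive to the order of individual measurements: reordering the same readings leaves the SD unchanged but can triple the ARV.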
Specific indices have been developed to quantify the effect of antihypertensive drugs on ambulatory BP. The trough-to-peak ratio (TPR), introduced to evaluate the duration of antihypertensive drug action, reflects the ambulatory BP decline in two narrow time intervals, is poorly reproducible and shows wide interpatient variability. In contrast, the smoothness index is based on the entire 24-h recording period and takes into account both the degree of BP reduction and its distribution across the 24 h [8,24]. Thus, it reflects the homogeneity of the 24-h drug effect, and is more reproducible and more closely related to treatment-induced regression of organ damage than the TPR [8,25]. In this issue of the journal, Parati et al. introduce the treatment-on-variability index (TOVI), calculated by dividing the treatment-induced hourly BP reduction by the degree of absolute BPV under the same treatment. Thus, the TOVI allows assessment of the impact of antihypertensive drug treatment on both mean BP and BPV over 24 h. TOVI was compared with the smoothness index in a retrospective analysis of 10 drug trials of placebo, monotherapy or two-drug combinations, with ambulatory BP available before and during treatment. Although both the smoothness index and TOVI reflect the effectiveness of the drug-induced hourly BP decline, the former assesses the 24-h variation of this decline, whereas the latter quantifies short-term BPV stripped of the impact of the nocturnal BP change. The two indices showed similar behavior among treatments, discriminating their differential impact on BP and BPV. Although both indices improved with all treatments compared with placebo, combination therapy resulted in higher values than monotherapies. A higher smoothness index or TOVI was attributed to a stronger BP-lowering effect and longer duration of drug action.
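As a rough numerical illustration (our own, not the authors'), the two treatment-effect indices can be sketched for synthetic hourly profiles. The smoothness index formula is the standard one (mean of the hourly treatment-induced reductions divided by their SD); for the TOVI we take the on-treatment SD of the hourly means as the "absolute BPV under the same treatment," which is our reading of the description above, not a verified formula from the TOVI paper.

```python
import numpy as np

def smoothness_index(baseline_hourly, treated_hourly):
    """Smoothness index: mean of the hourly treatment-induced BP reductions
    divided by their SD (homogeneity of the 24-h drug effect)."""
    delta = np.asarray(baseline_hourly, float) - np.asarray(treated_hourly, float)
    return delta.mean() / delta.std(ddof=1)

def tovi(baseline_hourly, treated_hourly):
    """Treatment-on-variability index, per the description in the text:
    mean hourly BP reduction divided by absolute BPV under treatment
    (here assumed to be the SD of the on-treatment hourly means)."""
    base = np.asarray(baseline_hourly, float)
    trt = np.asarray(treated_hourly, float)
    return (base - trt).mean() / trt.std(ddof=1)

# Synthetic 24-h hourly systolic means (mmHg) with a cosine day-night profile.
hours = np.arange(24)
baseline = 150 + 10 * np.cos(2 * np.pi * (hours - 14) / 24)

# Drug A: near-uniform ~15 mmHg reduction around the clock.
near_uniform = baseline - (15 + 2 * np.cos(2 * np.pi * hours / 24))
# Drug B: short-acting, 15 mmHg for the first 8 h, then only 3 mmHg.
short_acting = baseline - np.where(hours < 8, 15.0, 3.0)

# The homogeneous drug scores markedly higher on both indices.
print(round(smoothness_index(baseline, near_uniform), 2),
      round(smoothness_index(baseline, short_acting), 2))
print(round(tovi(baseline, near_uniform), 2),
      round(tovi(baseline, short_acting), 2))
```

The sketch reproduces the qualitative behavior described in the text: a drug whose effect is distributed homogeneously over the 24 h yields a higher smoothness index (and, under this reading, a higher TOVI) than a short-acting drug with the same peak effect.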
Thus, TOVI appears to be at least as useful as the smoothness index, yet the clinical relevance of treatment-induced effects on these indices in terms of clinical outcomes remains to be proved.

DRUG EFFECTS ON BLOOD PRESSURE VARIABILITY AND CARDIOVASCULAR RISK

In recent years, there has been increasing interest in the effect of antihypertensive drugs on BPV and its independent impact on cardiovascular event prevention. Several outcome trials suggested that calcium channel blockers (CCBs) are superior to other drug classes in reducing BPV, which might independently contribute to more effective cardiovascular protection. In the ASCOT, clinic and ambulatory BPV were higher in the β-blocker-treated compared with the CCB-treated group, independently of the drugs' effects on mean BP. Interestingly, the cardiovascular event rates were lower in the CCB group, which could be partially attributed to the drug effects on BPV. In the MRC trial, visit-to-visit BPV was increased in the β-blocker group compared with the diuretic and placebo groups, and these BPV trends in the β-blocker group were associated with stroke risk. These outcome data are in line with a meta-analysis of 389 trials that showed BPV to be reduced by CCBs and diuretics, and increased by β-blockers, angiotensin-converting enzyme inhibitors and angiotensin receptor blockers. In 21 trials with outcome data, these effects contributed to differences in stroke risk, independently of the effects on mean BP. Interestingly, the aforementioned opposing effects of antihypertensive drug classes on BPV were dose-dependent and persisted when the drugs were used in combinations. The authors suggested that high-dose CCB monotherapy or combination with other drugs might be particularly effective for stroke prevention. A recent systematic review of home BPV trials also suggested favorable effects of CCBs, but not of β-blockers or angiotensin receptor blockers, on BPV.
The abovementioned trials had different designs and different methodologies for BPV evaluation, yet the message is consistently in favor of CCBs. Thus, an important new chapter has opened, and there is an urgent need to establish the optimal methodology and indices for quantifying drug treatment effects on BPV.

INDICES TO EVALUATE DRUG EFFECTS ON BLOOD PRESSURE VARIABILITY

A mathematical approach to selecting the optimal BPV index, one that accurately reflects the dispersion of BP values around the average while being unaffected by the latter, is reasonable and was the rationale for introducing novel indices such as the ARV and VIM. On the other hand, indices specifically developed to assess drug-induced BPV changes (e.g. smoothness index and TOVI) are clearly attractive. However, the ultimate test for a BPV index is whether its change with treatment can independently influence the risk of cardiovascular events. The outcome trials that showed CCBs to have favorable effects on BPV compared with other drugs, with subsequently enhanced cardiovascular protection, used various methods for BPV quantification. In the ASCOT and the MRC trial, Rothwell et al. tested the SD, coefficient of variation, VIM, ARV and RSD of clinic BP measurements in terms of intraindividual visit-to-visit BPV, as well as the SD, coefficient of variation and ARV of ambulatory BP, and concluded that mainly the effects on systolic visit-to-visit BPV, and partly those on systolic ambulatory BPV, could account for the reduced event rates in the CCB group. A meta-analysis of 21 outcome trials applied the variance ratio, an expression of interindividual visit-to-visit variability, to differentiate the BPV changes with different drugs. Cross-sectional and short-term studies have also demonstrated the ability of several BPV indices to differentiate the effects of different drugs.
Some studies obtained home BP readings and used the SD or coefficient of variation of morning BP, or the SD of the daily mean or of daily morning or evening BP [13,29]. Other studies performed ambulatory BP monitoring and used the SD of systolic daytime, night-time and 24-h BP. Ambulatory BP-specific indices, such as the TPR, smoothness index and TOVI, have been explored with successful results in a meta-analysis of 11 trials, in the analysis of 10 trials in the current issue of the journal and in other trials. The prognostic relevance of the latter indices requires verification in outcome trials. The optimal method and index for assessing BPV should combine technical and clinical features. Evidence should be provided that the BPV index: is easily measurable, so as to be applicable in clinical practice; is reproducible; has defined normalcy and intervention thresholds; contributes independently to cardiovascular risk; is modifiable by treatment; and that patients’ prognosis improves when additional treatment targets are set for BPV beyond those for average BP. At present, evidence for most of these questions is missing, and therefore BPV remains a challenging research issue deserving thorough investigation. Future prospective trials should perform head-to-head comparisons of different BPV indices and test BPV as an additional target of treatment, aiming at more efficient prevention of organ damage and cardiovascular disease.

Conflicts of interest

There are no conflicts of interest.

1. Asayama K, Kikuya M, Schutte R, Thijs L, Hosaka M, Satoh M, et al. Home blood pressure variability as cardiovascular risk factor in the population of Ohasama. Hypertension
2. Rothwell PM, Howard SC, Dolan E, O’Brien E, Dobson JE, Dahlöf B, et al. ASCOT-BPLA and MRC Trial Investigators. Effects of beta blockers and calcium-channel blockers on within-individual variability in blood pressure and risk of stroke. Lancet Neurol
3. Rothwell PM, Howard SC, Dolan E, O’Brien E, Dobson JE, Dahlöf B, et al.
Prognostic significance of visit-to-visit variability, maximum systolic blood pressure, and episodic hypertension. Lancet
4. Bilo G, Giglio A, Styczkiewicz K, Caldara G, Maronati A, Kawecka-Jaszcz K, et al. A new method for assessing 24-h blood pressure variability after excluding the contribution of nocturnal blood pressure fall. J Hypertens
5. Zakopoulos NA, Tsivgoulis G, Barlas G, Papamichael C, Spengos K, Manios E, et al. Time rate of blood pressure variation is associated with increased common carotid artery intima-media thickness. Hypertension
6. Mena L, Pintos S, Queipo NV, Aizpúrua JA, Maestre G, Sulbarán T. A reliable index for the prognostic significance of blood pressure variability. J Hypertens
7. Sega R, Corrao G, Bombelli M, Beltrame L, Facchetti R, Grassi G, et al. Blood pressure variability and organ damage in a general population: results from the PAMELA study. Hypertension
8. Parati G, Omboni S, Rizzoni D, Agabiti-Rosei E, Mancia G. The smoothness index: a new, reproducible and clinically relevant measure of the homogeneity of the blood pressure reduction with treatment for hypertension. J Hypertens
9. Parati G, Dolan E, Ley L, Schumacher H. Impact of antihypertensive combination and monotreatments on blood pressure variability: assessment by old and new indices. Data from a large ambulatory blood pressure monitoring database. J Hypertens
10. Stauss HM. Identification of blood pressure control mechanisms by power spectral analysis. Clin Exp Pharmacol Physiol
11. Parati G, Ochoa JE, Lombardi C, Bilo G. Assessment and management of blood-pressure variability. Nat Rev Cardiol
12. Stergiou GS, Nasothimiou EG. Home monitoring is the optimal method for assessing blood pressure variability. Hypertens Res
13. Stergiou GS, Ntineri A, Kollias A, Ohkubo T, Imai Y, Parati G. Blood pressure variability assessed by home measurements: a systematic review. Hypertens Res 2014; [Epub ahead of print].
14.
Mancia G, Bombelli M, Facchetti R, Madotto F, Corrao G, Trevano FQ, et al. Long-term prognostic value of blood pressure variability in the general population: results of the Pressioni Arteriose Monitorate e Loro Associazioni Study. Hypertension 15. Hansen TW, Thijs L, Li Y, Boggia J, Kikuya M, Björklund-Bodegård K, et al. International Database on Ambulatory Blood Pressure in Relation to Cardiovascular Outcomes Investigators Prognostic value of reading-to-reading blood pressure variability over 24 h in 8938 subjects from 11 populations. Hypertension 16. Muntner P, Shimbo D, Tonelli M, Reynolds K, Arnett DK, Oparil S. The relationship between visit-to-visit variability in systolic blood pressure and all-cause mortality in the general population: findings from III NHANES, 1988 to 1994. Hypertension 17. Johansson JK, Niiranen TJ, Puukka PJ, Jula AM. Prognostic value of the variability in home-measured blood pressure and heart rate: the Finn-Home Study. Hypertension 18. Stergiou GS, Parati G. How to best assess blood pressure? The ongoing debate on the clinical value of blood pressure average and variability. Hypertension 19. di Rienzo M, Grassi G, Pedotti A, Mancia G. Continuous vs intermittent blood pressure measurements in estimating 24-h average blood pressure. Hypertension 20. Manios E, Tsagalis G, Tsivgoulis G, Barlas G, Koroboki E, Michas F, et al. Time rate of blood pressure variation is associated with impaired renal function in hypertensive patients. J Hypertens 21. Pierdomenico SD, Di Nicola M, Esposito AL, Di Mascio R, Ballone E, Lapenna D, Cuccurullo F. Prognostic value of different indices of blood pressure variability in hypertensive patients. Am J Hypertens 22. Mancia G, Facchetti R, Parati G, Zanchetti A. Visit-to-visit blood pressure variability in the European Lacidipine Study on Atherosclerosis: methodological aspects and effects of antihypertensive treatment. J Hypertens 23. Zanchetti A. 
Wars, war games, and dead bodies on the battlefield: variations on the theme of blood pressure variability. Stroke 24. Omboni S, Parati G, Mancia G. The trough:peak ratio and the smoothness index in the evaluation of control of 24 h blood pressure by treatment in hypertension. Blood Press Monit 25. Rizzoni D, Muiesan ML, Salvetti M, Castellano M, Bettoni G, Monteduro C, et al. The smoothness index, but not the trough-to-peak ratio predicts changes in carotid artery wall thickness during antihypertensive treatment. J Hypertens 26. Webb AJ, Fischer U, Mehta Z, Rothwell PM. Effects of antihypertensive-drug class on interindividual variation in blood pressure and risk of stroke: a systematic review and meta-analysis. Lancet 27. Webb AJ, Rothwell PM. Effect of dose and combination of antihypertensives on interindividual blood pressure variability: a systematic review. Stroke 28. Ishikura K, Obara T, Kato T, Kikuya M, Shibamiya T, Shinki T, et al. J-HOME-Morning Study Group Associations between day-by-day variability in blood pressure measured at home and antihypertensive drugs: the J-HOME-Morning study. Clin Exp Hypertens 29. Matsui Y, O’Rourke MF, Hoshide S, Ishikawa J, Shimada K, Kario K. Combined effect of angiotensin II receptor blocker and either a calcium channel blocker or diuretic on day-by-day variability of home blood pressure: the Japan Combined Treatment With Olmesartan and a Calcium-Channel Blocker Versus Olmesartan and Diuretics Randomized Efficacy Study. Hypertension 30. Zhang Y, Agnoletti D, Safar ME, Blacher J. Effect of antihypertensive agents on blood pressure variability: the Natrilix SR versus candesartan and amlodipine in the reduction of systolic blood pressure in hypertensive patients (X-CELLENT) study. Hypertension 31. Parati G, Schumacher H, Bilo G, Mancia G. Evaluating 24-h antihypertensive efficacy by the smoothness index: a meta-analysis of an ambulatory blood pressure monitoring database. 
J Hypertens © 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins 32. Ulusoy S, Ozkan G, Konca C, Kaynar K. A comparison of the effects of fixed dose vs. single-agent combinations on 24-h blood pressure variability. Hypertens Res
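As a concrete illustration of the simpler indices discussed in the text (SD, coefficient of variation, and the smoothness index), here is a minimal Python sketch. The formulas follow their usual definitions; the readings below are hypothetical examples, not data from any of the cited studies.

```python
import statistics as st

def sd(readings):
    """Standard deviation of a series of BP readings (mmHg)."""
    return st.stdev(readings)

def cv(readings):
    """Coefficient of variation: SD normalized by the mean, in percent."""
    return 100 * st.stdev(readings) / st.mean(readings)

def smoothness_index(hourly_reductions):
    """Smoothness index: mean of the hourly treatment-induced BP
    reductions divided by their SD (higher = more homogeneous
    BP lowering over the 24 h)."""
    return st.mean(hourly_reductions) / st.stdev(hourly_reductions)

# Hypothetical morning home systolic BP readings over one week (mmHg)
morning_sbp = [142, 138, 151, 145, 139, 148, 144]
print(sd(morning_sbp))
print(cv(morning_sbp))
```

Note that SD and CV quantify the spread of the readings themselves, while the smoothness index quantifies how evenly a treatment effect is distributed across the day, which is why the text treats them as answers to different clinical questions.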
Preventing Skin Problems from Working with Portland Cement

This guidance document is advisory in nature, informational in content, and is intended to assist employers in providing a safe and healthful workplace. The document does not serve as a new standard or regulation. It creates no new legal obligations.

Portland cement is a generic term used to describe a variety of building materials valued for their strong adhesive properties when mixed with water. Employees who work with portland cement are at risk of developing skin problems, ranging from mild and brief to severe and chronic. Wet portland cement can damage the skin because it is caustic, abrasive, and absorbs moisture. Portland cement also contains trace amounts of hexavalent chromium [Cr(VI)], a toxin harmful to the skin. Dry portland cement is less hazardous to the skin because it is not as caustic as wet cement.

The purpose of this document is to make employers and employees aware of the skin problems associated with exposure to portland cement; to note the OSHA standards that apply to work with portland cement; and to provide guidance on how to prevent cement-related skin problems. Measures to protect employees from inhalation and eye hazards associated with exposure to portland cement are also noted.

Who is at risk

Any employee who has skin contact with wet portland cement has the potential to develop cement-related skin problems. Portland cement is an ingredient in a wide range of common building materials.

Skin problems caused by exposure to portland cement

Wet portland cement can cause caustic burns, sometimes referred to as cement burns. Cement burns may result in blisters, dead or hardened skin, or black or green skin. In severe cases, these burns may extend to the bone and cause disfiguring scars or disability. Employees cannot rely on pain or discomfort to alert them to cement burns because cement burns may not cause immediate pain or discomfort.
By the time an employee becomes aware of a cement burn, much damage has already been done. Cement burns can get worse even after skin contact with cement has ended. Any employee experiencing a cement burn is advised to see a health care professional immediately. Skin contact with wet portland cement can also cause inflammation of the skin, referred to as dermatitis. Signs and symptoms of dermatitis can include itching, redness, swelling, blisters, scaling, and other changes in the normal condition of the skin. Contact with wet portland cement can cause a non-allergic form of dermatitis (called irritant contact dermatitis) which is related to the caustic, abrasive, and drying properties of portland cement. In addition, Cr(VI) can cause an allergic form of dermatitis (allergic contact dermatitis, or ACD) in sensitized employees who work with wet portland cement. When an employee is sensitized, that person's immune system overreacts to small amounts of Cr(VI), which can lead to severe inflammatory reactions upon subsequent exposures. Sensitization may result from a single Cr(VI) exposure, from repeated exposures over the course of months or years, or it may not occur at all. After an employee becomes sensitized, brief skin contact with very small amounts of Cr(VI) can trigger ACD. ACD is long-lasting and employees can remain sensitized to Cr(VI) years after their exposure to portland cement has ended. Medical tests (e.g., skin patch tests) are available that can confirm whether an employee has become dermally sensitized to Cr(VI). Employees who work with wet portland cement and experience skin problems, including seemingly minor ones, are advised to see a health care professional for evaluation and treatment. In cement-related dermatitis, early diagnosis and treatment can help prevent chronic skin problems. SEE A HEALTH CARE PROFESSIONAL IF YOU WORK WITH WET PORTLAND CEMENT AND HAVE SKIN PROBLEMS!! 
OSHA standards applicable to working with portland cement

Several OSHA standards require employers to take steps to protect employees from hazards associated with exposure to portland cement. These standards include requirements for:

Personal Protective Equipment (29 CFR 1926 Subpart E for construction; 29 CFR 1910 Subpart I for general industry; 29 CFR 1915 Subpart I for shipyards)

OSHA's personal protective equipment (PPE) standards require that PPE be provided, used, and maintained in a sanitary and reliable condition whenever it is necessary to protect employees from injury or impairment. The employer must provide PPE such as boots and gloves as necessary and appropriate for jobs involving exposure to portland cement and ensure these items are maintained in a sanitary and reliable condition when not in use. Employees must be able to clean or exchange PPE if it becomes ineffective or contaminated on the inside with portland cement while in use. In addition, employers are required to provide PPE at no cost to their employees, with limited exceptions (1910.132(h)).

Sanitation (29 CFR 1926.51 for construction; 29 CFR 1910.141 for general industry; 29 CFR 1915.97 for shipyards)

Construction employers must make washing facilities available for employees exposed to portland cement. Washing facilities must provide clean water, non-alkaline soap, and clean towels. Such facilities must be readily accessible to exposed employees and adequate for the number of employees exposed. The sanitation requirements for general industry and shipyards are similar to those for construction.

Hazard Communication (29 CFR 1926.59 for construction; 29 CFR 1910.1200 for general industry; 29 CFR 1915.1200 for shipyards) and Safety Training (29 CFR 1926.21 for construction)

The Hazard Communication standard requires that manufacturers and importers provide information on material safety data sheets (MSDSs) and labels about the hazards of portland cement.
Employers must make these MSDSs and labels available to employees. The Hazard Communication and Safety Training standards also require employers to provide training to communicate the hazards of exposure to portland cement to their employees. This training must address:

- the hazards associated with exposure to portland cement, including hazards associated with the cement's Cr(VI) content;
- preventive measures, including proper use and care of PPE and the importance of proper hygiene practices; and
- employee access to hygiene facilities, PPE, and information (including MSDSs).

Employers subject to OSHA recordkeeping requirements must inform employees of how to report work-related injuries and illnesses and record all new cases of work-related injury and illness (including cement burns and cases of dermatitis) that result in days away from work, restricted work or transfer to another job, medical treatment beyond first aid, or are otherwise determined to be a significant injury or illness by a physician or other licensed health care professional.

Permissible Exposure Limit (PEL) (29 CFR 1926.55 for construction; 29 CFR 1910.1000 for general industry; 29 CFR 1915.1000 for shipyards)

OSHA has established a permissible exposure limit to address the inhalation hazards of working with dry portland cement. Employers must limit airborne exposure to portland cement to 15 milligrams per cubic meter (mg/m3) of air for total dust and 5 mg/m3 for respirable dust. Because the Cr(VI) content in portland cement is so low, it is anticipated that by meeting the permissible exposure limit (PEL) of 15 mg/m3 for portland cement, employers will also meet the Cr(VI) PEL and action level of 5 and 2.5 micrograms per cubic meter (μg/m3), respectively (see 1926.1126).

Preventing cement-related skin problems

The best way to prevent cement-related skin problems is to minimize skin contact with wet portland cement.
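The dust-to-Cr(VI) reasoning behind the PEL discussion above is simple mass-fraction arithmetic, sketched below. The 2 ppm figure echoes the Danish limit mentioned in footnote 4 of this document; the 20 ppm value is a purely hypothetical higher-content cement, not a figure from this document.

```python
def airborne_crvi_ug_m3(dust_mg_m3: float, crvi_ppm: float) -> float:
    """Airborne Cr(VI) concentration (μg/m3) implied by a given total-dust
    concentration (mg/m3) and the cement's Cr(VI) mass fraction (ppm)."""
    # 1 ppm = 1e-6 mass fraction; 1 mg = 1000 μg
    return dust_mg_m3 * 1000 * crvi_ppm * 1e-6

# At the portland cement total-dust PEL of 15 mg/m3:
print(airborne_crvi_ug_m3(15, 2))    # cement containing 2 ppm Cr(VI)
print(airborne_crvi_ug_m3(15, 20))   # hypothetical 20 ppm Cr(VI) cement
```

Both results fall well below the 2.5 μg/m3 Cr(VI) action level, which is consistent with the document's statement that meeting the cement dust PEL is expected to keep airborne Cr(VI) in compliance as well.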
Compliance with OSHA's requirements for provision of PPE, washing facilities, hazard communication and safety training, along with the good skin hygiene and work practices listed below, will protect against hazardous contact with wet cement.

Good Practices for Glove Selection and Use

- Provide the proper gloves for employees who may come into contact with wet portland cement. Consult the glove supplier or the cement manufacturer's MSDS for help in choosing the proper gloves. Butyl or nitrile gloves (rather than cotton or leather gloves) are frequently recommended for caustic materials such as portland cement.
- Use only well-fitting gloves. Loose-fitting gloves let cement in. The use of gloves and clothing can make exposure worse when cement gets inside or soaks through the garment. Use glove liners for added comfort.
- Wash your hands before putting on gloves, and every time you remove them.
- Dry your hands with a clean cloth or paper towel before putting on gloves.
- Protect your arms and hands by wearing a long-sleeve shirt with the sleeves duct-taped to your gloves to prevent wet cement from getting inside the gloves.
- Follow proper procedures for removing gloves, whether reusing or disposing of them. See Table 1 for proper procedures for removing gloves.
- Clean reusable gloves after use. Before removing gloves, clean the outside by rinsing or wiping off any wet cement. Follow the manufacturer's instructions for glove cleaning. Place clean and dry gloves in a plastic storage bag and store them in a cool, dry place away from tools.
- Throw out grossly contaminated or worn-out gloves.
- Keep the inside of gloves clean and dry.
- Do not use barrier creams or "invisible gloves." These products are not effective in protecting the skin from portland cement hazards.

Table 1. Steps for safe glove removal

Good Practices for Use of Boots and Other Protective Clothing and Equipment

- Wear waterproof boots when necessary to prevent wet cement from coming into contact with your skin. It is as important to protect your legs, ankles, and feet from skin contact with wet cement as it is to protect your hands.
- Boots need to be high enough to prevent wet cement from getting inside. Tuck pants inside and wrap duct tape around the top of the boots to prevent wet cement from entering.
- Select boots that are sturdy, strong enough to resist punctures and tears, and slip resistant.
- Change protective boots if they become ineffective or contaminated on the inside with wet cement while in use.
- Change out of any work clothes that become contaminated with wet cement and keep contaminated work clothes separate from your street clothes.
- When kneeling on wet cement, use waterproof kneepads or dry kneeboards to prevent the knees from coming into contact with the cement.
- Wear proper eye protection when working with portland cement.
- Wash areas of the skin that come into contact with wet cement in clean, cool water. Use a pH-neutral or slightly acidic soap. Check with the soap supplier or manufacturer for information on the acidity and alkalinity of the soap.2
- Consider using a mildly acidic solution such as diluted vinegar or a buffering solution to neutralize caustic residues of cement on the skin.3
- Do not wash with abrasives or waterless hand cleaners, such as alcohol-based gels or citrus cleaners.
- Avoid wearing watches and rings at work, since wet cement can collect under such items.
- Do not use lanolin, petroleum jelly, or other skin-softening products. These substances can seal cement residue to the skin, increase the skin's ability to absorb contaminants, and irritate the skin. Skin-softening products also should not be used to treat cement burns.
In recent decades there have been efforts to reduce the risk of developing cement-related skin problems by lowering the Cr(VI) content of portland cement. Cr(VI) is not intentionally added to portland cement and does not serve any functional purpose. There are a variety of ways to minimize the amount of Cr(VI) in portland cement, including:

- Using slag, which is free of Cr(VI), in place of or blended with clinker, the primary source of Cr(VI) in portland cement. Slag is a by-product of the iron ore extraction process and has been used in concrete projects in the United States for over a century.
- Adding ferrous sulfate to portland cement, which may lower the Cr(VI) content of the cement. Use of ferrous sulfate has reportedly led to a decline in cases of allergic contact dermatitis in several countries (Goh et al., 1996; Avnstorp, 1989; Roto et al., 1996).4

Edwin G. Foulke, Jr.
Assistant Secretary of Labor for Occupational Safety and Health

References

Agency for Toxic Substances and Disease Registry (ATSDR); "Toxicological profile for chromium"; ATSDR Toxicological Profile, 88/10, 2000; U.S. Public Health Service, Atlanta, GA.

Avnstorp, C.; "Prevalence of cement eczema in Denmark before and since addition of ferrous sulfate to Danish cement"; Acta Dermato-Venereologica, 69(2), pp. 151-155, 1989; Stockholm.

Center to Protect Workers' Rights (CPWR) Consortium on Preventing Contact Dermatitis; A Safety and Health Practitioner's Guide to Skin Protection, 2000a; Researched, developed, and produced by FOF Communications; Available online at: http://www.elcosh.org/docs/d0400/d000458/d000458.html.
Also includes an employee safety pamphlet online at: http://www.cdc.gov/

CPWR; Save Your Skin, 2000b; Produced by FOF Communications; Available online at:

CPWR; An Employer's Guide to Skin Protection, 2000c; Researched, developed, and produced by FOF Communications; Available online at: http://www.elcosh.org/docs/d0400/d000457/d000457.html

CPWR; Save Your Skin: A 15-Minute Tool Box Session, 2000d; Produced by FOF Communications; Available online at: http://www.elcosh.org/docs/d0300/d000303/d000303.html

"Comments of Building and Construction Trades Department, AFL-CIO, in Response to OSHA's Request for Comments on Exposure to Hexavalent Chromium"; Docket H-054a, Exhibit 31-6-1, pp. 7-8, November 19, 2002. (Re: OSHA's "Occupational Exposure to Hexavalent Chromium (Cr(VI)), Request for Information"; Federal Register, 67FR54389-54394, August 22, 2002 (Exhibit 30).)

CPWR; "Nonfatal Skin Diseases and Disorders in Construction"; The Construction Chart Book, 3rd Edition, Chapter 46, September 2002; CPWR is located in Silver Spring, MD.

Scientific Committee on Toxicity, Ecotoxicity and the Environment (CSTEE); Opinion on Risks to Health from Chromium VI in Cement, June 27, 2002; European Commission, Brussels.

De Raeve, H., Vandecasteele, C., Demedts, M., Nemery, B.; "Dermal and respiratory sensitization to chromate in a cement floorer"; American Journal of Industrial Medicine, 34(2), pp. 169-76, 1998.

Goh, C.L., Gan, S.L.; "Change in cement manufacturing process, a cause for decline in chromate allergy?"; Contact Dermatitis, 34(1), pp. 51-54, 1996; Munksgaard, Denmark.

Halbert, A.R., Gebauer, K.A., and Wall, L.M.; "Prognosis of occupational chromate dermatitis"; Contact Dermatitis, 27, pp. 214-219, 1992.

Helmuth, R.A., Miller, F.M., Greening, N.R., Hognestad, E., Kosmatka, S.H., Lang, D.; "Cement"; Kirk-Othmer Encyclopedia of Chemical Technology, Volume 5, 4th edition, 1993; John Wiley & Sons, New York.
Irvine, C., Pugh, C.E., Hansen, E.J., and Rycroft, R.J.; "Cement dermatitis in underground workers during construction of the Channel Tunnel"; Occupational Medicine, 44(1), pp. 17-23, February 1994; London.

National Slag Association (NSA); National Slag Association News, Publications, and Slag Industry Publications Archives; West Lawn, PA; Available online at: http://www.nationalslag.org/

Occupational Safety and Health Administration; "Occupational Exposure to Hexavalent Chromium, Final Rule"; Federal Register, 71FR10100, February 28, 2006.

Rafnsson, V., Gunnarsdottir, H., Kiilunen, M.; "Risk of lung cancer among masons in Iceland"; Occupational and Environmental Medicine, 54(3), pp. 184-188, 1997.

Roto, P., Sainio, H., Reunala, T., Laippala, P.; "Addition of ferrous sulfate to cement and risk of chromium dermatitis among construction workers"; Contact Dermatitis, 34(1), pp. 43-50, 1996.

Sahai, D.; "Cement Hazards and Controls: Health Risks and Precautions in Using Portland Cement"; Construction Safety Magazine, 12(2), Summer 2001; Available at:

Shaw Environmental, Inc.; Industry Profile, Exposure Profile, Technological Feasibility Evaluation, and Environmental Impact for Industries Affected by a Revised OSHA Standard for Hexavalent Chromium; February 21, 2006; Shaw Environmental, Inc., 5050 Section Avenue, Cincinnati, Ohio, 45212.

Shepherd, L.; "Health in construction"; The Safety & Health Practitioner, 17(6), pp. 46-49, June 1999.

Slag Cement Association (SCA); "What is Slag Cement?"; Slag Cement; Slag Cement Association, Sugar Land, Texas; Available online at: http://www.slagcement.org

Spoo, J. and P. Elsner; "Cement burns: a review 1960-2000"; Contact Dermatitis, 45(2), pp. 68-71, August 2001.

Stern, A.H., Bagdon, R.E., Hazen, R.E., Marzulli, F.N.; "Risk assessment of the allergic dermatitis potential of environmental exposure to hexavalent chromium"; Journal of Toxicology and Environmental Health, 40(4), pp. 613-641, 1993.
Vickers, H.R., and Edwards, D.H.; "Cement burns"; Contact Dermatitis, 2, pp. 73-78, 1976.

Zachariae, C.O.C., Agner, T., and Menne, T.; "Chromium allergy in consecutive patients in a country where ferrous sulfate has been added to cement since 1981"; Contact Dermatitis, 35, pp. 83-85, 1996; Munksgaard, Denmark.

Footnotes

1 Hod carriers transport mortar, bricks, and concrete in a vee-shaped trough (called a hod) to other employees.

2 "An Employer's Guide to Skin Protection" (see CPWR, 2000c in the bibliography) contains a partial list of pH-neutral or moderately acidic liquid and bar soaps.

3 "An Employer's Guide to Skin Protection" (see CPWR, 2000c in the bibliography) contains some information on neutralizing and buffering products.

4 After Denmark required the addition of ferrous sulfate to reduce the Cr(VI) content of cement to less than 2 parts per million, studies showed a reduction in the prevalence of Cr(VI) allergy (Irvine et al., 1994). However, some U.S. cement manufacturers who have experimented with the use of ferrous sulfate have not been able to achieve significant Cr(VI) reduction. The reasons for this may be variations in the Cr(VI) content of cement and the amount of time that passes between cement manufacture and use. Time delays are an important consideration because ferrous sulfate may lose its effectiveness over time, depending on how cement is packaged and on humidity and temperature conditions during storage.
Trip to the North Pole • 360° Aerial Panoramas

For more than a hundred years there have been disputes about the conquest of the North Pole. It all began in the second half of the 18th century and reached its climax in the beginning of the 20th century, when, within one week of each other, two Americans, Frederick Albert Cook and Robert Edwin Peary, both claimed to have reached the North Pole. The first claimed to have reached it on April 21st, 1908; the second, on April 6th, 1909. But neither of them was ever able to provide conclusive evidence supporting his claim. Apparently, neither of them actually reached the North Pole, and the only reason they tried to prove otherwise was their unwillingness to concede victory to someone else.

In the 20th century, reaching the North Pole turned into something of a sport. Over a hundred years it has been conquered in all kinds of ways: by flying a hot air balloon, a dirigible, and an airplane; by taking a nuclear submarine and a nuclear icebreaker; by skydiving; and even by exploring it in "Mir" deep research vehicles. In the 1970s, 80s and even 90s the focus shifted from using technology to challenging individual strength and one's ability to overcome ever greater difficulties: in 1978 Naomi Uemura (Japan) was the first to reach the North Pole alone on dog sleds. In 1979 a team from the Soviet Union became the first ski team to reach the North Pole. In 1986 an international dog-sled team was the first to reach the Pole without air support; it was also the first time a woman was part of an international North Pole team. The same year Frenchman Jean-Louis Etienne became the first to reach the North Pole on skis alone. In 1994 Norwegian Borge Ousland was the first to reach the North Pole on skis alone and unsupported. In 1999 an international team of divers carried out a successful scuba diving exploration at 90° latitude.
The first attempt at an underwater exploration of the North Pole was made a year earlier and ended tragically with the death of the Russian diver Andrei Rozhkov.

Nowadays it is much easier to reach the North Pole. During the short Arctic summer the Russian icebreaker "50 Let Pobedy" (translated as "50 Years of Victory") makes three round trips taking tourists from Murmansk to the "top of the world". Each expedition takes two weeks; in addition to the "Earth's crown" it also introduces tourists to the Franz Josef Land Archipelago.

To be honest, I had never imagined I would be interested in traveling to the North Pole on a nuclear icebreaker. Usually I am after colorful, rich subjects that can only be depicted in complex multi-row panoramic photographs. And the North Pole could be described as "white silence" :-) As strange as it may sound, this very idea convinced me to actually do it. I thought: "It's quite a challenge! Many photographers out there can take beautiful pictures of the Grand Canyon on a sunny day, but how many of them can make at least a few decent photos of the North Pole?"

I called Oleg Gaponyuk and invited him to accompany me on this journey. He liked the idea and saw an opportunity to do something at the North Pole for the first time in history: to take aerial photo panoramas from a helicopter. However, Rosatomflot (Russian Atomic Fleet) could not change their summer cruise schedule to let Oleg squeeze a visit to the North Pole into his busy timetable. "Andrey, would you do me a favor?" he called me. "I will give you my camera and draw the outline of what I want you to shoot. All you'll have to do is push the button and take a couple of spherical panoramas."

And there I was on board the icebreaker, wiping off sweat after lifting 60 kilograms of my and Oleg's equipment, and trying to understand where the hell I was. The Russian icebreaker "50 Let Pobedy" is a huge red-and-black monstrosity and one of the most modern and powerful ships in the world.
It was built in 2007, but every line and detail tells you, "I was made in the Soviet Union." Frankly, it makes a strong impression on you, and the impression becomes even stronger once you see this ship sail through ice. It breaks meter-thick ice floes as if they were eggshells!

(Spherical panorama from the "50 Let Pobedy" icebreaker)

The most impressive thing is how fast it pushes through the ice floes. I couldn't resist and took a video, which I usually don't do. You could never forget the vibration of the vessel while it is making its way through the ice, and the scratching sound it makes; it seemed like the sensations pierced my whole body. There was a swimming pool on board located close to the engine room, so while taking a dive you felt even more "connected" to the iron monster. Quite an unusual sensation; I highly recommend it.

But let's return to the beginning of our journey. Next to us on the pier was the aircraft carrier "Admiral Kuznetsov". On board one could see a yellow dot and some action going on around it. Only at home, looking through my photos, did I realize that it was a fallen crane. And, as in the famous story, several men stood around it scratching their heads and enthusiastically giving advice to their boss. Perhaps it was a chance for the boss to shake up the drab existence at the port, because six hours later, when two motorboats towed us from the pier and we headed for the exit from the Kola Bay, the number of spectators around the crane hadn't decreased, and nothing had been done to resolve the problem.

So, after a morning sightseeing tour around Murmansk, a visit to the icebreaker "Lenin", and the hustle of going through Atomflot's strict security and finding our cabins, we all gathered in a lecture hall to meet our crew. They told us how many minutes a person can survive overboard in the icy water, and how to put on a survival suit. The next day the captain introduced his officers at the so-called Captain's cocktail party.
North Pole cruises attract a rather colorful crowd. This variety may be why the organizers sort tourists into three groups, one for each cruise the icebreaker makes during the short polar summer: a Western group (mostly Americans and Europeans), a Russian group (Russians and Europeans), and a Chinese group. Recently the number of wealthy Chinese wishing to travel to the North Pole has grown so much that they even outnumber the ubiquitous Japanese.

I got on the second tour, where one half of the passengers were Russians and Germans, and the other half Swiss, Taiwanese, French, and Austrians (roughly a dozen to eighteen people of each nationality). There were also some Americans, Canadians, British, Australians, Italians, Bulgarians, Belgians, Ukrainians, Malaysians, and Koreans.

There were a lot of retirees among the Europeans. You could see their responsible attitude towards the tour: they planned everything beforehand and were prepared for any situation. There were also many young people (judging by their appearance and behavior), some middle-aged couples, and fathers with their sons. But mostly people travel alone. Russians usually end up on a tour like this by chance, but only among Russians would you see families of four traveling together. Also, the Russians are usually the first to book the most expensive cabins; and only after they board the ship do they start asking themselves, "What am I doing here?"

So we finally went out to sea. I looked around to get acquainted with my fellow passengers and discuss expectations for the upcoming trip. It turned out that people traveling to the North Pole had stocked up on movies, books, and the like; they were under the impression that life on the icebreaker comes down to idleness and taking rare pictures of polar bears from the deck. But the reality wasn't what they expected!
Life on board was in full swing, and most of the passengers rarely had free time between the different kinds of activities: icebreaker tours, expert lectures on Arctic history and ecosystems, photography workshops, and so on, as well as the Poseidon Holiday, barbecues on the deck and on the ice, swimming in the ice hole, debarkations from "zodiac" boats, and helicopter drop-offs. And all those activities were fueled by four meals a day, Captain's Buffets, gym sessions for those who wanted to keep in shape, a pool, and a sauna. And of course the bar, with a wide assortment of drinks and live music. There was a library on the icebreaker with books and photo albums about Arctic history, geography, and nature. They also served tea, coffee, and buns around the clock.

Of course, as with any cruise, there were those who focused on the most difficult tasks, such as cleaning out the bar. But the rest of the passengers tried to participate in every activity.

No doubt all cruise activities revolved around our expedition leader, Ian Brid, a man of truly atomic energy and endless charm. He has been to more than 115 countries as an expedition leader and director on expedition cruise ships, ocean liners, and river ships. He is fluent in German, Spanish, French, Dutch, Danish, and English; in Russian, Ian knows just a few phrases. He has conquered the North Pole seven times. But his special talent was conquering his audience. He had a great sense of humor along with a willingness to help, inspire, and explain. And if only you could see how he held the traditional cruise auction! I should have filmed his performance to show my friends, like an exciting television show. Ian charged everyone around him with his energy and made the long journey pass in the blink of an eye.

It's quite a task to please an audience that paid twenty thousand dollars for the show. On our cruise it was accomplished brilliantly.
The staff was passionate about traveling and very fond of the natural beauty of the North. They were friendly and highly professional; you could feel their drive, and it was obvious that they were doing their best. Interestingly, although the cruise operator was a Russian company, most of the staff were foreigners (unlike the technical crew of the ship, which has no contact with passengers, the cruise personnel work directly with people and are recruited solely for that purpose). I also met the owner of the icebreaker, Nikolay Savelyev. According to him, there was a time when the cruise staff were Russians. They were very diligent workers but had a habit of drinking at the bar after work. There is no harm in young men doing this for a couple of days, but a long stretch of it has consequences, and even when those consequences are barely noticeable, they affect work performance. The foreigners had different priorities: they put their work first, and after finishing a shift they went to bed, understanding how important it is to be fresh and energetic the next morning to take care of the passengers and handle any issues. The cuisine on board was beyond praise. Not every restaurant can offer the exquisite dishes our chefs prepared in challenging conditions; the organizers spared no effort in putting together a very talented team. A new selection of seafood, meat, and vegetarian dishes, salads, and desserts was served daily, and there was even a dedicated pastry chef. I brought a menu back as a souvenir, which excited great interest among my female friends. I had never tried a lobster like that before: the meat was wonderfully tender, though I can't remember the name of the dish. However, a conqueror of the North Pole does not live by bread alone.
This is how we celebrated the Poseidon Holiday, as staged by our court photographer, pardon, expedition artist Rainer Ulrich. At the end of the celebration, of course, we had a barbecue on the deck. With the ice came polar bears. Truth be told, the number of polar bears you come across during a cruise varies greatly. A year earlier, during the "Chinese tour" (with Sergei Dolya, a famous blogger), there were so many polar bears that people stopped reacting to them at all; something I cannot say about our tour. Usually, at the sight of a bear, the icebreaker would slow down or stop to give passengers a chance to admire the animal and take pictures. The most beautiful sight is a bear with its cub; you can lose all sense of time watching them. Although photos from other polar expeditions show plenty of people with long-range "photo guns" ready to snap the perfect picture, I was surprised that there was nobody with serious cameras on our cruise: no telephoto lenses at all (I didn't bring mine either, but only because I had tons of other photo equipment to carry). So I concluded that the smart bears simply decided there was nobody worth posing for. Time flew by, and we finally reached the North Pole. Our captain showed miracles of maneuvering and steered the ship exactly onto 90° latitude. By my watch it happened at midnight, though cruise time differed by a two-hour offset between passengers and crew, and I had set my cameras to Moscow time. So what do you do in that situation? That's right: drink champagne, shout hurray, take pictures, and wave flags. Afterward, our captain looked for an ice floe suitable for disembarking. Each year this task becomes harder because of the catastrophic melting of the Arctic ice; the average summer temperature rises to +5°C.
Wind and currents break the huge ice fields into separate blocks that periodically freeze back together when the temperature drops. It was very foggy due to the high humidity and the air temperature hovering around 0°C; to be more precise, fog is all there is at that time of year :-) By noon the captain had finally found the right ice field for disembarking. The crew began unloading equipment for the celebration and posted guards along the perimeter in case bears appeared. I took some panoramic photos while the snow was still fresh and untouched. By the way, the meltwater in the puddles is safe to drink: it is fresh, consisting entirely of melted ice, and very clean. Some people even filled plastic bottles with it to take home as souvenirs. Following an old tradition, everyone prepared to take a "tour around the world" by walking around the "North Pole 90" marker. We even arranged with the organizers to make two circles instead of one; it would look much better photographed from above! Click the image to view a spherical panorama from the very center of the North Pole. After a group photo, people broke up into interest groups: walking along the ice track, "pulling" the icebreaker to the North Pole marker, rinsing rubber boots in fresh puddles, and, of course, photographing other people doing all of the above. As for me, I finally started shooting spherical panoramas from above the icebreaker. The photos had to be taken from the lowest possible height so a viewer could experience the "effect of personal presence," the illusion that by reaching out you could touch the ship. Naturally, the actual distance to the icebreaker has to be substantially longer than an arm's length, otherwise the ship won't fit into the frame. Most of the time I shot from about 5 meters above the ship or off to the side.
Click the image to view a spherical panorama from the "50 Let Pobedy" icebreaker. I may have been the first person to look directly into the funnel since the icebreaker was launched. At least, the brave crew members who volunteered to help me during the photo session could hardly resist the temptation to send me away with a flea in my ear when I asked them to climb to the very top. Still, they were quite pleased with the new experience. While we had fun shooting panoramas, a delicious meal was served on the ice, and the hungry passengers made short work of it. Alas, nothing was left by the time I arrived, but they told me the food was very tasty. Of course, I believed them :-) Our next entertainment was swimming in the ice hole. Usually the crew finds a large ice hole and rigs a ladder and the other necessary bits. This time they couldn't find a suitable ice hole, large or small, and could have skipped that part of the program, but this was a Russian cruise! If swimming in the ice hole was promised, the promise had to be kept. No sooner said than done: they found a crack in the ice, cleaned it up a little, and invited people to take a dip. Russian men went first, followed by the foreigners, and then the rest of the crowd. It was particularly amusing to watch the Chinese swimming in the ice hole. After the "around the world" tour at the North Pole marker and the ice-hole swim, we set off for Franz Josef Land. These islands were discovered by accident in 1872 by an Austro-Hungarian expedition sailing on the steam schooner Admiral Tegetthoff. The ship was damaged by ice and drifted into unknown territory; after exploring the land, the Austrians named it after their emperor. Russians first visited Franz Josef Land in 1901, when Vice Admiral S. O. Makarov organized an expedition to explore the southern tip of the archipelago. Before that, the British, Americans, and Norwegians had explored the new lands.
Franz Josef Land came under Russian jurisdiction in 1914, when I. I. Islyamov arrived in search of the missing Sedov expedition and hoisted the Russian flag on the islands. Click the image to view a spherical panorama of Franz Josef Land. We reached Hooker Island, one of the ten largest islands of the archipelago. It is home to the famous basalt Rubini Rock (Skala Rubini), with one of the largest seabird colonies in the world. According to Google, forty thousand birds of different species inhabit the rock. I saw seagulls and something resembling penguins (helpful Google suggests they were guillemots). Click the image to view a spherical panorama of the bird colony on Hooker Island. When I started shooting panoramas I got greedy and used my 300mm lens in the hope of a closer view of the birds, and I was punished for it. Carried away by the process, I didn't notice the icebreaker slowly drifting away from the rock. No software can stitch a panorama of several hundred images taken from a moving ship! I noticed the change in image scale only on the 14th row of my panorama. Funnily enough, back at home I actually found a way to stitch them together without any problem. Another attraction of Hooker Island lies nearby, in Tikhaya Bay. There used to be a Soviet weather station there, and some of its buildings survive to this day. The gray, almost white wood doesn't decay in the cold, and it seems as if the last expedition left only recently, even though fifty years have passed since the weather station was relocated to a neighboring island. Three crosses stand close together by the abandoned buildings, raised in memory of those who lost their lives in these lands trying to reach their destination.
It was here in Tikhaya Bay that members of the Georgy Sedov expedition spent their last winter; the expedition's mechanic, Ivan Zander, is buried under one of the crosses, while Georgy Sedov himself is buried on Rudolf Island. Click the image to view a spherical panorama of Tikhaya Bay, Hooker Island. There were people at the camp: a few unfriendly men and one woman representing unified Germany (whom we never saw). The government has allocated substantial funds to clean up the Arctic, and a work party had been sent to the archipelago for that purpose. The men told me they were going to build a museum on the site of the old weather station. The next stop was Champ Island, famous for its unique spherical stones, whose origin still has no convincing explanation. They resemble the Moeraki Boulders in New Zealand, except for their variety of sizes: from stones that fit in the palm of your hand to giants two meters in diameter. We made an exploratory run in a Zodiac but couldn't land because of the heavy fog. Rounding one of the island's capes, we saw a polar bear (the white dot in the center of this picture) trying to find a meal in one of the large bird colonies. The bear was out of luck: the nests were built high up on the cliffs, out of reach. It looked ill, and there was little chance it would survive for long. I felt sorry for it. Reading about the decline of polar bears, I had never taken it personally; seeing it in real life made me look at things in a different light. And here is a huge glacier slowly slipping into the sea. From time to time icebergs break away, not very large ones compared to the Antarctic, but not small either.
Undoubtedly, the "heart" of the icebreaker is its nuclear reactor (two reactors, to be precise), but it is impossible to look directly into that "heart": it sits deep inside the ship, connected to numerous other mechanisms and enclosed in a protective casing. All the other facilities, however, are open for exploration and make a lasting impression. In the great hall stand two generators that convert the energy of the nuclear reactors into electricity. The icebreaker's "internal organs" leave a deep impression, especially the combination of raw power and the way it is operated: an insane number of sensors, valves, and monitors at every corner that someone must be able to "read" and "turn." Somehow it reminded me of Soviet missiles built entirely from vacuum tubes. It is hard to imagine how all of it works, how much effort it requires, and where one finds people capable of operating it. And now the icebreaker crew, the staff of Poseidon Expeditions, and yours truly say "Goodbye!" Date of shooting: 30 June 2012
SPEAKER: This is a production of Cornell University. ROBERT RICHARDSON: Hello, today our subject is the propagation of electromagnetic radiation. It gives us an opportunity to use some of the most interesting 19th century lecture demonstration apparatuses in the Physics Department's collection. I'll be doing things that make lots of sparks. And in fact, one way that you'll know that electromagnetic radiation is being propagated is that sometimes there will be interference lines on the television monitor. My first experiment is going to be one very similar to that which was done by Hertz to propagate the first radio waves, or what were called Hertzian waves. I have here my transmitter, which is going to consist of a coil of wire, which is an inductor, and a capacitor, called a Leyden jar, and a source of voltage, and a spark gap. The spark gap breaks down as soon as the electric field becomes large enough to ionize the gas in the gap, so that there is a sudden current flow. And following that, there's a transient high-frequency oscillation in my circuit. Let me talk about the capacitor for a minute, because these are very interesting things that were devised in the 19th century. They are called Leyden jars. In early thinking about the storage of electrical charge, the idea that occurred to people was that it should be stored very much the way they would store tomatoes or peaches: in a jar. The container has a metal foil on the interior and another metal foil on the outside, so there is an electric field between the foil on the outside and the foil on the inside. And the dielectric medium in between is just the glass of the jar. We make contact with the inner conductor with a spring-loaded device that has an electrode on it. We make contact with the outer conductor by just letting the jar rest on a metal plate. In this circuit, then, I have one Leyden jar for my capacitor and an inductor.
The characteristic resonant frequency, then, is determined by 1 over the square root of the inductance times the capacitance; that's the angular frequency, 2 pi times f. When I turn on my electric potential, the spark gap breaks down, and I have an alternating current in my coil. And now I can pick this up in a second coil, which I have here. There are no other wires attached to this coil. When the current's flowing this way in the loop, there's going to be a magnetic field pointing that way. When the current flows this way in the loop, there's going to be a magnetic field that way. And this alternates at a rather high frequency. Over in my second loop, this alternating magnetic field (and there's also an electric field associated with it) produces an EMF in the loop. And that's picked up, and it can cause a discharge in the neon bulb that we see down here. If I rotate the plane of the loop in my second coil, which is really an antenna (you can think of this setup as something like an unusual radio station), so that it is 90 degrees away, there is no flux change in the loop, and the loop doesn't work. But the total flux change in the loop is not the only thing that determines the strength of the signal picked up by my second coil; it also has to have a resonant frequency that's close to the resonant frequency of my transmitter. For instance, if I change the dimensions of my pickup coil by just making the loop bigger, and the only thing involved were the change in magnetic flux in the second loop, you might expect an even brighter glow in my neon bulb. But that's not the case. When I increase the area of this loop, I increase the inductance, and it's no longer near the resonant frequency of my source. You'll see that when I lower this again, I can get the neon bulb to glow.
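Richardson's tuning argument (a bigger loop means more inductance, hence a lower resonant frequency) is easy to check numerically. A minimal sketch; the component values below are hypothetical, chosen only so the result lands near the roughly 1 MHz range he mentions for Hertz-style apparatus:

```python
import math

def resonant_frequency(inductance, capacitance):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)).

    Equivalently, the angular frequency omega = 2*pi*f equals
    1/sqrt(L*C), as stated in the lecture.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Hypothetical values: 25 microhenries, 1 nanofarad
f = resonant_frequency(25e-6, 1e-9)
print(f"{f / 1e6:.2f} MHz")           # about 1.01 MHz

# Quadrupling the inductance (a bigger pickup loop) halves the
# resonant frequency, detuning the receiver from the transmitter:
f_big = resonant_frequency(100e-6, 1e-9)
print(f"{f_big / 1e6:.2f} MHz")       # about 0.50 MHz
```

The square-root dependence is why even a modest change in loop size is enough to kill the neon-bulb glow in the demonstration.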
Well, this is basically the type of apparatus that Hertz discovered he could use for the propagation of waves in the frequency range around 1 megahertz, a million cycles per second. Let me turn this off, and we will go on to an apparatus that looks like one used by Dr. Frankenstein for his experiments. And that's the set of experiments that I'm going to do with another tuned circuit, but this time attached to a very strong voltage source. I have over here a transformer with enough turns in it that the voltage on the output of this transformer is 60,000 volts when I hook it up to the 110 volt mains. And now I'm going to connect that to another LC circuit that also has a spark gap. You'll see a very bright spark when this is running. And the frequency of this one is around a megahertz. All right, for my first experiment with this circuit, I have here three of these capacitors, an inductor in the circuit, and a spark gap. You will see the bright spark when it's activated. And I'll have a coil that I'll attach in the circuit, so I'll have a very strong alternating magnetic field in this loop when I activate the circuit. And we'll do several experiments with it. Why don't you go ahead and turn it on then, please. This first experiment is similar to one we've seen before. I have a loop with a light bulb in it, and I place this in front of my circuit. You'll notice that the light bulb glows. This is very similar to an experiment we demonstrated earlier for Faraday's law. The alternating magnetic field in here is picked up in my second coil, and I get an alternating electric field in the loop, which causes the light bulb to glow. The second demonstration is a little more unusual. Here I have a glass bulb with air at low pressure inside. Watch what happens when I place this within the coil. You notice a ring. Now, what happened there? The gas in the tube was ionized.
And you saw a circular ring of ions. The reason we had the breakdown in that pattern is that we had an alternating magnetic field pointing this way. That, according to Faraday's law, induces an EMF in a circle around the outside, where it's going to be strongest, and the electric field in that region was sufficiently large to cause a breakdown of the gas in that part of the glass bulb. I can cause a breakdown of the gas in the open air with another apparatus that I'm going to demonstrate next, called a Tesla coil. I'm going to construct another form of transformer, this time stepping up to an even larger voltage. I'm going to attach a coil similar to the one we used before to the same terminals, but place within it a second coil that has a much larger number of turns, so that the potential generated in the second coil is close to a million volts. And then we will do several experiments with it. I'm going to change the tuning of my circuit for coupling into this coil. Now, at the top of the coil there is a small wire, and that's the region in space where I'll have the largest electric field. When the experiment begins, you will see a lot of blue sparks coming off. That's the actual breakdown of the gas in the air, which is called corona. I will do several experiments with the corona discharge from the Tesla coil. In the first one, I will hold this glass bulb, which is just a bulb with a wire attached to it, up in the air, and I will cause the bulb to light. In this experiment, my body will be the return path that completes the circuit for the electricity in the coil. I will feel a tingle, and you might see me flinch, but it's not particularly dangerous, because at this frequency, about a megahertz, the electricity does not penetrate very deeply into my body. It just flows along the surface, and it will flow back down to ground potential along my body. Then I will also hold up several other glass bulbs.
And you will see a fluorescent light glow, and you will also see the breakdown of hydrogen gas in another tube. For this segment, I'm going to take off the microphone because of the interference from the spark noise. Our next demonstrations come from a vintage 50 years more modern than the million volt Tesla coil we just demonstrated. We will now be studying the properties of a dipole antenna. Let's consider an electric dipole. That is, we have a positive charge and a negative charge, and let's hook them up with a wire. There's a set of electric field lines that will go from the positive charge to the negative charge. Now, if I arrange to change the potential on each end of my wire so that this end becomes negative and that end becomes positive, the electric field lines that come out will reverse in direction. And that's the basic way we can transmit electromagnetic radiation with a dipole antenna. During part of the cycle, we have the top end positive and the bottom end negative, with electric field lines coming out like that. And when we reverse it, if we look out here at a certain distance, we'll see electric field lines that point up for part of the cycle and down for part of the cycle. Let's consider, then, what happens to the magnetic field at the same time. When I reverse the polarization, the electric potential on my rod, I have to change the current flowing in the rod, and I will have a maximum current flowing in the middle of the cycle. There will be a magnetic field circling around the rod. Consider this rod: if I have an alternating electric potential on the rod, there's going to be current going down the rod, and there's going to be a magnetic field circling around it. When the current's going up, the magnetic field is making a circular loop like this. So each time I have current flow in my rod, I have a magnetic field associated with it.
And it's going to be in circular loops around the rod. As I go out further and further in space, I'm going to have a magnetic field perpendicular to my electric field. During part of the cycle I'll have a magnetic field coming out, and during the other part a magnetic field going in. Let's demonstrate some of this with the apparatus here. I have a tube transmitter for my source and an antenna coupled to the source oscillator, so I have an alternating [BUZZING SOUND] electric potential placed across this antenna. And now I will use for a receiver just this rod and a light bulb. You'll notice that the light bulb glows, and at quite a distance from the source. It glows most brightly when I go closer and closer to the source, of course, because the strength of the radiated electric field decreases as I move away. Also, at a given distance from the source, I receive the maximum signal when this rod is parallel to that rod, because the electric field lines coming from the source are parallel to the rod. If I make this perpendicular to the direction of the electric field lines, the light bulb doesn't glow, even quite close to the source. On the other hand, when I turn it back parallel, I can make the bulb glow brightly enough to actually burn out. There's another aspect to this: the length of the antenna is important. If I make the antenna very much longer, I have to go quite a bit closer to the source to get the same brightness. This, once again, is a resonance phenomenon like the one we had in Hertz's experiment. Let's look, in fact, at the strength of the electric field along such an antenna rod by using an antenna that has lots of light bulbs on it.
And now you'll notice that as I go close enough to the transmitter that they're glowing brightly, the bulbs near the center glow quite brightly, and the ones on the ends barely glow at all. That's because the strength of the electric field, or the amount of current in the circuit, is largest in the center and falls off toward each end. This, in fact, is a half wavelength of the electromagnetic radiation going from here to there: I have my maximum amplitude of electric field in the middle and a node on each end. You can see qualitatively why that is the case in my antenna source, which I can show is hooked up to an alternating source by connecting this amplifier box to it. In my alternating source, during part of the cycle this end is positive and that end is negative, and then it reverses. The characteristic distance between the maximum in the positive direction and the maximum in the negative direction is roughly the dimension of my antenna. The speed of light is 3 times 10 to the 8th meters per second. And we say this distance, the dimension between the most positive and the most negative, is approximately a half wavelength, which means the wavelength here is roughly 2 meters. So you can, in fact, calculate the frequency associated with this electromagnetic wave that's being propagated. There are several other possible modes that an antenna could have. We could have an antenna with several nodes in the middle, so that there are several places with maxima and minima, if we drive it at a higher and higher frequency. My next demonstration, in fact, is one in which we do that. I have, in this case, a rod of glass with a wire wrapped around it, so it's quite a long length of wire wound on the rod, and I place this in my antenna source.
And we will test the strength of the electric field along the rod by moving a neon-filled bulb beside it. [CLICK SOUND] [BUZZING SOUND] If the electric field is strong enough, we can cause the neon to break down. Watch what happens as I move the neon bulb along the rod. Let me readjust the position of this in the source. There you have the neon breaking down at that point, which means I have a strong electric field there. As I move further down the rod, it goes off. I should come to a place later where it comes on again. So when I move down further, it goes off; it comes back on again; it glows weakly; it goes off; it comes back on again. The characteristic distance, about like this, is the distance between nodes of the radiation. In this case, that would be a half wavelength, and this would be a full wavelength of the standing wave in my antenna. Of course, it's all coiled up, but the characteristic distance is still roughly the same as this for a half wavelength. Our next demonstration is in yet another frequency range: a shorter wavelength than the ones I've just used, in the microwave range. I have here a dish antenna source with a dipole in the center, which sets up a high-frequency oscillation on this little metal rod. The characteristic wavelength is roughly something like this, on the order of 3 centimeters. Once again, you should make a calculation to estimate the frequency of this radiation. And because the dipole is pointed this way, we know the electric field polarization is in the up and down direction, and it's propagating outward from the dish source. We have a receiver here, which is a very simple device: a meter with a little diode attached to it. As the electric field alternates across the diode, it produces a current that can be detected by the meter. So we have a deflection on the meter due to the microwave radiation reaching it.
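Richardson twice invites the viewer to estimate the frequency from the wavelength. A quick sketch of those estimates using f = c / lambda; the wavelengths are the rough figures quoted in the lecture:

```python
C_LIGHT = 3.0e8  # speed of light, m/s (rounded, as in the lecture)

def frequency_from_wavelength(wavelength_m):
    """f = c / lambda for an electromagnetic wave in vacuum."""
    return C_LIGHT / wavelength_m

# Hertz-style LC transmitter near 1 MHz corresponds to lambda ~ 300 m:
f_hertz = frequency_from_wavelength(300.0)    # 1.0e6 Hz = 1 MHz

# Dipole antenna with lambda ~ 2 m (half wavelength ~ 1 m):
f_dipole = frequency_from_wavelength(2.0)     # 1.5e8 Hz = 150 MHz

# Microwave dish with lambda ~ 3 cm:
f_micro = frequency_from_wavelength(0.03)     # about 1.0e10 Hz = 10 GHz

print(f_hertz, f_dipole, f_micro)
```

The three demonstrations thus span four orders of magnitude in frequency, which is why each receiver has to be sized to its own source.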
We can do some simple experiments with the radiation. First, let's see what happens when I place a piece of metal in the path of the microwave radiation. What you should observe is that the amplitude of the electric field at this point becomes very much weaker, because metal is a very good reflector of light at microwave frequencies, just as it is at visible frequencies. I have a piece of Bakelite here, which of course is completely opaque to visible light; no visible light gets through it at all easily. Now, watch what happens when I put this in the path of the microwave radiation: very little. It is practically transparent at the microwave frequency, so if you and I had our eyes tuned to work at microwave frequencies, we could see through all sorts of walls and other things, if they behaved the way this Bakelite does. My next demonstration concerns the polarization of the radiation that comes from this source. We know that we have the dipole antenna in the up and down direction, so the electric field is in the up and down direction. Now, this object is just a disk with a bunch of parallel conducting wires in a grid. The question I want you to think about is which orientation will disturb the microwave propagation more: when I place it so that the wires are parallel to the electric field, or perpendicular to it. Let's try it this way first, with the grid wires perpendicular to the electric field lines. Well, you see that very little happens. On the other hand, when I rotate this 90 degrees (we have an experiment similar to this that you will be doing in the course), you see that the radiation is absorbed. Part of one of the experiments you'll be doing is to understand that phenomenon. But the answer is not that the radiation is polarized with the electric field in that direction.
The electric field is, in fact, in the up and down direction. I want to conclude with a vector relation between the magnetic and electric fields in a traveling wave. You'll remember in my dipole I had an electric field amplitude that was oscillating in the up and down direction. And I had, then, in the plane perpendicular to that, a magnetic field that was oscillating. That's a general result in a set of equations called Maxwell's equations. When the electric field is at a maximum, I also have a maximum in the magnetic field, which is represented by this blue arrow. There's a vector relation, in fact, between the direction of the electric field in a wave, the magnetic field in the wave, and the velocity of propagation, which is that way. It's another one of the right-hand rules. The velocity of propagation is in the direction of E cross B. So you turn E into B with the right hand, and the thumb points in the direction of propagation. So one should imagine all this radiation as being such that I have these oscillating electric and magnetic fields that are moving along at the speed of light. We want to talk now about the instruments that are used for measuring voltages and currents. There are two classes of these instruments. There's a modern solid-state set of instruments, typically with digital readouts, for voltage and current readings. And then there's another group of instruments that are based upon measurements of electromagnetic forces in an instrument called a galvanometer. And the discussion today is going to be about instruments based upon the principle of a galvanometer. And the basic idea here-- and this is the raw elements of such a galvanometer-- is that we have usually a permanent magnet, which is this weird horseshoe magnet that I have here, and a coil. When I pass a current through the coil, it produces a magnetic moment that interacts with the permanent magnet, and I get a force which can cause a deflection of my meter.
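The right-hand rule just described, with the propagation velocity along E cross B, is easy to check numerically. A small sketch; the particular axis choices below are illustrative, not taken from the demonstration:

```python
# The propagation direction of an EM wave is along E x B (the Poynting direction).
# Vectors are plain (x, y, z) tuples; no external libraries needed.

def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (0, 1, 0)       # electric field "up and down" (y axis, illustrative choice)
B = (0, 0, 1)       # magnetic field perpendicular to E (z axis)
v = cross(E, B)     # -> (1, 0, 0): the wave travels along +x
print(v)
```

Swapping the order (B cross E) flips the sign, which is why the rule insists you turn E into B, not the other way around.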
And I have current going one way. And you'll notice that the pointer goes in one direction. If I reverse the current by just changing the terminals on the battery, the torque is in the opposite direction, and I have the meter deflection in the opposite direction. So I have a magnetic moment that's proportional to the current passing through this coil, and that produces a torque when it's in this permanent magnet. And the size of the torque is proportional to the current through the loop. So the galvanometer is intrinsically an instrument that gives us a deflection that's proportional to the amount of current that goes through it. The amount of the deflection depends upon the spring constants. And if we look at some of these meters that we have over here-- for instance, this one-- you will see that there is a tight spring that looks very much like a watch spring that controls the balance in the torque. And the motion of the meter can be used for damping the meter, and so forth. All right, this one basic instrument can be used to measure a variety of currents. And it can also be used to measure potential differences. And let's look at how we modify the instrument by the addition of extra elements in order to make it an ammeter with a specified current range or a voltmeter. Suppose this instrument gave its full-scale deflection-- that is, the meter deflected full scale-- with one milliamp, and that the electrical resistance of my coil is 1 ohm. That means, without doing anything else to it, I'll get a full-scale deflection of my meter with 1 milliamp of current passing through it. And what I will understand when I write the symbol A in some sort of a circuit through which I want to pass current is that I have a device that has a resistance of 1 ohm, and there will be full-scale deflection with a 1 milliamp current passing through it. Suppose I wish to modify that in order to have the current be 1 ampere for full scale.
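The claim that the deflection is proportional to the current can be made concrete with the standard torque-on-a-coil formula, tau = N·I·A·B·sin(theta). The numbers below are purely illustrative, not values from the lecture:

```python
import math

# Torque on a current-carrying coil in a magnetic field: tau = N * I * A * B * sin(theta).
# All numeric values below are illustrative only.

def coil_torque(n_turns, current_a, area_m2, field_t, theta_rad=math.pi / 2):
    """Torque (N*m) on a coil of n_turns carrying current_a in a field of field_t tesla."""
    return n_turns * current_a * area_m2 * field_t * math.sin(theta_rad)

# Doubling the current doubles the torque, hence the spring-balanced deflection:
t1 = coil_torque(100, 1e-3, 1e-4, 0.5)
t2 = coil_torque(100, 2e-3, 1e-4, 0.5)
print(t2 / t1)  # -> 2.0
```

At equilibrium the watch spring supplies an opposing torque k·phi, so the rest angle phi is linear in the current, which is exactly what makes the galvanometer a usable meter.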
Well, the way this is done is to place a very low resistance element in parallel with the meter so that most of the current passes through this shunt-- that is the name of this element-- instead of through the meter. We just bleed a little bit of current in parallel through the meter. So I have 1 ampere going into a network that will contain the meter and my parallel resistance element. This will be my resistor R in parallel, the shunt resistance. And here I have the 1 ohm galvanometer with 1 milliamp going through it. And now, in order to calculate how much resistance I have to put in parallel, I observe that I have 1 amp here going into the system, and it wants to split, with a certain fraction of the current going through the shunt resistor and 1 milliamp going through the meter itself. Well, the current obviously has to split so that 999 milliamperes go through the resistance R. And that has to be a much smaller resistance than the resistance of the meter itself. And the size of that resistance is just proportional to the ratio of the currents. That is, 1 milliamp divided by 999 milliamps-- the current going through the upper branch-- must be equal to the resistance R divided by 1 ohm. So evidently R should be equal to-- if I say this is 10 to the minus 3 amperes, to put that in resistance-- 1.001 times 10 to the minus 3 ohms. So I can make this meter into a meter that reads 1 ampere full scale by adding a very small resistance in parallel, approximately 10 to the minus 3 ohms. Now, there is a correction that one sometimes has to worry about with meters. In the case of my ammeter here, it has a finite resistance, and it can change the amount of current that flows in the circuit when I install it. For instance, back as a milliammeter, let's see how it would change the current flowing in a simple circuit. Suppose I have a 1 volt battery, and I install my milliammeter here in series with it and 1,000 ohms.
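The shunt calculation worked through above can be sketched in general form. The function and variable names are mine; the 1 mA / 1 ohm / 1 A numbers are the lecture's example:

```python
# Shunt resistor needed to extend a galvanometer's current range.
# At the same voltage drop across both branches:
#   i_meter * r_meter = (i_full_scale - i_meter) * r_shunt

def shunt_resistance(i_meter, r_meter, i_full_scale):
    """Parallel (shunt) resistance so the meter reads i_full_scale at full deflection."""
    return i_meter * r_meter / (i_full_scale - i_meter)

# Lecture's example: a 1 mA, 1 ohm movement extended to 1 A full scale.
r = shunt_resistance(1e-3, 1.0, 1.0)
print(f"{r:.4e} ohm")  # about 1.001e-3 ohm, matching the value derived above
```

Note how the shunt shrinks as the desired range grows: a 10 A range would need a shunt ten times smaller still.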
If I have no resistance in my meter, if it's perfect, or if I just put a short circuit around it for the time being, then the current flowing in this circuit, i = v/R, would be equal to 1/1000, which is 1.000 times 10 to the minus 3 amperes, or 1 milliampere. But with a real meter, where I have a real resistance of 1 ohm, when I'm making my measurement, the total resistance in the circuit is now going to be 1,000 ohms plus 1 ohm, so that I will have 1,001 here, and then the current that will flow through the circuit will be 0.999 milliamperes. And I will have a small correction because this is not a perfect meter. An ideal ammeter is one with zero electrical resistance. Of course, we can only approach the ideal case. Suppose I wish to make this instrument into a voltmeter. Now, a voltmeter is an instrument that is used to measure the potential difference between two elements in a circuit. And we can arrange this so that some of the current in the circuit is bled off to pass through the galvanometer to cause a deflection that's proportional to the voltage. For instance, using the same meter again, suppose I wanted to measure the potential drop across a resistor. Here's a resistor R. And there's a current i flowing through it. And I want to know what the potential drop between this point and that point might be. And the way I achieve this is by placing a resistor in series with my galvanometer so that I have the desired calibration features. For instance, suppose I wish this galvanometer to have a full-scale deflection when there is a 10 volt potential difference between here and there, that is, when the potential is v = 10 volts. Once again, the current i in the circuit there would be 1 milliampere, so the resistance that we have to add is going to be determined by Ohm's law again: v = i times R. So for this example, I said that the potential was 10 volts. The current for that particular meter is 1 times 10 to the minus 3 amperes.
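Both calculations in this passage, the ammeter's loading error and the series resistance that turns the same 1 mA, 1 ohm movement into a voltmeter, can be sketched together. Function names are mine; the numbers are the lecture's examples:

```python
# Two practical calculations for a 1 mA full-scale, 1 ohm galvanometer movement.

def measured_current(v_source, r_circuit, r_meter):
    """Current that actually flows once a real ammeter is inserted in series."""
    return v_source / (r_circuit + r_meter)

def series_resistance(v_full_scale, i_meter, r_meter):
    """Extra series ("multiplier") resistance that makes the movement a voltmeter."""
    return v_full_scale / i_meter - r_meter

# Loading error: the meter's own 1 ohm shifts the reading from 1.000 mA to ~0.999 mA.
ideal = measured_current(1.0, 1000.0, 0.0)
real = measured_current(1.0, 1000.0, 1.0)

# Multiplier: 10 V full scale needs roughly 9,999 ohms in series with the movement.
r_mult = series_resistance(10.0, 1e-3, 1.0)
print(f"loading error: {(ideal - real) / ideal:.2%}, multiplier: {r_mult:.0f} ohm")
```

The loading error here is about 0.1 percent, which is why an ideal ammeter should have zero resistance, just as an ideal voltmeter should have infinite resistance.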
And the total resistance will be my extra resistance R that I add in series, plus the 1 ohm for my meter. We can solve this. So R plus 1 is equal to 10 to the 4th ohms. And now R, the extra series resistance that I need to add to my galvanometer to make it into a voltmeter, will be equal to 9,999 ohms, so that if I do that-- add this resistance in series with my meter-- it can then be used as a voltmeter. Pretend that that's in the box. And this is typical of the series resistors that were designed to be added to these delicate galvanometers. This will work perfectly well until I have to measure the potential drop across large resistors. For instance, if I had a resistor here of a million ohms and I tried to use this instrument as we've designed it here for a measurement, the current would primarily go through my meter itself, because it would have a resistance of only 10 to the 4th ohms, so that, once again, we'll have a serious correction that I have to apply when I use this instrument to measure the potential drop across large resistors. The ideal voltmeter, then, should have as large an apparent resistance as possible. To put it another way, the correction comes about because we want to drain the minimum amount of current from the circuit in making the measurement. The ideal voltmeter should have an infinite apparent resistance. The ideal ammeter should have zero resistance.

Sparks fly--literally--as CU physicist Bob Richardson lectures on the propagation of electromagnetic radiation for PHYS 101/102 (1981). Richardson is the Floyd R. Newman Professor of Physics, senior science advisor to the President and Provost, and director of the Kavli Institute at Cornell. His collaborative research with David M. Lee and Douglas D.
Osheroff led in 1971 to the discovery that helium-3, a rare isotope of helium, can be made a superfluid, that is, flow without resistance at temperatures close to absolute zero. The importance of this discovery, which has transformed research in low-temperature physics, was recognized in 1976 with the awarding of the Sir Francis Simon Memorial Prize in Low-Temperature Physics by Britain's Institute of Physics, and in 1981 with the Oliver E. Buckley Solid State Physics Prize from the American Physical Society. In 1996 Richardson, Lee, and Osheroff shared the Nobel Prize in Physics. Richardson's 30 years of teaching college physics culminated in his co-authoring of College Physics with Alan Giambattista and Betty Richardson (McGraw-Hill, 2003).
Other Membrane-Associated Isoprenoids

By some definitions, all isoprenoids from simple monoterpenes such as geraniol to complex polymers such as natural rubber should be classified as ‘lipids’. As discussed in my web page “What is a lipid?”, I believe this may go too far. Here, only those isoprenoids that have a functional role in cellular membranes are discussed, including many of the fat-soluble vitamins. Tocopherols and tocotrienols are described in a separate document. A molecule that is related to the tocopherols, plastoquinone, is found in plant chloroplasts and is produced by biosynthetic pathways analogous to those of the tocopherols. It is also related structurally to the isoprenoid alcohol solanesol. The molecule is sometimes designated 'plastoquinone-n' (or PQ-n), where 'n' is the number of isoprene units, which can vary from 6 to 9. Plastoquinone has a key role in photosynthesis, providing an electronic connection between the two photosystems and generating an electrochemical proton gradient across the membrane. This subsequently provides energy for the synthesis of adenosine triphosphate (ATP). The reduced dihydroplastoquinone (plastoquinol) that results transfers further electrons to the photosynthesis enzymes before being re-oxidised by a specific cytochrome complex. X-Ray crystallography studies of photosystem II from cyanobacteria show two molecules of plastoquinone forming two membrane-spanning branches. The ubiquinones, which are also known as coenzyme Q or mitoquinones, have obvious biosynthetic and functional relationships to plastoquinone. They have a 2,3-dimethoxy-5-methylbenzoquinone nucleus and a side chain of six to ten isoprenoid units; the human form illustrated has ten units (‘coenzyme Q10’), while that of the rat has nine. Similarly in plants, ubiquinones tend to have nine or ten isoprenoid units. In mitochondria, they are present in both the oxidized (ubiquinone) and reduced (ubiquinol) forms.
They are synthesised de novo in animal, plant and bacterial tissues, by a complex sequence of reactions with p-hydroxybenzoic acid as a primary precursor that is condensed with the polyprenyl unit via a specific transferase; this is followed by decarboxylation, hydroxylation and methylation steps, depending on the specific organism. Forms with a second chromanol ring, resembling the structures of tocopherols, are also produced (ubichromanols), but not in animal tissues. It is produced on an industrial scale by yeast fermentation. Because of their hydrophobic properties, ubiquinones are located entirely in membrane bilayers. They are essential components of the electron transport system in mitochondria, taking part in the oxidation of succinate or NADH via the cytochrome system, reactions that are coupled to ATP synthesis. In this process, coenzyme Q transfers electrons from the primary substrates to the oxidase system while simultaneously transferring protons to the outside of the mitochondrial membrane, resulting in a proton gradient across the membrane. It is reduced to ubiquinol as a consequence. Mitochondrial coenzyme Q is also implicated in the production of reactive oxygen species by a mechanism involving the formation of superoxide from ubisemiquinone radicals, and in this way is responsible for causing some of the oxidative damage behind many degenerative diseases. In this action, it is a pro-oxidant. In complete contrast in its reduced form (ubiquinol), it acts as an endogenous antioxidant, the only lipid-soluble antioxidant to be synthesised endogenously. It inhibits lipid peroxidation in biological membranes and serum low-density lipoproteins, and it may also protect mitochondrial membrane proteins and DNA against oxidative damage. Although it only has about one tenth of the antioxidant activity of vitamin E (α-tocopherol), it is able to stimulate the effects of the latter by regenerating it from its oxidized form. 
However, ubiquinones and tocopherols appear to exhibit both cooperative and competitive effects under different conditions. There are also suggestions that coenzyme Q may be involved in redox control of cell signalling and gene expression. In addition, it is a regulator of mitochondrial permeability, it is an essential cofactor for the proton transport function of uncoupling proteins, and it is required for pyrimidine nucleotide biosynthesis. Phylloquinone or 2-methyl-3-phytyl-1,4-naphthoquinone is synthesised in the chloroplasts of plants where it is a key component of the photosystem I complex and serves as an electron acceptor. In an obvious parallel to the plastoquinones (above), two molecules of phylloquinone form two membrane-spanning branches, as demonstrated by X-ray crystallography studies of photosystem I from cyanobacteria. Both phylloquinone cofactors pass on electrons to an iron-sulfur centre in the complex. The menaquinones are related bacterial products with a variable number (4 to 10) of unsaturated isoprenoid units in the tail, sometimes designated 'MK-4' to 'MK-10'. Phylloquinone is an essential component of the diet of animals and has been termed 'vitamin K1'. It must be supplied by green plant tissues or seed oils. In animal tissues, the primary role of vitamin K is to act as a cofactor specific to the vitamin K-dependent enzyme γ-glutamyl carboxylase, the function of which is the post-translational carboxylation of glutamate residues to form γ-carboxyglutamic acid in proteins, such as prothrombin. In this way, prothrombin and related proteins are activated to promote blood clotting. Vitamin K must first be converted to the reduced form, vitamin K hydroquinone, which is the actual cofactor for the enzyme, and the protein modification is driven by the oxidation of this metabolite to vitamin K 2,3-epoxide. A further enzyme system regenerates the hydroquinone form by reduction of the epoxide so that the former can be reutilized many times. 
Warfarin, the rodenticide, prevents blood clotting by interfering with vitamin K metabolism. In addition, it is now evident that vitamin K is involved in bone metabolism, vascular calcification, cell growth and apoptosis. For example, side effects of the use of anticoagulants that bind to vitamin K can be osteoporosis and an increased risk of vascular calcification. Vitamin K is essential for the biosynthesis of sphingophospholipids in the unusual bacterium Bacteroides melaninogenicus, and it also influences sphingolipid biosynthesis in brain. Similarly, vitamin K-dependent proteins are known to have important functions in the central and peripheral nervous systems. The menaquinones also have vitamin K activity and are termed 'vitamin K2', while a synthetic saturated form, which is used in animal feeds, is known as 'vitamin K3' or menadione. A deficiency in vitamin K results in inhibition of blood clotting and can lead to brain haemorrhaging in malnourished newborn infants, though this is not seen in adult humans, presumably because intestinal bacteria produce sufficient for our needs.

The term ‘vitamin A’ is used to denote retinol or all-trans-retinol and a family of biologically active retinoids derived from this. These are found only in animal tissues, where they are essential to innumerable biochemical processes. However, their biosynthetic precursors are plant carotenoids (provitamin A), of which β-carotene is the most efficient; it occurs in the green parts of plants and in seed oils. In the human diet, plant sources tend to be less important than dairy products, meat, fish oils and margarine. In the U.K., for example, all margarine must be supplemented with the same level of vitamin A (synthetic retinol or β-carotene) as is found in butter. Retinol esters in the diet are hydrolysed to retinol and free fatty acids at the intestinal brush border prior to uptake by the intestinal mucosa.
They are re-esterified before being packaged into chylomicrons for transport to the liver, where they are rapidly taken up, hydrolysed and re-esterified. Dietary carotenoids, including β-carotene, are taken up in intact form in humans, possibly facilitated by specific transport proteins. Conversion to retinoids leading ultimately to retinol esters occurs in the intestines before these and any unchanged β-carotene are also carried to the liver in chylomicrons for further metabolism. Retinol (and its ester) is the main form of the vitamin that is transported in blood, bound mainly to retinol-binding protein (with some directly from the diet in the chylomicrons and their remnants), from which it can be taken up by tissues by means of specific receptors, probably facilitated by hydrolysis to retinol by means of the enzyme lipoprotein lipase. Retinol esters (principally retinyl palmitate) are the main storage form, occurring chiefly but not exclusively in the liver, with over 90% in the form of lipid droplets in hepatic stellate cells. In addition, specialized cells in the eye store retinoids in the form of lipid droplets. These stores are mobilized when required by an initial hydrolysis to retinol. A relatively small proportion of the cellular retinoids is located in membranes. The biosynthesis of carotenoids has much in common with that of cholesterol, but this is too specialized a topic for discussion here. In animals, dietary β-carotene is subjected to oxidative cleavage, the first step of which is catalysed by a cytosolic enzyme β-C15,15'-oxygenase 1, at its centre to yield two molecules of all-trans-retinal, which is reversibly reduced to retinol and then esterified to form retinyl palmitate by transfer of fatty acids from the position sn-1 of phosphatidylcholine mainly via the action of a lecithin:retinol acyltransferase, but probably also by an acyl-CoA dependent pathway catalysed by the enzyme diacylglycerol acyltransferase 1. 
Activation of the retinol pathway involves first mobilization of the ester, followed by hydrolysis and reversible oxidation of retinol to retinal. The last is then oxidized irreversibly to retinoic acid. Both retinol and retinoic acid are precursors of a number of metabolites (retinoids), produced by various desaturation, hydroxylation and oxidation reactions, which are required for specific purposes in tissues. It has long been known that retinoids are essential for vision, and there is a good appreciation of how this works at the molecular level. Retinal rod and cone cells in the eye contain membranous vesicles that serve as light receptors. Roughly half of the proteins in these vesicles consist of the protein conjugate rhodopsin, which consists of a protein (opsin) and 11-cis-retinal. The latter is produced via all-trans-retinol and 11-cis-retinol as intermediates. When 11-cis-retinal is activated by light, the cis-double bond is isomerized nonenzymatically to the 11-trans form with a change of conformation that in turn affects the permeability of the membrane and influences calcium transport. This results in further molecular changes that culminate in the release of opsin and all-trans-retinal, which is the trigger that sets off the nerve impulse, so that the light is perceived by the brain. The all-trans-retinal is converted back to 11-cis-retinal by various enzymatic reactions, in order that the rhodopsin can be regenerated. Excess all-trans-retinal forms a Schiff base adduct with phosphatidylethanolamine, retinylidene-phosphatidylethanolamine, which can then be transported by a specific transporter from the disc membranes to the cytoplasmic space. As a side reaction, some troublesome bis-retinoid adducts can be produced. It is now realized that retinoids also have essential roles in growth and development, reproduction and resistance to infection.
They are particularly important for the function of epithelial cells in the digestive tract, lungs, nervous system, immune system, skin and bone at all stages of life. They are required for the regeneration of damaged tissues, including the heart, and they appear to have some potential as chemopreventive agents for cancer and for the treatment of skin diseases such as acne. Cirrhosis of the liver is accompanied by a massive loss of retinoids, but it is not clear whether this is a cause or a symptom. Many of the retinol metabolites function as ligands that activate specific transcription factors for particular receptors in the nucleus of the cell, and thus they control the expression of a large number of genes (>500), including those essential to the maintenance of normal cell proliferation and differentiation, to embryogenesis, to a healthy immune system, and to male and female reproduction. Retinoic acid is especially important in this context, and it is usually considered the most important retinoid in terms of function other than in the eye. It has also become evident that many of the functions of retinoids are mediated via the action of specific binding proteins, which control their metabolism in vivo by reducing the effective or free retinoid concentrations, by protecting them from unwanted chemical attack, and by presenting them to enzyme systems in an appropriate conformation. For example, a specific retinol-binding protein secreted by adipose tissue (RBP4) is involved in the development of insulin resistance and type 2 diabetes, possibly by affecting glucose utilization by muscle tissue, with obvious application to controlling obesity. Disturbances in retinoid metabolism have been implicated in diseases of the liver. 9-cis-Retinoic acid, a further metabolite, has valuable pharmaceutical properties.
Vitamin A deficiency in children and adult patients is usually accompanied by impairment of the immune system, leading to a greater susceptibility to infection and an increased mortality rate. Thus it is not always easy to distinguish between these effects and primary defects of retinoid signalling. However, one of the main effects of vitamin A deficiency in malnourished children, and seen too often in the underdeveloped world, is blindness. This is doubly tragic in that it is so easily prevented. Cleavage of β-carotene at double bonds other than that in the centre or of other carotenoids leads to the formation of β-apocarotenals and β-apocarotenones, which may exert distinctive biological activities in their own right. Retinyl-β-D-glucoside, retinyl-β-D-glucuronide, and retinoyl-β-D-glucuronide are naturally occurring and biologically active metabolites of vitamin A, which are found in fish and mammals. Indeed, the last has similar activity to all-trans-retinoic acid without any of the unwanted side effects in some circumstances. On the other hand, these water-soluble metabolites may be rapidly removed from circulation and eliminated from the body via the kidney, together with oxidized metabolites such as 4-hydroxy-, 4-oxo- and 18-hydroxy-retinoic acids. Polyisoprenoid alcohols, such as dolichols, are ubiquitous if minor components, relative to the glycerolipids, of membranes of most living organisms from bacteria to mammals. They are hydrophobic linear polymers, consisting of up to twenty isoprene residues or a hundred carbon atoms (or many more in plants especially), linked head-to-tail, with a hydroxy group at one end (α-residue) and a hydrogen atom at the other (ω-end). In dolichols (or dihydropolyprenols), the double bond in the α-residue is hydrogenated, and this distinguishes them from the polyprenols with a double bond in the α-residue. 
Polyisoprenoid alcohols are further differentiated by the geometrical configuration of the double bonds into three subgroups, i.e. di-trans-poly-cis, tri-trans-poly-cis, and all-trans. For many years, it was assumed that polyprenols were only present in bacteria and plants, especially photosynthetic tissues, while dolichols were found in mammals or yeasts, but it is now known that dolichols can also occur at low levels in bacteria and plants, while polyprenols have been detected in animal cells. Within a given species, components of one chain length may predominate, but other homologues are usually present. The chain length of the main polyisoprenoid alcohols varies from 11 isoprene units in eubacteria, to 16 or 17 in Drosophila, 15 and 16 in yeasts, 19 in hamsters and 20 in pigs and humans. In plants, the range is from 8 to 22 units, but some species of plant have an additional class of polyprenols with up to 40 units. In tissues, polyisoprenoid alcohols can be present in the free form, esterified with acetate or fatty acids, phosphorylated or monoglycosylated phosphorylated (various forms), depending on species and tissue. Polyisoprenoid alcohols per se do not form bilayers in aqueous solution, but rather a type of lamellar structure. However, they are found in most membranes, especially the plasma membrane of liver cells and the chloroplasts of plants. Dolichoic acids, i.e. related molecules with a terminal carboxyl group and containing 14–20 isoprene units, have been isolated from the substantia nigra of the human brain. However, they were barely detectable in pig brain. Biosynthesis of the basic building block of dolichols, i.e. isopentenyl diphosphate, follows either the mevalonate pathway discussed in relation to cholesterol biosynthesis elsewhere on this site, or a more recently described methylerythritol phosphate pathway, depending on the nature of the organism. 
Subsequent formation of the linear prenyl chain is accomplished by prenyl transferases that catalyse the condensation of isopentenyl diphosphate and the allylic prenyl diphosphate. The end products are polyprenyl pyrophosphates, which are dephosphorylated first to polyprenol phosphate and thence to the free alcohol. Although polyprenols and dolichols were first considered to be simply secondary metabolites, they are now known to have important biological functions. In particular, glycosylated phosphopolyisoprenoid alcohols serve as carriers of oligosaccharide units for transfer to proteins and as glycosyl donors, i.e. substrates for glycosyl transferases for the biosynthesis of glycans in a similar manner to the cytosolic sugar nucleotides. They differ from the latter in their intracellular location, with the lipid portion in the membrane of the endoplasmic reticulum and the oligosaccharide portion specifically located either on the cytosolic or lumenal face of the membrane. In eukaryotes, N-glycosylation begins on the cytoplasmic side of the endoplasmic reticulum with the transfer of carbohydrate moieties from nucleotide-activated sugar donors, such as uridine diphosphate N-acetylglucosamine, onto dolichol phosphate. Then, N-acetylglucosamine phosphate is added to give dolichol-pyrophosphate linked to N-acetylglucosamine, to which a further N-acetylglucosamine unit is added followed by five mannose units. The resulting dolichol-pyrophosphate-heptasaccharide is then flipped across the endoplasmic reticulum membrane to the luminal face with the aid of a “flippase”. Four further mannose and three glucose residues are added to the oligosaccharide chain by means of glycosyltransferases, which utilise as donors dolichol-phospho-mannose and dolichol-phospho-glucose, which are also synthesised on the cytosolic face of the membrane and flipped across to the luminal face. 
The final lipid product is a dolichol pyrophosphate-linked tetradecasaccharide, the oligosaccharide unit of which is transferred from the dolichol carrier onto specific asparagine residues on a developing polypeptide in the membrane. The carrier dolichol-pyrophosphate is dephosphorylated to dolichol-phosphate then diffuses or is flipped back across the endoplasmic reticulum to the cytoplasmic face. Most bacteria use undecaprenyl phosphate as a glycosylation agent in a similar way (next section), but the Archaea use multiple species of dolichol in their synthesis of lipid-linked oligosaccharide donors with both dolichol phosphate and pyrophosphate as carriers. Archaea of course use isoprenyl ethers linked to glycerol as major membrane lipid components. Undecaprenyl phosphate (a C55 isoprenoid), also referred to as bactoprenol, is a lipid intermediate that is essential for the biosynthesis of peptidoglycan and many other cell-wall polysaccharides, and for N-linked protein glycosylation in prokaryotes (both in gram-negative and gram-positive bacteria). It is synthesised by the addition of eight units of isopentenyl pyrophosphate to farnesyl pyrophosphate, a reaction catalysed by undecaprenyl pyrophosphate synthase, followed by the removal of a phosphate group. Undecaprenyl phosphate is required for the synthesis and transport of hydrophilic GlcNAc-MurNAc-peptide monomers across the cytoplasmic membrane to external sites for polymer formation. Undecaprenyl diphosphate-MurNAc-pentapeptide-GlcNAc, sometimes termed lipid II, is the last significant lipid intermediate in this process, and it has only recently been identified as a normal constituent in vivo of the membranes of Escherichia coli, by the application of modern mass spectrometric methods. 
This molecule must be translocated by an as yet unknown mechanism from the cytosolic to the exterior membrane of the organism, where it yields up the MurNAc-pentapeptide-GlcNAc monomer to form the complex peptidoglycan polymer that provides strength and shape to bacteria. Synthesis and transport of lipid II is now considered an important target for antibiotics. In gram-negative bacteria, undecaprenyl phosphate is also required for the biosyntheses of lipid A and of the O-antigen. There appear to be parallels with the involvement of glycosylated phosphopolyisoprenoid alcohols as carriers of oligosaccharide units for transfer to proteins and as glycosyl donors in higher organisms (see above).

7. Farnesyl Pyrophosphate and Related Compounds

Farnesyl pyrophosphate is a key intermediate in the biosynthesis of sterols such as cholesterol, and it is the donor of the farnesyl group for the isoprenylation of many proteins (see the web page on proteolipids), but it is also known to mediate various biological reactions via interaction with a specific receptor. It is synthesised by two successive phosphorylation reactions of farnesol. Presqualene diphosphate is unique among the isoprenoid phosphates in that it contains a cyclopropylcarbinyl ring. In addition to being a biosynthetic precursor of squalene, and thence of cholesterol, it is a natural anti-inflammatory agent, which functions by inhibiting the activity of phospholipase D and the generation of superoxide anions in neutrophils.

Updated May 12, 2014
By Edward S. Goldstein In 2006, John C. Mather, a senior astrophysicist and project scientist at NASA’s Goddard Space Flight Center, became the first NASA civil servant to receive the Nobel Prize in Physics. Mather was recognized, along with George F. Smoot of the University of California, for the “discovery of the black body form and anisotropy of the cosmic microwave background radiation.” This work helped validate the big-bang theory of the universe. Mather and Smoot analyzed data from NASA’s Cosmic Background Explorer (COBE) satellite, which studied the pattern of radiation from the first few instants after the universe was formed. According to the Nobel Prize committee, “The COBE project can also be regarded as the starting point for cosmology as a precision science.” In July 2007, Mather, who generously has donated his Nobel Prize winnings to scholarships for science students, discussed his career at NASA and expressed his hopes for the future of NASA astronomy. Dr. Mather, you have worked at NASA for over three‑fifths of the agency’s history, and you obviously could have worked elsewhere. What was it about NASA and the Goddard Space Flight Center that was so enriching to your career? Proud scientist - John Mather meets the press at NASA Headquarters the day in 2006 he was informed about his Nobel Prize. It seemed to me that NASA, especially Goddard, was the place where I could carry out the dreams that I had, which were to push forward an experiment that would measure the big bang radiation better than anyone had ever tried before. Therefore, it seemed like the perfect place to go. It was a sort of place where scientists and engineers could rub shoulders and where very ambitious projects could be taken on, perhaps the only place in the world where you could do that kind of project, and that’s how it turned out. 
NASA labs are the pride of the world for things that we can do, where scientists and engineers meet together and say let’s do the impossible, and we will do it. That was the appeal to me then and still is the appeal to me now. In your book, The Very First Light, you said it was important that the COBE satellite was done in‑house at NASA, that you had the scientists and the engineers working through the problems together. Yes. It was very important. I can’t imagine that we could have ever done the COBE project on contract, where we would say, “Well, make a measurement that is a thousand times better than anyone has ever done before.” It just was impossible to write a specification for something you could buy. We had to develop it with a combination of engineers and scientists who could solve problems together. What makes your colleagues at NASA special? The colleagues that I have and that we have together here at NASA are people who self‑selected. They decided they wanted to do these amazing things in space. They were willing to make those choices that said they would be here whenever it was that they had to be here to make this happen. Our space projects are very demanding. A person often has to work nights and days and weekends when the schedule requires us to finish something on time. We feel, I feel, that the entire world is watching us and waiting for us to do our thing and to get it right, and it is a tremendous opportunity. It is also a tremendous responsibility, and it has always been pretty clear that we do this together as teams and that it is the only way you could possibly do these amazing projects. It takes a huge team of people. For COBE, it was about 1,500 people working together over the course of the project’s many years. For the James Webb Space Telescope we have over 1,000 people working right now, and organizing and managing this huge team of people to do this is clearly the greatest challenge of all.
You’ve made a point in your book (The Very First Light; Basic Books, 1996) and at the Nobel Prize ceremony of saluting your colleagues and mentioning the concept of teamwork. Stockholm celebrity - Nobel Prize recipient John Mather meets University of Stockholm students after giving a science lecture during Nobel Prize Week in 2006. Yes, it has always been clear to me that it was the team that solved problems, not one individual. I just heard a quote that if you see a turtle on top of a fencepost, you know he needed help to get there. My experience from working with people is that you can have a conversation with someone or have a meeting with a group of people, and from that meeting will derive an answer to a question that no individual could have ever thought of by him or herself. Technical problems are solved that way very often, and managerial problems are solved that way. There is strength in numbers, but organizing those numbers is one of the great challenges. The 20th century was characterized by physicists who created tremendous destructive power and those who provided us incredible insight into the origin of the universe, the category you fall into. What will the 21st century harbor for physicists, especially at NASA? My crystal ball is quivering. I think that we are going to see a tremendous change over the years, not quickly, but over a long term, in what we can accomplish because we will be able to ask our computers to help us a lot more. These days, knowledge for engineering resides in the hearts and minds of individual human beings and in teamwork: we remember what we did last time when we create something together. When we want to know how somebody did it 10 or 20 or 30 years ago, we can’t find out. Very often, the work was proprietary, and the person that did it worked for some company. Our engineering history is largely lost. I would think that eventually computers can save that [history] for us if we can figure out how to do that.
I picture that maybe in 100 years, we will be able to come into the lab and the computers will say to you, “Well, I think I am ready to go to Mars. Can you get me this stuff?” and we will say, “OK, computer. We will fetch you this stuff. How much silicon do you want?” and something of that sort could happen. We just have no idea what the computers will eventually be able to do with us. Tell me about your current work on the James Webb Space Telescope and in helping NASA chart the future of space astronomy. The James Webb Space Telescope is the premier observatory for the next decade. We plan to launch it in 2013 as a follow‑on to the Hubble Space Telescope and also to the Spitzer Space Telescope. Both of them have pushed our knowledge far beyond where we ever guessed they would go, and our new telescope will be bigger, better, and more powerful, will see farther back in time, and will see into the dust clouds where stars are being born. We expect and hope that we will be able to even detect stars with planets going around them or in front of them, even pick up some signals from those planets, and learn about the possibility of how our Earth may itself have been formed. We hope to touch the whole history of our own situation, how the Earth could come to be, beginning with the primordial material, and this is a tremendously exciting thing for us, but it is still only the beginning of the questions that people have in mind. There are many huge questions that are being considered right now, and NASA is trying to decide which ones we can tackle first and with which partners. These questions range from how do black holes work; what the big bang was really like; what else can we learn about it from the residue that we have left; was Einstein really right; and are there gravity waves coming from the deaths of stars and black holes spiraling together to meet each other.
There is a tremendous range of questions that are open and that astrophysicists have, and that is not even thinking about ‘let’s go visit a planet and see if there is life here in the solar system.’ There are plenty of signs of water here in the solar system. Mars was wet. We have very clear evidence of that. We have two satellites of Jupiter that are wet with ice covering the oceans. Recently, we have discovered that a little satellite of Saturn sends out little spritzes of water, and we now see that water is or has been in at least four places besides Earth. If that is a prerequisite for life, clearly we should be going to look to see what is there. I think a truly revolutionary discovery would be that we are not alone. Even if all we find is pond scum, which is most likely in hostile places like that, still the fact that there would be life elsewhere that may have had a separate origin from ours would be truly a world‑changing discovery for our understanding of our position in history and in the universe. Well, goodness. I think if we want to see life or signs of life on planets around other stars, number one, we have to see the planets or see radiation from the planets, and we have a series of things we have conceived of to try to do this. The ones we already know about, of course, we have discovered with existing telescopes such as the Hubble Space Telescope and Spitzer Space Telescope. We are about to fly one called the Kepler Mission that will observe 100,000 stars. We will use Kepler to see if a planet happens to go in front of the star. We will follow up with bigger telescopes on the ground to see whether we can learn the detailed properties of these objects. Then after we have a list of candidate objects, the James Webb Space Telescope will surely point at them and see what we can learn. After that we have ambitions for bigger and better things. There are other ways to find planets around nearby stars, watching the motions of the stars as the planets pull on them.
There is a mission called the Space Interferometry Mission that was planned to do that, and we had to stop because we didn’t have enough money, but eventually, we will have to either do that project or find an alternative because that is a very important technique to try. When we know where there is a planet to look for, we also have at least three different concepts for how to see it directly. One is to build an almost‑perfect telescope and then block the light from the stars, so you can see the planet next to it. It is called a coronagraphic telescope. A second idea is to build a synthetic telescope with several separate telescopes that combine light in a central place that is called an interferometer, and that would operate at infrared wavelengths, and a third kind would have a combination of a telescope with a remote obscuring device that we call an occulter. Such a distant object could cast a shadow of the star on the telescope, and you could see a planet next to it. We have these three, and I think we will have to find out which one is the most feasible, but all three are, in principle, capable of showing us light, direct light from a planet like Earth around a nearby star. With enough information about such a planet, you can look for the chemistry of the atmosphere of that planet, and people have recognized that if you could find the chemical signature of three different gases ‑‑ carbon dioxide, water and oxygen ‑‑ that that combination would be unlikely unless there were photosynthetic life happening on that planet. We have thought of techniques to find the signs of life on a planet around another star… It may take us 10, 20, 30 years before we know these kinds of answers, but that is still within the time frame of people who are coming through school today. I think within the lifetimes of many of us, that we will know whether we are alone or not with respect to planets outside the solar system. 
Ever since Frank Drake came up with his ideas about searching for radio signals from distant stars, that has been an intriguing, but frustrating, experiment. What do you think about the future of that kind of search for signals? This is the Search for Extraterrestrial Intelligence, or SETI. There are quite a lot of different opinions about what is the right way to hunt for civilizations that might be sending out signals. The one that we have been trying is relatively cheap, and people around the world have their computers helping to analyze the data. Everyone can get a screen saver that will spend your computer’s spare time hunting for signals from other planets. I think we knew when we started off that the odds of success were very small, but that the consequences of discovery were truly immense. It started off as a government‑sponsored project. Eventually, it was switched over to private sponsorship, and it is still going. There are a lot of people who believe that this is very well worth pushing, even though the odds of discovery are small, just because it is so important. On the subject of government sponsorship, you wrote in your book, “There will always be scientific questions simply too difficult or of limited commercial benefit. Fortunately, there exists in the U.S. a formal competitive process for Government funding of scientific projects that values originality and new perspectives. Had such a system not been in place, COBE would have never been launched. In the U.S., only NASA can undertake certain large scientific projects such as looking back at the beginning of the universe or speculating about the future as we monitor the loss of the ozone layer. NASA has opened new windows on the cosmos with the Hubble and gamma‑ray observatories and on Earth by monitoring the size of the polar ice caps to determine whether global warming is significant.” Do you think that NASA will always have a role in these kinds of fundamental scientific inquiries? I think so.
I believe we are capable, and we may be the only agency that is capable of taking on these very large questions. Some things really require the perspective of looking at Earth from space or looking out into space from space because of the particular kinds of questions that we are asking, and so far, NASA is the only big agency that does this. Other countries in the world could take this on, but at the moment, our NASA is much bigger and more capable than others [space agencies] and enjoys very strong public support, which I think is far beyond what is available in other countries as well. I think we can be very proud of what we have accomplished. We look back in history, and we think this country was founded in part by a scientist, Ben Franklin, and a future president, Thomas Jefferson, who was very fond of science and sponsored the first national science expedition to go out and explore the new territory, the Louisiana Purchase, and they would be so proud and thrilled to see what our country has accomplished following on with science and the engineering miracles that have happened as a consequence. I think they never would have guessed [what happened], but they would still be thrilled to see what has been done. How did you get to NASA? I actually got to NASA by trying to avoid what happened to me. In graduate school, my thesis project was to measure the cosmic microwave background radiation. But it didn’t work very well, and it was very difficult, and I thought my destiny was to be a radio astronomer. As it happened, the radio astronomer that I wanted to work with was a NASA person in New York City where we had a small radio astronomy laboratory, and I went to work there [Goddard Institute for Space Studies]. 
Right then NASA announced this opportunity to propose new satellite missions, and I thought, “My thesis project would have worked better in space, so let’s talk about it.” That’s how I was drawn back into the subject of cosmology because NASA wanted to do satellite missions. I went from my postdoctoral position directly into the fire and took on the job of working on COBE. That is when I moved to Maryland, to work at the Goddard Space Flight Center. On the morning of Oct. 3, 2006, you were awoken with a phone call announcing your Nobel Physics Prize. Can you tell me about that call? It was from Per Carlson, the chair of the Swedish committee that chose the winners. Soon I was receiving phone calls from reporters and family members and after talking for an hour and a half I realized I was never going to have breakfast at this rate, and I better take the phone off the hook and just proceed with my day. Within an hour, my neighbors had decorated my house with balloons. A lot of people were watching to see this event happen, and I was sort of pretending that this might happen and it might not. I was trying to pretend this was going to be a great surprise. At a celebration at Goddard the next day you said, “This is us, this is my family here, these are the people I love.” Yes, I did. That was exactly how it felt, and it still gives me goose bumps to remember that moment because it is my family. These are the people that I have worked with for so long, and we have given so much to each other to make our dreams come true. What was it like in Sweden? It was a wonderful time. The Swedes had filled up my calendar with two or three major events every day, with a lecture or an interview or sometimes two or three of those every day. There were also banquets and parties. On the Nobel presentation day, I sat with the queen of Sweden, and I talked with her, and my wife got to meet the king and talk with him. During the Nobel ceremony period, you met your fellow laureates. 
What did you learn from each other? I learned a little bit about biology because there is an awful lot more to learn, but one of the stunning discoveries of biology that was recognized this year is called RNA interference. It turns out the genes in our bodies can be turned off by the reactions of our bodies to incoming things, and there is an even greater surprise that this can be inherited. This goes against the whole picture that we have from Mendel’s discovery of genes with planting peas back in the 19th century because we had never been able to show before, that there was some inherited genetic change, but it is now recognized in some cases that that can happen. It is not just a mixture of random genetic codes that happens. There is an actual reaction to incoming infections or various things that happen to us. That has changed a lot, and it is within just a few years, it has become a multi‑billion‑dollar industry to use this technique to help people. I was reminded about the practical nature of biology and the theoretical nature of astronomy, and nevertheless, astronomy is really an exciting thing for people because it tells us how we got here, and people are determined to know how we got here. If you drive through the countryside in this area of the country, everywhere there is a little sign that says a battle was fought here or Morse’s telegraph was tested right over here, just north of Washington, D.C., and people are very interested in our history. We fight over the proper interpretation of it, and it is very close to our hearts. Astronomy works on that part, and biologists do, too. They tell us how we got here as well, but then they also have the applied biology that we all care about so much for daily survival. When you travel through the countryside and you are out there on a moonlit night and you see the vast expanse of stars, what are you thinking in your heart? When I look at things, almost immediately I am thinking how they are made. 
When I look at a piece of plastic, I think of the carbon and the oxygen and the chemistry and the history of those atoms and how they came from stars, and I’ve just immersed myself in this for so long, that that is almost automatic for me, and I look at the stars from a dark night, and I say, “Oh, those are our ancestors out there. I wonder where we are all going together. I wish we knew how this all worked,” and it is this huge open question for me, at the same time that I am ruminating on how it all happened. Do you also feel any emotion, given the fact that you, by happenstance and through your skill and training and good genes, are alive at a time that you were able to make this discovery? It’s funny. I feel more strongly about the astonishing nature of life than I do about the fact that I did something. I am just amazed at our human organizational systems that enable this to happen. I feel somewhat personally blessed that I am here right now, but I am also amazed that starting from a couple of hundred years ago when our country started and people never guessed that you could cross the country in less than a year, and here, now our ideas will cross the nation in an instant. I am more amazed than I am taking personal credit for anything. I am just amazed at our world. But don’t you feel at least fortunate that your life has taken place at the time that you were able to accomplish your discovery? Yes, I do. I feel very fortunate that I have been part of the discovery process because it gives me great personal pleasure to discover things, and there’s something new to read about every day. I love reading the scientific news because it is always a wonderful discovery every day, and it is much more fun to me than hearing about the latest fires and disasters that are typical on the news. 
To me, science is a very inspiring and enjoyable thing, but also opens up the prospects for a completely unimaginable future, and I am very curious to see what is going to happen from here. You are supporting scholarships for science students, and you speak to science students all the time. Do you share the worry of some that American students are falling way behind? Some American students are still among the most brilliant in the world. A large number of American students do not get the opportunity that they could have to really learn and appreciate science and to exert the technical and scientific leadership that I think our country depends on for our own prosperity in the future. I am very glad when people come to this country because they feel it is the land of opportunity, and scientists and engineers come from everywhere to work here, but it makes it just slightly worrisome that our own children are not seeing all those opportunities and that actually the majority of scientists and engineers today are being born and trained in other countries. It should make us all a little bit nervous that the position of great prosperity that we currently have may not last forever if we don’t keep on doing what it takes to be supporting and attracting bright young people to do what we have to do. Can NASA have a special role in inspiring this next generation of explorers and scientists? It seems to me that NASA has a very public role in doing this because it’s an area which attracts public attention like no other. When we send a person to the moon again, when we send people up into orbit, when we discover planets around other stars, when we see the picture of the big bang itself, people can get excited about that, even if they may never understand it, and some of them will actually say this is so exciting, I want to be a part of it, and maybe they will go on to become a biologist or who knows what they are going to do.
Maybe they will just think that this is the most exciting thing, and they will vote to support it later, but all of these things are part of our leadership position in the United States, and I think NASA has a tremendous role to play in making sure that that continues. What was your reaction when Stephen Hawking commented on the COBE findings and said, “It was the discovery of the century, if not of all time.” I thought, “Well, Stephen, that is very nice of you to say that, but there have been other really exciting and important discoveries before this.” He was talking specifically about our measurement of the hot and cold spots in the big bang radiation which show us not only is the big-bang theory right, but also, it gives us the map of the seeds of the structure, the things that will eventually grow into galaxies. I was very appreciative that he liked our discovery, but I also thought, “OK, well, relativity, that was pretty important. What about Albert Einstein?” I think I would grant him priority over our discovery, to tell you the truth. You sometimes speak to church groups and you have written about this intersection of spiritual or religious interest in our creation or the creation story and what science can help inform humans about creation. What kind of questions do you get about this subject, and how do you respond to questions from people who ask you to produce a simple answer for them? That is actually an interesting question because when you talk to people about their understanding of history and the universe, everyone has an opinion, and people care deeply about this. For many, many decades, people didn’t agree with the big-bang theory. They thought something else must be true, and generations of astronomers thought it must have been the steady‑state theory. Einstein didn’t like the idea. Scientists have fought one another over what is the right story. The general public has many other opinions. 
We have our religious traditions coming from many thousands of years, and I think to myself, well, you know, if Moses had come down with tablets from the mountain that said, “And guess what? There are protons and neutrons, and they are made out of quarks,” people wouldn’t have understood what he said. So he didn’t. We are discovering what the universe is really like, and it is totally magnificent, and one can only be inspired and awestruck by what we find. I think my proper response is complete amazement and awe at the universe that we are in, and how it works is just far more complicated than humans will ever properly understand. This is where sort of a faith in how it is working comes to be important to people, and some people’s faith says all the world is falling apart, we are all going to heck, it is getting terrible here, and it is getting hot. Other people say I see the children of today, and they are going to build tomorrow, and I have faith that they are going to do the right thing. That is where I am about it. I see the brilliant young people today, and they feel to me much smarter than I feel that I was then, and I think they are going to do a good job. I meet the youngsters that are studying science. I meet kids that are scholar athletes, getting scholarships for the most astonishing things. I think the world is going to work out. We will solve these difficult problems that we have in front of us. On COBE, you had to solve many problems. Tell me about how you had to move very rapidly to fit your satellite on a new launch vehicle and to reduce your satellite’s weight by 5,000 pounds after the Challenger tragedy. That was a terrible tragedy, and what we did in our project was we didn’t stop working a minute. We knew that we had something that we had planned to do, that was really important to the country, and we just had to find a way to get it going, even if the Challenger had shown us that the space shuttle wasn’t going to take us into space. 
We started looking around. Our project managers hunted around the world for alternative launch vehicles, and eventually, we found that the Delta rockets which had been made, but mostly discontinued back in those days, could still be found, if we could find enough spare parts. We had to find the spare parts and get permission to use the old parts, and then we had to say, “OK, well, we had a 10,000‑pound design, what does it take to get it back onto the Delta rocket?” It sounds heroic, but in truth, the Delta rocket was able to take us directly to the orbit that we needed, and the space shuttle could not. That was the technical fact that made it possible to solve this problem. We took about eight months to figure out that it was possible, and then NASA said, “OK, it is possible to do it, and do it as fast as you possibly can.” We did. It was a very amazing process. People were up nights and weekends for months on end to make this thing go from impossible to possible, and it worked. Two days after your launch in November 1989 you got a phone call early in the morning telling you that a gyroscope on COBE had failed. Yes. We did have a gyroscope fail, and my first thought was, “You mean we’ve lost the mission?” It was the fear that I had. I had just gotten home at 4 in the morning, but I got back up, put clothes back on, and headed back to Goddard and found out how we were doing. As it turned out, we were lucky in our bad luck because ‑‑ and this isn’t exactly luck ‑‑ our engineers said, this is something that could have happened, and they planned for it. So this was a challenge that they had already anticipated and taken on. The spacecraft was safe, and we had enough gyros to keep on going. We were very lucky in our unluck, but we planned for that luck. Tell me what happened when you presented the COBE results to a meeting of the American Astronomical Society in the spring of 1990? 
I showed this one chart, and the entire room of astronomers, which is maybe almost 2,000 people, they stood up and they cheered, and it was not something I had expected. A little while afterwards I thought, “Well, I knew that was the right answer. How come they didn’t know? Why are they cheering?” And finally, it occurred to me: they didn’t know that was the right answer. We had had some decades of getting wrong answers about this measurement, and so the whole idea of the big-bang theory had been in some doubt, and so when we got the measurements that fit exactly on the theoretical prediction for the big bang, this tremendous sigh of relief went through the world as well, and that is what I think they were telling us, as well as it was a beautiful measurement. You mentioned that astronomy doesn’t intrinsically provide benefits to people. Yet, we would be much poorer without it. Is that a hard argument to make to people who tend to measure the cost of everything? I don’t think so, really, because it is the actual truth. This is why people care about stuff. We spend money on plenty of stuff that actually has no importance besides culture. How important is it really that our team wins throwing a little ball around the field? But it is immensely important to people, and it seems to me that something that changes our entire world view is immensely important for decades, hundreds, thousands of years, and so I am proud that we can accomplish it as we do, and I think most people that think about it see it that way in the end as well. Also, I like to remember that from astronomy have come many unexpected benefits. Who knew that this was going to happen? The Space Age began for curiosity. We and the Soviet Union launched little satellites to explore the upper atmosphere and looked for what was just above Earth in space, and we had immense surprises, and then our nation was rather upset by being beaten into space, and so we invested extraordinary efforts into recovering. 
We said science and engineering are our future. We brought up generations of young people to become scientists and engineers, and we look around at our world now, and it is filled with the results of that effort. It was something that started off because we were curious, and now our entire world is different. In the 1960s, there were scientists like Richard Leakey and Jacques Cousteau who had worldwide acclaim. Is that the case today? Do you sense that? Well, I don’t pay very much attention to who is a famous scientist, although maybe I should now. But Jacques Cousteau was one of my heroes. I think the very first book that I ever bought was by Jacques Cousteau, and so I was just thrilled with the idea that you could strap on a tank of compressed air and go swimming under water and see stuff. He was one of those early space explorers, as far as I was concerned. Before we could go into outer space, we went into the water. I don’t know how famous scientists are now. There is no Einstein right now, but Stephen Hawking, I think, comes close to tantalizing us with the mystery of what space and time are. When you went to Sweden, one of the things you wanted to mention was that it is important that 14 Nobel Prizes have been given on the subject of light. Why? I thought it was interesting that so many scientists and physicists have been studying light, which you might think, well, this is pretty simple, everybody knows about light waves, but we don’t, and light turns out to be one of the most fascinating and interesting topics in science, even though it starts off seeming to be very simple. I just thought that was interesting. Thinking of NASA’s next 50 years of space astronomy, where do you think we will be placing the telescopes? How difficult will it be to make the quantum leaps like you said, the computer that will order up the new set of instruments?
I think a lot of our telescopes are going to be going into deep space near the [gravitationally balanced] Lagrange points… But for every observatory, we are going to have to decide, “Is this something where it is important to keep it near home, so we could fix it, or put it out into deep space near the Lagrange points where it is protected from various other kinds of trouble?” Maybe over the years, we will develop the ability to go visit those things in deep space and fix them. I think the robotic revolution is continuing. It has turned out to be more difficult than people hoped, but it is still going, and Ph.D.s are being given every year for robotic studies, and we are beginning to get household robots that vacuum the floor, and I think eventually, the commercial world and the academic world will produce things that NASA wants to use in space. It is too early for us to be making a space-worthy version of a robot yet, the sort of general-purpose robot that you would have seen in science fiction stories. But robotic servicing for space missions is already possible, and we looked into it very seriously for the Hubble Space Telescope. And we almost decided to do it, and we know that it’s feasible, and with time, I think it will get easier. What about astronomy on the moon? There are a few things which astronomers recognize as special on the moon. One is a measurement of the distance to the moon, which might seem to be uninteresting, but as it turns out, very, very precise measurements are a test of relativity, and we can find out if Einstein was right or if there is a fifth kind of force in the universe besides the ones that we know about. We recognize that one is a good one. We have been doing it for some time, but we could do it a lot better. The other thing that is special on the moon is very long wavelength radio telescopes.
As it turns out, if you want to build a radio telescope that works at wavelengths longer than about 30 meters, then you can’t do it on the ground because the atmosphere of the Earth completely reflects those waves back. So you just can’t see from here. So there is a whole piece of the universe that is almost unknown, and it is radio astronomy at those longer wavelengths. Now, if you do want to put a telescope in space, it turns out this is a very noisy place to put one, because Earth sends out very intense radiation, and so does the sun. So where would you really like to go? You would really like to put your wonderful new telescope on the far side of the moon and use it for the two weeks of the month when it is also in the dark. So this is a terrible engineering problem, but a wonderful scientific opportunity. So I think that is the other big opportunity for us, for astronomy on the moon. Most of the telescopes that look like telescopes, the ones that you think of like Hubble, actually work pretty well without being on solid ground, and actually better. So that’s where we put them. You have worked on COBE, now Webb. How would you like to cap off your career? I would like at least to use the James Webb Space Telescope for observations and take on some new mystery of the early universe and try to make sense of something out there. Right now, while we are busy building Webb, I am not thinking very much about what we are going to use it for, but eventually, I want to use this wonderful tool, and I would love to make some new discovery with it.
File and directory timestamps are one of the resources forensic analysts use for determining when something happened, or in what particular order a sequence of events took place. As these timestamps usually are stored in some internal format, additional software is needed to interpret them and translate them into a format an analyst can easily understand. If there are any errors in this step, the result will clearly be less reliable than expected. My primary purpose in this article is to present a simple design of test data suitable for determining if there are errors or problems in how a particular tool performs these operations. I will also present some test results from applying the tests to different tools. For the moment, I am concerned only with NTFS file timestamps. NTFS is probably the most common source of timestamps that an analyst will have to deal with, so it is important to ensure that timestamp translation is correct. Similar tests need to be created and performed for other timestamp formats. Also, I am ignoring time zone adjustments and daylight savings time: the translation to be examined will cover Coordinated Universal Time (UTC) only. An NTFS file timestamp, according to the documentation of the ‘FILETIME’ data structure in the Windows Software Development Kit, is a “64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)”. Conversion from this internal format to a format more suitable for human interpretation is performed by the Windows system call FileTimeToSystemTime(), which extracts the year, month, day, hour, minutes, seconds and milliseconds from the timestamp data. On other platforms (e.g. Unix), or in software that is intentionally platform-independent (e.g. Perl or Java), other methods for translation are required. The documentation of FileTimeToSystemTime(), as well as practical tests, indicate that the FILETIME value to be translated must be 0x7FFFFFFFFFFFFFFF or less.
This corresponds to the time 30828-09-14 02:48:05.4775807. File timestamps are usually determined by the system clock at the time some file activity was performed. It is, though, also possible to set file timestamps to arbitrary values. On Vista and later, the system call SetFileInformationByHandle() can be used; on earlier versions of Windows, NtSetInformationFile() may be used. No special user privileges are required. These system calls have a similar limitation in that only timestamps less than or equal to 0x7FFFFFFFFFFFFFFF will be set. Additionally, the two timestamp values 0x0 and 0xFFFFFFFFFFFFFFFF are reserved to modify the operation of the system call in different ways. The reverse function, SystemTimeToFileTime(), performs the opposite conversion: translating a time expressed as year, month, day, hours, minutes, seconds, etc. into the 64-bit file timestamp. In this case, however, the span of time is restricted to years less than or equal to 30827. Before any serious testing is done, some kind of baseline requirements need to be established.

- Tests will be performed mainly by humans, not by computers. The number of test points in each case must not be so large as to overwhelm the tester. A maximum limit around 100 test points seems reasonable. Tests designed to be scored by computer would allow for more comprehensive tests, but would also need to be specially adapted to each tool being tested.
- The currently known time range (0x0 to 0x7FFFFFFFFFFFFFFF) should be supported. If the translation method does not cover the entire range, it should report out-of-range times clearly and unambiguously. That is, there must be no risk of misinterpretation, either by the analyst or by readers of any tool-produced reports. A total absence of translation is not quite acceptable on its own — it requires special information or training to interpret, and the risk of misinterpretation appears fairly high.
A single ‘?’ is better, but if there are multiple reasons why a ‘?’ may be used, additional details should be provided.
- The translation of a timestamp must be accurate, within the limits of the chosen representation. We don’t want a timestamp translated into a string to become a very different time when translated back again. The largest difference we tolerate is related to the precision of the display format: if the translation doesn’t report time to a greater precision than a second, the tolerable error is half a second (assuming rounding to the nearest second) or up to one second (assuming truncation). If the precision is milliseconds, then the tolerable error is on the corresponding order.

Test 1: Coverage

The first test is a simple coverage test: what period of time is covered by the translation? The baseline is taken to be the full period covered by the system call FileTimeToSystemTime(), i.e. from 1601-01-01 up to 30828-09-14. The first subtest checks the coverage over the entire baseline. In order to do that, and also keep the number of point tests reasonably small, each millennium is represented by a file, named after the first year of the period, the timestamps of which are set to the extreme timestamps within that millennium. For example, the period 2000-2999 is tested (very roughly, admittedly) by a single file, called ‘02000’, with timestamps representing 2000-01-01 00:00:00.0000000 and 2999-12-31 23:59:59.9999999 as the two extreme values (Tmin and Tmax for the period being tested). The second subtest makes the same type of test, only it checks each separate century in the period 1600 — 8000. (There is no particular reason for choosing 8000 as the ending year.) The third subtest makes the same type of test, only it checks each separate year in the period 1601 — 2399. In these tests, Tmin and Tmax are the starting and ending times of each single year.
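The Tmin/Tmax values used here follow directly from the FILETIME definition (100-nanosecond intervals since 1601-01-01 00:00:00 UTC). As a minimal sketch — the helper name is mine, and since Python's datetime carries only microsecond resolution, the final hundred-nanosecond digit of a Tmax value has to be added by hand:

```python
from datetime import datetime, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_filetime(dt):
    """Convert a UTC datetime to an NTFS FILETIME value
    (100-ns intervals since 1601-01-01 00:00:00 UTC)."""
    delta = dt - EPOCH_1601
    return (delta.days * 864_000_000_000
            + delta.seconds * 10_000_000
            + delta.microseconds * 10)

# Tmin/Tmax for the millennium 2000-2999. The trailing 9_999_999
# hundred-nanosecond units cannot be expressed as a datetime
# (microsecond resolution only), so the last digit is added manually.
tmin = to_filetime(datetime(2000, 1, 1, tzinfo=timezone.utc))
tmax = to_filetime(datetime(2999, 12, 31, 23, 59, 59, 999999,
                            tzinfo=timezone.utc)) + 9
```

Note that Python's datetime only reaches year 9999, so extreme values near the 30828 cut-off cannot be generated this way.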
The fourth subtest examines the behaviour of the translation function at some selected cut-off points in greater detail. These tests could easily be extended to cover the entire baseline time period, but this makes them less suitable for manual inspection: the number of points to be checked would become unmanageable for ‘manual’ testing.

Test 2: Leap Years

The translation must take leap days into account. This is a small test, though not unimportant. The tests involve checking the 14-day period ‘around’ February 28th/29th for the presence of a leap day, as well as for discontinuities. Two leap year tests are provided: ‘simple’ leap years (2004 — a year evenly divisible by 4), and ‘exceptional’ leap years (2000 — a year evenly divisible by 400). There are four non-leap tests: three for ‘normal’ non-leap years (2001, 2002, 2003) and one for an ‘exceptional’ non-leap year (1900 — a year divisible by 100 but not by 400). More extensive tests can easily be created, but again the number of required tests would surpass the limit of about 100 specified in the requirements. It is not entirely clear if leap days always are/were inserted after February 28th in the UTC calendar: if they are/were inserted after February 23rd, additional tests may be required for the case where the timestamp translation includes the day of the week. Alternatively, such tests should only be performed in timezones for which this information is known.

Tests 3: Rounding

This group of tests examines how the translation software handles limited precision. For example, assume that we have a timestamp corresponding to the time 00:00:00.6, and that it is translated into a textual form that does not provide sub-second precision. How is the .6 second handled? Is it chopped off (truncated), producing a time of ’00:00:00′? Or is it rounded upwards to the nearest second: ’00:00:01′? In the extreme case, the translated string may end up in another year (or even millennium) than the original timestamp.
Consider the timestamp 1999-12-31 23:59:59.6: will the translation say ‘1999-12-31 23:59:59’ or will it say ‘2000-01-01 00:00:00’? This is not an error in and of itself, but an analyst who does not expect this behaviour may be confused by it. If he works from an instruction to ‘look for files modified up to the end of the year’, there is a small probability that files modified at the very turn of the year may be omitted because they are presented as belonging to the following year. Whether that is a real problem or not will depend on the actual investigation, and on if and how such time limit effects are handled by the analyst. These tests are split into four subgroups, testing rounding to minutes, seconds, milliseconds and microseconds, respectively. For each group, two directories corresponding to the main unit are created, one for an even unit, the other for an odd unit. (The ‘rounding to minutes’ test uses 2001-01-01 00:00 and 00:01.) In each of these directories files are created for the full range of the test (0-60, in the case of minutes), and timestamped according to the Tmin/Tmax convention already mentioned. If the translation rounds upwards, or rounds to the nearest even or odd unit, this will be possible to identify from this test data. More complex rounding schemes may not be possible to identify.

Tests 4: Sorting

These tests are somewhat related to the rounding tests, in that they examine how the limited precision of a timestamp translation affects sorting a number of timestamps into ascending order. For example, a translation scheme that only includes minutes but not seconds, and sorts events by the translation string only, will not reliably produce a sorted order that follows the actual sequence of events. Take the two file timestamps 00:00:01 (FILE1) and 00:00:31 (FILE2). If the translation truncates timestamps to minutes, both times will be shown as ’00:00’.
If they are then sorted into ascending order by that string, the analyst cannot decide if FILE1 was timestamped before FILE2 or vice versa. And if such a sorted list appears in a report, a reader may draw the wrong conclusions from it. The tests are subdivided into sorting by seconds, milliseconds, microseconds and nanoseconds respectively. Each subtest provides 60, 100 or 10 files with timestamps arranged in four different sorting orders. The names of these files have been arranged in an additional order, to avoid the situation where files already sorted by file name are not rearranged by a sorting operation. Finally, the files are created in random order. The files are named on the following pattern: <nn>_C<nn>_A<nn>_W<nn>_M<nn>, e.g. ’01_C02_A07_W01_M66′. Each letter indicates a timestamp field (C = created, A = last accessed, W = last written, M = last modified), with <nn> indicating the particular position in the sorted sequence that timestamp is expected to appear in. The initial <nn> adds a fifth sorting order (by name), which allows the tester to ‘reset’ to a sorting order that is not related to timestamps. Each timestamp differs only in the corresponding subunit: the files in the ‘sort by seconds’ test have timestamps that are identical except for the seconds part, and the ‘sort by nanoseconds’ files differ only in the nanosecond information. (As the timestamp only accommodates 10 separate sub-microsecond values, only 10 files are provided for this test.) The test consists in sorting each set of files by each of the timestamp fields: if sorting is done by the particular subunit (second, millisecond, etc.) the corresponding part of the file name will appear in sorted order. Thus, an attempt to sort by creation time in ascending order should produce a sequence in which the C-sequence in the file names also appears in order: C00, C01, C02, … etc., and no other sequence should appear in the same ascending order.
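The failure mode these sorting tests are designed to expose can be reproduced in a few lines. This is an illustrative sketch (the file names and times are mine, not taken from the test image): a stable sort keyed on a minute-resolution display string simply preserves whatever order the records arrived in, while sorting on the underlying timestamp value recovers the true order.

```python
from datetime import datetime

# Two files one second apart, listed in the "wrong" order.
events = [
    ("FILE2", datetime(2001, 1, 1, 0, 0, 31)),
    ("FILE1", datetime(2001, 1, 1, 0, 0, 1)),
]

# Keyed on a minute-resolution string, both keys are '00:00', so the
# stable sort leaves FILE2 ahead of FILE1 -- a misleading "sorted" list.
by_string = sorted(events, key=lambda e: e[1].strftime("%H:%M"))

# Keyed on the timestamp value itself, the true order is recovered.
by_value = sorted(events, key=lambda e: e[1])
```

The same effect at millisecond or sub-microsecond resolution only changes which display precision triggers the tie.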
An implementation with limited precision in the translated string, but that sorts according to the timestamp values, will sort perfectly also when sorting by nanoseconds is tested. If the sort is by the translated string, sorting will be correct down to the smallest translated unit (typically seconds), and further attempts to sort by smaller units (milliseconds or microseconds) will not produce a correct order. If an implementation that sorts by translated string also rounds timestamps, this will have additional effects on the sorting order.

Tests 5: Special tests

In this part, additional timestamps are provided for test. Some of these can be set by the system calls, but may not have been tested by the other tests; others cannot be created by the documented system calls at all, and need to be created by other methods. The timestamp 0x0 should translate to 1601-01-01 00:00:00.0000000, but it cannot be set by any of the system calls tested. The timestamps that cannot be set by system call need to be edited by hand prior to testing: these values test how the translation mechanism copes with timestamps that produce error messages from the FileTimeToSystemTime() call. TZ & DST — Time zone and daylight saving time adjustments are closely related to timestamp translation, but are notionally performed as a second step, once the UTC translation is finished. For that reason, no such tests are included here: until it is reasonably clear that UTC translation is done correctly, there seems little point in testing additional adjustments. Leap seconds — The NTFS timestamp convention is based on UTC, but ignores leap seconds, which are included in UTC. For a very strict test that the translation mechanism does not take leap seconds into account, additional tests are required, probably on the same pattern as the tests for leap years, but at a resolution of seconds.
However, if leap seconds had been included in the translation mechanism, it would be visible in the coverage tests, where the dates from 1972 onwards would gradually drift out of synchronization (at the time of writing, 2013, the difference would be 25 seconds). Day of week — No tests of day-of-week translation are included. A Windows program that creates an NTFS structure corresponding to the tests described has been written, and used to create an NTFS image. The Special tests directory in this image has been manually altered to contain the timestamps discussed. Both the source code and the image file are (or will very shortly be) available from SourceForge as part of the ‘CompForTest’ project. It must be stressed that the tests described should not be used to ‘prove’ that some particular timestamp translation works as it should: all the test results can show is that it doesn’t work as expected. As the test image was being developed, different tools for examination of NTFS timestamps were tried out. Some of the results (such as incomplete coverage) were used to create additional tests. Below, some of the more interesting test results are described. It should be noted that there may be additional problems that affect the testing process. In one tool test (not included here), it was discovered that the tool occasionally did not report the last few files written to a directory. If this kind of problem is present also in other tools, test results may be incomplete. Notes on rounding and sorting have been added only if rounding has been detected, or if sorting is done at a different resolution than the translated timestamp.

Autopsy: 1970-01-01 00:00:01 — 2106-02-07 06:28:00. 1970-01-01 00:00:00.0000000 is translated as ‘0000-00-00 00:00:00’. Timestamps outside the specified range are translated as if they were inside the range (e.g. timestamps for some periods in 1673, 1809, 1945, 2149, 2285, etc. are translated as times in 2013).
This makes it difficult for an analyst to rely on this version of Autopsy alone for accurate time translation. In the screen dump below, note that the 1965-1969 timestamps are translated as if they were from 2032-2036.

EnCase Forensic 6.19.6: 1970-01-01 13:00 — 2038-01-19 03:14:06. 1970-01-01 00:00 — 12:00 are translated as ” (empty). The period 12:00 — 13:00 has not been investigated further. Remaining timestamps outside the specified ranges are also translated as ” (empty). The screen dump below shows the hours view of the cut-off date 1970-01-01 00:00. The file names indicate the offset from the baseline timestamps, HH+12 indicating an offset of +12 hours from 00:00. It is clear that from HH+13 onwards, translation appears to work as expected, but for the first 13 hours (00 — 12), no translation is provided, at least not for these test points.

ProDiscover Basic: 1970-01-02 — 2038, 2107 — 2174, 2242 — 2310, 2378 — 2399 (all ranges examined). Timestamps prior to 1970-01-02, and some time after 3000, are uniformly translated as 1970-01-01 00:00, making it impossible to determine the actual time for these ranges. Timestamps after 2038, and outside the stated ranges, are translated as ‘(unknown)’. Translation truncates to minutes. The following screen dump shows both the uniform translation of early timestamps as 1970-01-01, as well as the ‘(unknown)’ and the reappearance of translation in the 2300-period. (The directories have also been timestamped with the minimum and maximum times of the files placed in them.)

WinHex 16.6 SR-4: 1601-01-01 00:00:01 — 2286-01-09 23:30:11. 1601-01-01 00:00:00.0000000 and 00:00:00.0000001 are translated as ” (blank). Timestamps after 2286-01-09 23:30:11 are translated partly as ‘?’, partly as times in the specified range, the latter indicated in red. The cut-off time 30828-09-14 02:48:05 is translated as ” (blank).
Two additional tests on tools not intended primarily for forensic analysis were also performed: the Windows Explorer GUI and the PowerShell command line. Neither of these provides for additional time zone adjustment: their use will be governed by the current time configuration of the operating system. In the tests below, the computer was reset to the UTC time zone prior to testing.

PowerShell: 1601-01-01 00:00:00 — 9999-12-31 23:59:59. Timestamps outside the range are translated as blank. Sorting is by timestamp binary value. The command line used for these examinations was: Get-ChildItem path | Select-Object name,creationtime,lastwritetime for each directory that was examined. Sorting was tested by using Get-ChildItem path | Select-Object name,creationtime,lastwritetime,lastaccesstime | Sort timefield. The image below shows sorting by LastWriteTime and nanoseconds (or more exactly, tenths of microseconds). Note that the Wnn specifications in the file names appear in the correct ascending order.

Windows Explorer GUI: 1980-01-01 00:00:00 — 2107-12-31 23:59:57. 2107-12-31 23:59:58 and :59 are shown as ” (blank). Remaining timestamps outside the range are translated as ” (blank). It must be noted that the timestamp range only refers to the times shown in the GUI list. When the timestamp of an individual file is examined in the file property dialog (see below), the coverage appears to be the full range of years. Additionally, the translation on at least one system appears to be off by a few seconds, as the end of the time range shows. Additional testing is required to say if this happens also on other Windows platforms. However, when the file ‘119 – SS+59’ is examined by the Properties dialog, the translation is as expected. (A little too late for correction, I see that the date format here is in Swedish — I hope it’s clear anyway.)
Interpretation of results

In terms of coverage, none of the tools presented above is perfect: all are affected by some kind of restriction on the time period they translate correctly. The tools that come off best are, in order of the time range they support: PowerShell 1.0 (1601–9999), Windows Explorer GUI (1980–2107), and EnCase 6.19 (1970–2038). Each of these restricts translation to a subset of the full range, and shows remaining timestamps as blank. PowerShell additionally sorts by the full binary timestamp value, rather than by the time string actually shown. The Windows Explorer GUI also appears to suffer from a two-second error: the last second of a minute, as well as parts of the immediately preceding second, are translated as being in the following minute. This affects the result, but as this is not a forensic tool it has been discounted. The tools that come off worst are ProDiscover Basic and WinHex 16.6 SR-4. Each of these shows unacceptably large errors between all or some file timestamps and their translations. ProDiscover comes off only slightly better in that timestamps up to 1970 are all translated as 1970-01-01, and so can be identified as suspicious, but at the other end of the spectrum, the translation error is still approximately the same as for Autopsy: translations are more than 25000 years out of register. WinHex suffers from similar problems: while it flags several ranges of timestamps as ‘?’, it still translates many timestamps completely wrongly. It should be noted that there are later releases of both Autopsy and ProDiscover Basic that have not been tested. It should probably also be noted that additional tools have been tested, but that the results are not ‘more interesting’ than those presented here.

How to live with a non-perfect tool?
- Identify if and to what extent some particular forensic tool suffers from the limitations described above: does it have any documented or otherwise discoverable restrictions on the time period it can translate, and does it indicate out-of-range timestamps clearly and unambiguously, or does it translate more than one timestamp into the same date/time string?
- Evaluate to what extent any shortcomings can affect the result of an investigation, in general as well as in particular, and also to what extent already existing lab practices mitigate such problems.
- Devise and implement additional safeguards or mitigating actions in cases where investigations are significantly affected.

These steps could also be important to document in investigation reports. In daily practice, the range of timestamps is likely to fall within the 1970–2038 range that most tools cover correctly — the remaining problem would be if any outside timestamps appeared in the material, and the extent to which they are recognized as such and handled correctly by the analyst. The traditional advice, “always use two different tools”, turns out to be less than useful here, unless we know the strengths and weaknesses of each of the tools. If they happen to share the same timestamp range, we may not get significantly more trustworthy information from using both than we get from using only one.
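As an illustration of the "clear and unambiguous" requirement discussed earlier, here is a minimal sketch (in Python; the function name and message formats are my own, not from any of the tools tested) of a translation routine that refuses to guess: in-range values are translated at full 100-ns precision, and everything else is flagged with a ‘?’ plus the raw value, so a reader of a report can never mistake an out-of-range timestamp for a real date.

```python
from datetime import datetime, timedelta, timezone

FILETIME_MAX = 0x7FFFFFFFFFFFFFFF
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def translate_filetime(ft):
    """Translate an NTFS FILETIME to text, flagging out-of-range
    values unambiguously instead of guessing (sketch only)."""
    if not 0 <= ft <= FILETIME_MAX:
        return f"? (out of range: raw 0x{ft & 0xFFFFFFFFFFFFFFFF:016X})"
    try:
        dt = EPOCH_1601 + timedelta(microseconds=ft // 10)
    except OverflowError:
        # Python's datetime tops out at year 9999; FILETIME runs to 30828.
        return f"? (beyond year 9999: raw 0x{ft:016X})"
    return (f"{dt.year:04d}-{dt.month:02d}-{dt.day:02d} "
            f"{dt.hour:02d}:{dt.minute:02d}:{dt.second:02d}"
            f".{ft % 10_000_000:07d}")
```

Note that even this sketch cannot cover the full documented range, because of the year-9999 ceiling of the host language; the point is that it says so explicitly rather than translating wrongly.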
A company called EnChroma has built a pair of glasses that claims to restore color vision for the colorblind. Predictably, the internet has erupted with excitement. But it’s not the first instance in which a piece of technology has made this bold assertion, and the science behind color perception isn’t straightforward. We decided it was time to figure out what’s really going on. For some colorblind people, donning EnChroma lenses is nothing short of life-changing. For others, the experience is lackluster. To understand why, let’s take a deep dive into the science of color vision, some of the different forms of colorblindness, and what these glasses are actually doing.

How Does Color Vision Work?

When people with normal color vision look at a rainbow, they see the whole swath of colors–from red to violet–within the part of the spectrum we call ‘visible light.’ But although every shade represents a specific wavelength of light, our eyes don’t contain unique detectors to pick out each and every wavelength.

[Image: Electromagnetic spectrum. Image Credit: Wikimedia]

Instead, our retinas make do with only three types of color sensitive cells. We call them cone cells. They’re specialized neurons that fire off electrical signals in response to light, but they’re not actually very precise: a cone cell is sensitive to a wide range of colored light. But when the brain collects and aggregates the information collected by all three types of cone cell in the eye, it’s able to make fine discriminations between different shades of the same color. Here’s how that works. Cone cells contain a light-sensitive pigment that reacts to wavelengths of light from one segment of the spectrum. The photopigment is slightly different in each type of cone cell, making them sensitive to light from different parts of the spectrum: we may call them red, green, and blue cones, but it’s actually more accurate to say that each type detects either long (L), medium (M), or short (S) wavelengths of light.
[Image: Typical light response curves for cones in a human eye. Image Credit: BenRG / Wikimedia]

The graph above, which shows how strongly each kind of cone cell responds to different wavelengths of light, makes that idea easier to visualize. You can see that each type of cone cell has a strong response–a peak–for only a narrow range of wavelengths. The ‘red’ L cones respond most strongly to yellow light, the ‘green’ M cones to green light, and the ‘blue’ S cones to blue-violet light. Cones are also triggered by a wide range of wavelengths on either side of their peaks, but they respond more weakly to those colors. That means there’s a lot of overlap between cone cells: L, M, and S cones actually respond to many of the same wavelengths. The main difference between the cone types is how strongly they respond to each wavelength. These features are absolutely critical to the way our eye perceives color.

[Image: Image Credit: EnChroma]

Imagine you have a single cone cell. Make it an M cone if you like. If you shine a green light on the cell, it’s perfectly capable of sensing that light. It’ll even send an electrical signal to the brain. But it has no way to tell what color the light is. That’s because it can send out the same electrical signal when it picks up a weak light at a wavelength that makes it react strongly as when it detects a strong light at a wavelength that makes it react more weakly. To see a color, your brain has to combine information from L, M, and S cone cells, and compare the strength of the signal coming from each type of cone. Find the color of a beautiful cloudless blue sky on the graph, a wavelength around 475 nm. The S cones have the strongest reaction to that wavelength, but the red and green cones are weighing in with some signal action, too. It’s the relative strength of the signals from all three cone types that lets the brain say “it’s blue”!
Each wavelength of light corresponds to a particular combination of signal-strengths from two or more cones: a three-channel code that lets the brain discriminate between millions of different shades. What Makes Someone Colorblind? The three-channel code is sensitive, but a ton of things can mess it up. The gene for one of the three photopigments might go AWOL. A mutation could shift the sensitivity of a photopigment so it responds to a slightly different range of wavelengths. (Damage to the retina can cause problems, too.) In a colorblind person, the cone cells simply don’t work the way they’re supposed to; the term covers a huge range of possible perceptual differences. Cone cell responses in two forms of red-green color blindness. Image Credit: Jim Cooke The most common forms of inherited color blindness are red-green perceptual defects. One version is an inability to make L photoreceptors; another stems from a lack of M photoreceptors. People with these genetic defects are dichromats: they have only two working photoreceptors instead of the normal three. Their problem is actually pretty straightforward. Remember that the brain compares how strongly each type of cone responds to a given wavelength of light? Now erase either the L or M curve from that photoreceptor response graph in your mind, and you can see how the brain loses a ton of comparative information. The problem is more subtle for people who have a version of the L or M photoreceptor that detects a slightly different range of wavelengths than normal. These people are anomalous trichromats: like someone with normal vision, their brains get information from three photoreceptors, but the responses of one type of photoreceptor are shifted out of true. Depending on how far that photoreceptor’s response curve has shifted, an anomalous trichromat may perceive reds and greens slightly differently than a person with normal vision, or be as bad at discriminating between the two as a dichromat.
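A toy model makes the anomalous-trichromat case concrete: shift the M cone's peak toward the L cone's and measure how much less the L-versus-M comparison separates a green wavelength from a red one. The bell-curve approximation, peak positions, and the 20 nm shift below are illustrative assumptions, not clinical values:

```python
import math

def response(peak, width, wl):
    # Toy bell-curve stand-in for a cone sensitivity curve.
    return math.exp(-((wl - peak) / width) ** 2)

def lm_contrast(m_peak, wl):
    """L-versus-M comparison for one wavelength: +1 is pure-L, -1 is pure-M."""
    l = response(565, 60, wl)     # L cone, held fixed
    m = response(m_peak, 50, wl)  # M cone, peak position varies
    return (l - m) / (l + m)

def red_green_separation(m_peak, green_wl=530, red_wl=620):
    # How far apart green and red land on the L-versus-M axis.
    return lm_contrast(m_peak, red_wl) - lm_contrast(m_peak, green_wl)

normal = red_green_separation(m_peak=535)    # normal M cone
shifted = red_green_separation(m_peak=555)   # M peak shifted toward L
print(shifted < normal)  # prints True: less comparative information
```

The farther the shifted curve creeps toward the L curve, the smaller that separation gets, down to the dichromat's zero.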
Fall colors, seen six different ways. Top left: Normal color vision. Bottom left: Deuteranomaly (Green weak). Top middle: Protanomaly (Red weak). Bottom middle: Tritanomaly (Blue weak). Top right: Deuteranopia (Green blind). Bottom right: Tritanopia (Blue blind). But a child born with one of these color perception deficiencies has no way to tell the difference. Learning he sees the world differently from the people around him can be an enormous surprise. That was true for media consultant Carlos Barrionuevo, who first discovered he was colorblind when he was 17. “I didn’t really notice it when I was a kid,” he told Gizmodo. “And my parents didn’t pick up on it. I honestly did not know until I applied for the Navy. I went in for my physical, and they start flipping through this book and say ‘Just tell us what number you see.’ And I said, ‘What number? There’s a number?’” The book Barrionuevo mentions contained some version of the Ishihara test: circles made up of colored dots in a variety of sizes and shades that serve as a quick-and-dirty way to screen for colorblindness. The circle can contain a symbol or a number that is difficult if not impossible for someone with one form of color blindness to see. It can also be designed so the symbol is visible to the colorblind, but invisible to everyone else. The test below looks like a 74 to people with normal vision, but appears to be a 21 to people with red/green colorblindness. Ishihara color test plate. People with normal color perception can see the number 74. People with red/green colorblindness see a 21. Image Credit: Wikimedia Barrionuevo stresses that it’s really not a simple matter of not seeing red or green. “I can usually tell what’s green and what’s red, but different shades of red or green all look the same to me. I get very confused on certain colors. If I go in a paint store, a lot of those paint chips just look similar, and I can’t make distinctions between them.” What Are EnChroma Lenses Doing?
If color perception is basically an intensity game, that raises an obvious question: Could we restore normal color vision, simply by tweaking the proportions of light a colorblind person’s eyes are exposed to? Andy Schmeder, COO of EnChroma, believes that we can. A mathematician and computer scientist by training, Schmeder began exploring color vision correction a decade ago, along with his colleague Don McPherson. In 2002, McPherson, a glass scientist, discovered that a lens he’d created for laser surgery eye protection caused the world to appear more vivid and saturated. For some colorblind people, it felt like a cure. Image Credit: Frameri / EnChroma With a grant from the National Institutes of Health, McPherson and Schmeder set about to determine whether the unusual properties of this lens could be translated into an assistive device for the colorblind. “I created a mathematical model that allows us to simulate the vision of a person with some kind of colorblindness,” Schmeder told Gizmodo. “Essentially, we were asking, if your eyes are exposed to this spectral information and your eye is constructed in this particular way, what does that do to your overall sense of color?” Using their model results, Schmeder and McPherson developed a lens that filters out certain slices of the electromagnetic spectrum; regions that correspond with high spectral sensitivity across the eye’s M, L, and S cones. “Essentially, we’re removing particular wavelengths of light that correspond to the region of most overlap,” Schmeder said. “By doing so, we’re effectively creating more separation between those two channels of information.” Spectral response of red, green, and blue cones, with gray regions indicating regions of “notch filtering” by the EnChroma glasses. Image Credit: EnChroma EnChroma doesn’t claim its lenses will help dichromats, those people who lack an M or L cone. It also isn’t claiming to have developed a cure. 
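Schmeder's separation argument can be checked with a toy calculation: approximate the L and M cones as bell curves, invent a reflectance spectrum for a "green" and a "red" object, and knock out a band where the two cone curves overlap. Every number below is an illustrative assumption (the real cone sensitivities and EnChroma's actual filter bands differ), but the qualitative effect comes through:

```python
import math

WAVELENGTHS = range(400, 701, 10)

def cone(peak, width, wl):
    # Toy bell-curve stand-in for a cone sensitivity curve.
    return math.exp(-((wl - peak) / width) ** 2)

# Invented reflectance spectra for a "green" and a "red" object.
def green_object(wl):
    return math.exp(-((wl - 530) / 40) ** 2)

def red_object(wl):
    return 1.0 if wl >= 590 else 0.05

def notch(wl):
    # Block the band where the L and M curves overlap most.
    return 0.0 if 560 <= wl <= 600 else 1.0

def no_filter(wl):
    return 1.0

def lm_contrast(spectrum, filt):
    """L-versus-M channel comparison for light reflected by an object."""
    l = sum(spectrum(wl) * filt(wl) * cone(565, 60, wl) for wl in WAVELENGTHS)
    m = sum(spectrum(wl) * filt(wl) * cone(535, 50, wl) for wl in WAVELENGTHS)
    return (l - m) / (l + m)

def separation(filt):
    # Distance between the red and green objects on the L-versus-M axis.
    return lm_contrast(red_object, filt) - lm_contrast(green_object, filt)

print(separation(notch) > separation(no_filter))  # prints True
```

With these made-up spectra, the red and green objects land farther apart on the L-versus-M axis through the notch filter than without it, which is the extra channel separation Schmeder describes.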
Rather, the company likes to call its product an “assistive device,” one that can help anomalous trichromats—those people with M or L cones that have shifted their wavelength sensitivities—discriminate colors in the red-green dimension. Many users report dramatic changes to their color vision while wearing EnChroma glasses. “Any color with red or green appears more intense,” one anonymous user reported in a product validation study. “In fact, almost everything I see looks more intense. The world is simply more interesting looking.” Another user writes: “I never imagined I would be so incredibly affected by the ability to see distinct vivid colors, once confusing and hard to differentiate.” If you’re curious about the experience, you can check out any one of EnChroma’s many promotional videos, in which a colorblind person dons the glasses and is immediately overwhelmed by the vibrancy of the world. But some wearers are underwhelmed. “It’s not like they were worse than regular sunglasses — there was a way in which certain things popped out — but not in the way that it felt like it was advertised,” journalist Oliver Morrison told Gizmodo. Morrison’s account of his experience with the glasses, which appeared in The Atlantic earlier this year, highlights the challenge of objectively evaluating whether a device of this nature works. Here’s an excerpt: I met Tony Dykes, the CEO of EnChroma, in Times Square on a gray, rainy day, our eyes hidden behind his glasses’ 100 reflective coatings... I described to Dykes what I saw through the glasses: deeper oranges, crisper brake lights on cars, and fluorescent yellows that popped. I asked him if that is what a normal person sees. Dykes, a former lawyer and an able salesman, answered quickly. “It’s not something where it’s immediate,” he said. “You’re just getting the information for the first time.” Maybe the glasses were working. 
Maybe exchanging the colors I was accustomed to for real colors just wasn’t as great an experience as I’d been expecting. Dykes asked if I could tell the difference between the gray shoelaces and the pink “N” on the side of my sneakers. “The ‘N’ is shiny,” I said. “So I don’t know if I can tell they’re different by the colors or because of the iridescence.” Although I’d never confused my shoelace with my shoe before, I realized then that, until he had told me, I didn’t know the “N” was pink. Jay Neitz, a color vision expert at the University of Washington, believes EnChroma is capitalizing on this lack of objectivity. “Since red-green colorblind people have never experienced the red and green colors a normal person sees, they are easily fooled,” Neitz told Gizmodo in an email. “If the glasses could add light, maybe it’d be different. But all they can do is block out light. It’s hard to give people color vision by taking things away.” Neitz, for his part, believes the only way to cure colorblindness is through gene therapy — by inserting and expressing the gene for normal M or L cones in the retinas of colorblind patients. He and his wife have spent the last decade using genetic manipulation to restore normal vision to colorblind monkeys, and they hope to move on to human trials soon. A monkey named Dalton, post gene therapy, performing a colorblindness test. Dalton used to be red-green colorblind. But if the glasses aren’t enabling people to see more colors, what could account for the positive testimonials? Neitz suspects the lenses are altering the brightness balance of reds and greens. “If somebody was totally colorblind, all the wavelengths of light in a rainbow would look exactly the same,” Neitz said. “If they went out in the real world and saw a green and red tomato, they’d be completely indistinguishable because they’re the same brightness to our eyes.
Then, if that person put on glasses with a filter that blocked out green light, all of a sudden, the green tomato looks darker. Two things that always looked identical now look totally different.” “I wouldn’t claim that the EnChroma lens has no effect on brightness,” Schmeder said in response to Gizmodo’s queries. “Pretty much anything that’s strongly colored will suddenly seem brighter. It’s a side effect of the way the lens works.” But according to Schmeder, the lens’s neutral gray color maintains the balance of brightness between reds and greens. That is, all red things aren’t going to suddenly become brighter than all green things, he says. In the end, the best way to sort out whether the glasses are working as advertised is through objective testing. EnChroma has relied primarily on qualitative user responses to evaluate the efficacy of its product. The company has also performed some clinical trials using the D15 colorblindness test, wherein subjects are asked to arrange 15 colored circles chromatically (in the order of the rainbow). In the 100 hue test, subjects arrange the colors within each row to represent a continuous spectrum of shade from one end to the other. Colors at the end of each row serve as anchors. Image Credit: Jordanwesthoff / Wikimedia In test results shared with Gizmodo, nine subjects all received higher D15 scores — that is, they placed fewer chips out of sequence — while wearing EnChroma glasses. “What is apparent from the study is that not everyone exhibits the same degree of improvement, nor does the extent of improvement correlate to the degree of [colorblindness] severity,” EnChroma writes. “However, everyone does improve, some to that of mild/normal from severe.” But there’s still the concern that wearing a colored filter while taking the D15 test will alter the relative brightness of the chips, providing a context cue that can help subjects score higher.
For a more objective test, Neitz recommends the anomaloscope, in which an observer is asked to match one half of a circular field, illuminated with yellow light, to the other half of the field, which is a mixture of red and green. The brightness of the yellow portion can be varied, while the other half can vary continuously from fully red to fully green. Screenshot from an online color matching test that mimics the anomaloscope. Via colorblindness.com. “This is considered to be the gold standard for testing red-green color vision,” Neitz said. “The anomaloscope is designed in such a way that adjustments can be made so that colorblind people can’t use brightness as a cue, so the brightness differences produced by the glasses would not help colorblind people cheat.” Is It All About Perception? Whether EnChroma’s glasses are expanding the red-green color dimension, or simply creating a more saturated, contrast-filled world, there’s no doubt that the technology has had positive effects for some colorblind people. “The biggest point for me wearing these glasses is that I’m more inspired,” Cincinnati-based guitarist and EnChroma user Lance Martin told Gizmodo. Image Credit: Shutterstock Martin, who has been “wearing these things nonstop” for the last several months, says that ordinary experiences, like looking at highway signs or foliage while driving, now fill him with insight and awe. “I always interpreted interstate road signs as a really dark evergreen, but they’re actually a color green I’d never been able to see before,” he said. “I’ve been walking more, just to see the flowers. Inspiration fuels my career, and for me to be inspired by the mundane, everyday — that is mind-blowing.” The world of color is inherently subjective. Even amongst those who see “normally,” there’s no telling whether our brains interpret colored light the same way. We assume that colors are a shared experience, because we can distinguish different ones and agree on their names.
If a pair of glasses can help the colorblind do the same — whether or not the technology causes them to see “normally” — that’s one less reason to treat this condition as a disadvantage. “People are looking for access to jobs where they’re being excluded because of colorblindness,” Schmeder said. “My belief is that if we really analyze this problem closely, we can come up with a reasonable accommodation that works for some situations. Even if we can’t help everyone, if we can elevate the level of discussion around this and help some people, that’d be amazing.” Top image: Frameri / EnChroma
Most life forms exhibit daily rhythms in cellular, physiological and behavioral phenomena that are driven by endogenous circadian (≈24 hr) pacemakers or clocks. Malfunctions in the human circadian system are associated with numerous diseases or disorders. Much progress towards our understanding of the mechanisms underlying circadian rhythms has emerged from genetic screens whereby an easily measured behavioral rhythm is used as a read-out of clock function. Studies using Drosophila have made seminal contributions to our understanding of the cellular and biochemical bases underlying circadian rhythms. The standard circadian behavioral read-out measured in Drosophila is locomotor activity. In general, the monitoring system involves specially designed devices that can measure the locomotor movement of Drosophila. These devices are housed in environmentally controlled incubators located in a darkroom and are based on using the interruption of a beam of infrared light to record the locomotor activity of individual flies contained inside small tubes. When measured over many days, Drosophila exhibit daily cycles of activity and inactivity, a behavioral rhythm that is governed by the animal's endogenous circadian system. The overall procedure has been simplified with the advent of commercially available locomotor activity monitoring devices and the development of software programs for data analysis. We use the system from Trikinetics Inc., which is the procedure described here and is currently the most popular system used worldwide. More recently, the same monitoring devices have been used to study sleep behavior in Drosophila. Because the daily wake-sleep cycles of many flies can be measured simultaneously and only 1 to 2 weeks' worth of continuous locomotor activity data is usually sufficient, this system is ideal for large-scale screens to identify Drosophila manifesting altered circadian or sleep properties.
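The beam-crossing records from these monitors translate directly into sleep measurements: sleep in Drosophila is conventionally scored as any period of at least five minutes without a beam crossing. A minimal sketch of that scoring step (the five-minute threshold is the standard convention; the flat per-minute list is a simplifying assumption, not the monitor's actual file format):

```python
def sleep_bouts(counts_per_min, threshold_min=5):
    """Score sleep from per-minute infrared beam-crossing counts.

    Sleep in Drosophila is conventionally defined as any run of at
    least `threshold_min` consecutive minutes with zero beam crossings.
    Returns a list of (start_minute, duration_min) bouts.
    """
    bouts = []
    run_start = None
    # Append a non-zero sentinel so the final run is flushed.
    for i, crossings in enumerate(list(counts_per_min) + [1]):
        if crossings == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= threshold_min:
                bouts.append((run_start, i - run_start))
            run_start = None
    return bouts

# A fly active at minutes 0, 6, and 9 sleeps once, from minute 1 for 5 min:
print(sleep_bouts([2, 0, 0, 0, 0, 0, 3, 0, 0, 1]))  # prints [(1, 5)]
```

Summing the bout durations per fly and per light phase then gives the daily sleep totals that screens compare across genotypes.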
Measuring Circadian and Acute Light Responses in Mice using Wheel Running Activity Institutions: Johns Hopkins University. Circadian rhythms are physiological functions that cycle over a period of approximately 24 hours (circadian: circa, approximately, and diem, day)1,2. They are responsible for timing our sleep/wake cycles and hormone secretion. Since this timing is not precisely 24 hours, it is synchronized to the solar day by light input. This is accomplished via photic input from the retina to the suprachiasmatic nucleus (SCN), which serves as the master pacemaker synchronizing peripheral clocks in other regions of the brain and peripheral tissues to the environmental light-dark cycle3-7. The alignment of rhythms to this environmental light-dark cycle organizes particular physiological events to the correct temporal niche, which is crucial for survival8. For example, mice sleep during the day and are active at night. This ability to consolidate activity to either the light or dark portion of the day is referred to as circadian photoentrainment and requires light input to the circadian clock9. Activity of mice at night is robust, particularly in the presence of a running wheel. Measuring this behavior is a minimally invasive method that can be used to evaluate the functionality of the circadian system as well as light input to this system. Methods that will be covered here are used to examine the circadian clock, light input to this system, as well as the direct influence of light on wheel running behavior. Neuroscience, Issue 48, mouse, circadian, behavior, wheel running Quantitative Measurement of the Immune Response and Sleep in Drosophila Institutions: University of Pennsylvania Perelman School of Medicine. A complex interaction between the immune response and host behavior has been described in a wide range of species.
Excess sleep, in particular, is known to occur as a response to infection in mammals1 and has also recently been described in Drosophila melanogaster2. It is generally accepted that sleep is beneficial to the host during an infection and that it is important for the maintenance of a robust immune system3,4. However, experimental evidence that supports this hypothesis is limited4, and the function of excess sleep during an immune response remains unclear. We have used a multidisciplinary approach to address this complex problem, and have conducted studies in the simple genetic model system, the fruitfly Drosophila melanogaster. We use a standard assay for measuring locomotor behavior and sleep in flies, and demonstrate how this assay is used to measure behavior in flies infected with a pathogenic strain of bacteria. This assay is also useful for monitoring the duration of survival in individual flies during an infection. Additional measures of immune function include the ability of flies to clear an infection and the activation of NFκB, a key transcription factor that is central to the innate immune response in Drosophila. Both survival outcome and bacterial clearance during infection together are indicators of resistance and tolerance to infection. Resistance refers to the ability of flies to clear an infection, while tolerance is defined as the ability of the host to limit damage from an infection and thereby survive despite high levels of pathogen within the system5. Real-time monitoring of NFκB activity during infection provides insight into a molecular mechanism of survival during infection. The use of Drosophila in these straightforward assays facilitates the genetic and molecular analyses of sleep and the immune response and how these two complex systems are reciprocally influenced.
Immunology, Issue 70, Neuroscience, Medicine, Physiology, Pathology, Microbiology, immune response, sleep, Drosophila, infection, bacteria, luciferase reporter assay, animal model Recording and Analysis of Circadian Rhythms in Running-wheel Activity in Rodents Institutions: McGill University, Concordia University. When rodents have free access to a running wheel in their home cage, voluntary use of this wheel will depend on the time of day1-5. Nocturnal rodents, including rats, hamsters, and mice, are active during the night and relatively inactive during the day. Many other behavioral and physiological measures also exhibit daily rhythms, but in rodents, running-wheel activity serves as a particularly reliable and convenient measure of the output of the master circadian clock, the suprachiasmatic nucleus (SCN) of the hypothalamus. In general, through a process called entrainment, the daily pattern of running-wheel activity will naturally align with the environmental light-dark cycle (LD cycle; e.g. 12 hr light:12 hr dark). However, circadian rhythms are endogenously generated patterns in behavior that exhibit a ~24 hr period and persist in constant darkness. Thus, in the absence of an LD cycle, the recording and analysis of running-wheel activity can be used to determine the subjective time-of-day. Because these rhythms are directed by the circadian clock, the subjective time-of-day is referred to as the circadian time (CT). In contrast, when an LD cycle is present, the time-of-day that is determined by the environmental LD cycle is called the zeitgeber time (ZT). Although circadian rhythms in running-wheel activity are typically linked to the SCN clock6-8, circadian oscillators in many other regions of the brain and body9-14 could also be involved in the regulation of daily activity rhythms.
For instance, daily rhythms in food-anticipatory activity do not require the SCN15,16 and instead are correlated with changes in the activity of extra-SCN oscillators17-20. Thus, running-wheel activity recordings can provide important behavioral information not only about the output of the master SCN clock, but also on the activity of extra-SCN oscillators. Below we describe the equipment and methods used to record, analyze and display circadian locomotor activity rhythms in laboratory rodents. Neuroscience, Issue 71, Medicine, Neurobiology, Physiology, Anatomy, Psychology, Psychiatry, Behavior, Suprachiasmatic nucleus, locomotor activity, mouse, rat, hamster, light-dark cycle, free-running activity, entrainment, circadian period, circadian rhythm, phase shift, animal model The FlyBar: Administering Alcohol to Flies Institutions: Florida State University, University of Houston. Fruit flies (Drosophila melanogaster) are an established model for both alcohol research and circadian biology. Recently, we showed that the circadian clock modulates alcohol sensitivity, but not the formation of tolerance. Here, we describe our protocol in detail. Alcohol is administered to the flies using the FlyBar. In this setup, saturated alcohol vapor is mixed with humidified air in set proportions, and administered to the flies in four tubes simultaneously. Flies are reared under standardized conditions in order to minimize variation between the replicates. Three-day-old flies of different genotypes or treatments are used for the experiments, preferably by matching flies of two different time points (e.g., CT 5 and CT 17), making direct comparisons possible. During the experiment, flies are exposed for 1 hr to the pre-determined percentage of alcohol vapor and the number of flies that exhibit the Loss of Righting reflex (LoRR) or sedation are counted every 5 min. The data can be analyzed using three different statistical approaches.
The first is to determine the time at which 50% of the flies have lost their righting reflex and use an Analysis of Variance (ANOVA) to determine whether significant differences exist between time points. The second is to determine the percentage of flies that show LoRR after a specified number of minutes, followed by an ANOVA analysis. The last method is to analyze the whole time series using multivariate statistics. The protocol can also be used for non-circadian experiments or comparisons between genotypes. Neuroscience, Issue 87, neuroscience, alcohol sensitivity, Drosophila, Circadian, sedation, biological rhythms, undergraduate research Eye Tracking, Cortisol, and a Sleep vs. Wake Consolidation Delay: Combining Methods to Uncover an Interactive Effect of Sleep and Cortisol on Memory Institutions: Boston College, Wofford College, University of Notre Dame. Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, there is currently a paucity of literature as to how these two factors may interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants’ eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects.
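The first FlyBar analysis, finding the time at which 50% of flies have lost the righting reflex, can be estimated by linear interpolation between successive 5-minute counts. A minimal sketch (the times and fractions below are invented example data, not results from the assay):

```python
def time_to_50_percent(times_min, fractions):
    """First time at which the fraction of flies showing LoRR reaches
    0.5, by linear interpolation between successive counts.

    Assumes the series starts below 50%. Returns None if 50% is never
    reached during the observation window.
    """
    points = list(zip(times_min, fractions))
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if f0 < 0.5 <= f1:
            return t0 + (t1 - t0) * (0.5 - f0) / (f1 - f0)
    return None

# Invented example: counts every 5 min; 50% is crossed between 15 and 20 min.
print(time_to_50_percent([0, 5, 10, 15, 20, 25],
                         [0.0, 0.1, 0.3, 0.45, 0.6, 0.8]))
```

The per-group 50% times produced this way are the values that would then be compared across time points with an ANOVA.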
Together, results showed that there is a direct relation between resting cortisol at encoding and subsequent memory, only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation. Behavior, Issue 88, attention, consolidation, cortisol, emotion, encoding, glucocorticoids, memory, sleep, stress Inducing Plasticity of Astrocytic Receptors by Manipulation of Neuronal Firing Rates Institutions: University of California Riverside, University of California Riverside, University of California Riverside. Close to two decades of research has established that astrocytes in situ and in vivo express numerous G protein-coupled receptors (GPCRs) that can be stimulated by neuronally-released transmitter. However, the ability of astrocytic receptors to exhibit plasticity in response to changes in neuronal activity has received little attention. Here we describe a model system that can be used to globally scale up or down astrocytic group I metabotropic glutamate receptors (mGluRs) in acute brain slices. Included are methods on how to prepare parasagittal hippocampal slices, construct chambers suitable for long-term slice incubation, bidirectionally manipulate neuronal action potential frequency, load astrocytes and astrocyte processes with fluorescent Ca2+ indicator, and measure changes in astrocytic Gq GPCR activity by recording spontaneous and evoked astrocyte Ca2+ events using confocal microscopy. In essence, a “calcium roadmap” is provided for how to measure plasticity of astrocytic Gq GPCRs. Applications of the technique for study of astrocytes are discussed. 
Having an understanding of how astrocytic receptor signaling is affected by changes in neuronal activity has important implications for both normal synaptic function as well as processes underlying neurological disorders and neurodegenerative disease. Neuroscience, Issue 85, astrocyte, plasticity, mGluRs, neuronal Firing, electrophysiology, Gq GPCRs, Bolus-loading, calcium, microdomains, acute slices, Hippocampus, mouse Design and Analysis of Temperature Preference Behavior and its Circadian Rhythm in Drosophila Institutions: Cincinnati Children's Hospital Medical Center, JST. The circadian clock regulates many aspects of life, including sleep, locomotor activity, and body temperature rhythms (BTR)1,2. We recently identified a novel Drosophila circadian output, called the temperature preference rhythm (TPR), in which the preferred temperature in flies rises during the day and falls during the night3. Surprisingly, the TPR and locomotor activity are controlled through distinct circadian neurons3. Locomotor activity is a well-known circadian behavioral output and has provided strong contributions to the discovery of many conserved mammalian circadian clock genes and mechanisms4. Therefore, understanding TPR will lead to the identification of hitherto unknown molecular and cellular circadian mechanisms. Here, we describe how to perform and analyze the TPR assay. This technique not only allows for dissecting the molecular and neural mechanisms of TPR, but also provides new insights into the fundamental mechanisms of the brain functions that integrate different environmental signals and regulate animal behaviors. Furthermore, our recently published data suggest that the fly TPR shares features with the mammalian BTR3. Flies are ectotherms, in which body temperature is typically behaviorally regulated. Therefore, TPR is a strategy used to generate a rhythmic body temperature in these flies5-8.
We believe that further exploration of Drosophila TPR will facilitate the characterization of the mechanisms underlying body temperature control in animals. Basic Protocol, Issue 83, Drosophila, circadian clock, temperature, temperature preference rhythm, locomotor activity, body temperature rhythms A Method to Study the Impact of Chemically-induced Ovarian Failure on Exercise Capacity and Cardiac Adaptation in Mice Institutions: University of Arizona. The risk of cardiovascular disease (CVD) increases in post-menopausal women, yet the role of exercise, as a preventative measure for CVD risk in post-menopausal women, has not been adequately studied. Accordingly, we investigated the impact of voluntary cage-wheel exercise and forced treadmill exercise on cardiac adaptation in menopausal mice. The most commonly used inducible model for mimicking menopause in women is the ovariectomized (OVX) rodent. However, the OVX model has a few dissimilarities from menopause in humans. In this study, we administered 4-vinylcyclohexene diepoxide (VCD) to female mice, which accelerates ovarian failure, as an alternative menopause model to study the impact of exercise in menopausal mice. VCD selectively accelerates the loss of primary and primordial follicles resulting in an endocrine state that closely mimics the natural progression from pre- to peri- to post-menopause in humans. To determine the impact of exercise on exercise capacity and cardiac adaptation in VCD-treated female mice, two methods were used. First, we exposed a group of VCD-treated and untreated mice to a voluntary cage wheel. Second, we used forced treadmill exercise to determine exercise capacity in a separate group of VCD-treated and untreated mice, measured as a tolerance to exercise intensity and endurance.
Medicine, Issue 86, VCD, menopause, voluntary wheel running, forced treadmill exercise, exercise capacity, adaptive cardiac adaptation Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction Institutions: Rutgers University, Koç University, New York University, Fairfield University. We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. 
The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.

Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing

Direct Imaging of ER Calcium with Targeted-Esterase Induced Dye Loading (TED)

Institutions: University of Wuerzburg, Max Planck Institute of Neurobiology, Martinsried, Ludwig-Maximilians University of Munich.

Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca2+ indicators have become popular. Here we demonstrate TED (= targeted-esterase induced dye loading), a method to improve the release of Ca2+ indicator dyes in the ER lumen of different cell types. To date, TED has been used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of a high carboxylesterase activity to the ER lumen using vector constructs that express carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca2+ indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM.
The esterase activity in the ER cleaves off hydrophobic side chains from the AM form of the Ca2+ indicator, and a hydrophilic fluorescent dye/Ca2+ complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed on an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescence intensity over time is determined by calculation of ΔF/F0.

Cellular Biology, Issue 75, Neurobiology, Neuroscience, Molecular Biology, Biochemistry, Biomedical Engineering, Bioengineering, Virology, Medicine, Anatomy, Physiology, Surgery, Endoplasmic Reticulum, ER, Calcium Signaling, calcium store, calcium imaging, calcium indicator, metabotropic signaling, Ca2+, neurons, cells, mouse, animal model, cell culture, targeted esterase induced dye loading, imaging

Drosophila Adult Olfactory Shock Learning

Institutions: University of Bristol.

Drosophila have been used in classical conditioning experiments for over 40 years, thus greatly facilitating our understanding of memory, including the elucidation of the molecular mechanisms involved in cognitive diseases1-7. Learning and memory can be assayed in larvae to study the effect of neurodevelopmental genes8-10 and in flies to measure the contribution of adult plasticity genes1-7. Furthermore, the short lifespan of Drosophila facilitates the analysis of genes mediating age-related memory impairment5,11-13.
The availability of many inducible promoters that subdivide the Drosophila nervous system makes it possible to determine when and where a gene of interest is required for normal memory as well as relay of different aspects of the reinforcement signal3,4,14,16. Studying memory in adult Drosophila allows for a detailed analysis of the behavior and circuitry involved and a measurement of long-term memory15-17. The length of the adult stage accommodates longer-term genetic, behavioral, dietary and pharmacological manipulations of memory, in addition to determining the effect of aging and neurodegenerative disease on memory3-6,11-13,15-21. Classical conditioning is induced by the simultaneous presentation of a neutral odor cue (conditioned stimulus, CS+) and a reinforcement stimulus, e.g., an electric shock or sucrose (unconditioned stimulus, US), that become associated with one another by the animal1,16. A second conditioned stimulus (CS-) is subsequently presented without the US. During the testing phase, Drosophila are simultaneously presented with CS+ and CS- odors. After the Drosophila are provided time to choose between the odors, the distribution of the animals is recorded. This procedure allows associative aversive or appetitive conditioning to be reliably measured without a bias introduced by the innate preference for either of the conditioned stimuli. Various control experiments are also performed to test whether all genotypes respond normally to odor and reinforcement alone.

Neuroscience, Issue 90, Drosophila, Pavlovian learning, classical conditioning, learning, memory, olfactory, electric shock, associative memory

P50 Sensory Gating in Infants

Institutions: University of Colorado School of Medicine, Colorado State University.

Attentional deficits are common in a variety of neuropsychiatric disorders including attention deficit-hyperactivity disorder, autism, bipolar mood disorder, and schizophrenia.
There has been increasing interest in the neurodevelopmental components of these attentional deficits; "neurodevelopmental" meaning that, while the deficits become clinically prominent in childhood or adulthood, they are the result of problems in brain development that begin in infancy or even prenatally. Despite this interest, there are few methods for assessing attention very early in infancy. This report focuses on one method, infant auditory P50 sensory gating. Attention has several components. One of the earliest components of attention, termed sensory gating, allows the brain to tune out repetitive, noninformative sensory information. Auditory P50 sensory gating refers to one task designed to measure sensory gating using changes in EEG. When identical auditory stimuli are presented 500 ms apart, the evoked response (change in the EEG associated with the processing of the click) to the second stimulus is generally reduced relative to the response to the first stimulus (i.e. the response is "gated"). When the response to the second stimulus is not reduced, this is considered poor sensory gating; it reflects impaired cerebral inhibition and is correlated with attentional deficits. Because the auditory P50 sensory gating task is passive, it is of potential utility in the study of young infants and may provide a window into the developmental time course of attentional deficits in a variety of neuropsychiatric disorders. The goal of this presentation is to describe the methodology for assessing infant auditory P50 sensory gating, a methodology adapted from those used in studies of adult populations.
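As a rough illustration of the gating measure described above, the ratio of the second-click response to the first-click response can be computed from an averaged evoked-response trace. This is a sketch, not the authors' analysis pipeline: the 500 ms inter-click interval comes from the text, while the post-stimulus peak-search window used here is an assumption for the example.

```python
# Hypothetical sketch: compute an S2/S1 sensory gating ratio from an
# averaged EEG trace containing paired clicks 500 ms apart. A lower
# ratio indicates stronger gating. The 40-80 ms peak window is an
# assumed placeholder, not the paper's exact scoring window.

def peak_amplitude(eeg, stim_idx, fs, win=(0.04, 0.08)):
    """Largest absolute deflection in a post-stimulus window.
    eeg: list of samples; stim_idx: stimulus onset (samples);
    fs: sampling rate in Hz; win: window in seconds."""
    lo = stim_idx + int(win[0] * fs)
    hi = stim_idx + int(win[1] * fs)
    return max(abs(v) for v in eeg[lo:hi])

def gating_ratio(eeg, s1_idx, fs, isi=0.5):
    """S2/S1 amplitude ratio for clicks separated by isi seconds."""
    s2_idx = s1_idx + int(isi * fs)
    return peak_amplitude(eeg, s2_idx, fs) / peak_amplitude(eeg, s1_idx, fs)
```

In practice the trace would be an average over many click pairs after artifact rejection; the sketch only shows the final ratio arithmetic.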
Behavior, Issue 82, Child Development, Psychophysiology, Attention Deficit and Disruptive Behavior Disorders, Evoked Potentials, Auditory, auditory evoked potential, sensory gating, infant, attention, electrophysiology, infants, endophenotype, P50

Monitoring Cell-autonomous Circadian Clock Rhythms of Gene Expression Using Luciferase Bioluminescence Reporters

Institutions: The University of Memphis.

In mammals, many aspects of behavior and physiology such as sleep-wake cycles and liver metabolism are regulated by endogenous circadian clocks (reviewed1,2). The circadian time-keeping system is a hierarchical multi-oscillator network, with the central clock located in the suprachiasmatic nucleus (SCN) synchronizing and coordinating extra-SCN and peripheral clocks elsewhere1,2. Individual cells are the functional units for generation and maintenance of circadian rhythms3,4, and these oscillators of different tissue types in the organism share a remarkably similar biochemical negative feedback mechanism. However, due to interactions at the neuronal network level in the SCN and through rhythmic, systemic cues at the organismal level, circadian rhythms at the organismal level are not necessarily cell-autonomous5-7. Compared to traditional studies of locomotor activity in vivo and SCN explants ex vivo, cell-based in vitro assays allow for discovery of cell-autonomous circadian defects5,8. Strategically, cell-based models are more experimentally tractable for phenotypic characterization and rapid discovery of basic clock mechanisms5,8-13. Because circadian rhythms are dynamic, longitudinal measurements with high temporal resolution are needed to assess clock function. In recent years, real-time bioluminescence recording using firefly luciferase as a reporter has become a common technique for studying circadian rhythms in mammals14,15, as it allows for examination of the persistence and dynamics of molecular rhythms.
To monitor cell-autonomous circadian rhythms of gene expression, luciferase reporters can be introduced into cells via transient transfection13,16,17 or stable transduction5,10,18,19. Here we describe a stable transduction protocol using lentivirus-mediated gene delivery. The lentiviral vector system is superior to traditional methods such as transient transfection and germline transmission because of its efficiency and versatility: it permits efficient delivery and stable integration into the host genome of both dividing and non-dividing cells20. Once a reporter cell line is established, the dynamics of clock function can be examined through bioluminescence recording. We first describe the generation of P(Per2) reporter lines, and then present data from this and other circadian reporters. In these assays, 3T3 mouse fibroblasts and U2OS human osteosarcoma cells are used as cellular models. We also discuss various ways of using these clock models in circadian studies. Methods described here can be applied to a great variety of cell types to study the cellular and molecular basis of circadian clocks, and may prove useful in tackling problems in other biological systems.

Genetics, Issue 67, Molecular Biology, Cellular Biology, Chemical Biology, Circadian clock, firefly luciferase, real-time bioluminescence technology, cell-autonomous model, lentiviral vector, RNA interference (RNAi), high-throughput screening (HTS)

Assessment of Murine Exercise Endurance Without the Use of a Shock Grid: An Alternative to Forced Exercise

Institutions: VA Puget Sound Health Care System, Seattle Institute for Biomedical and Clinical Research, University of Washington, VA Puget Sound Health Care System.

Using laboratory mouse models, the molecular pathways responsible for the metabolic benefits of endurance exercise are beginning to be defined. The most common method for assessing exercise endurance in mice utilizes forced running on a motorized treadmill equipped with a shock grid.
Animals that quit running are pushed by the moving treadmill belt onto a grid that delivers an electric foot shock; to escape the negative stimulus, the mice return to running on the belt. However, avoidance behavior and psychological stress due to use of a shock apparatus can interfere with quantitation of running endurance, as well as confound measurements of postexercise serum hormone and cytokine levels. Here, we demonstrate and validate a refined method to measure running endurance in naïve C57BL/6 laboratory mice on a motorized treadmill without utilizing a shock grid. When mice are preacclimated to the treadmill, they run voluntarily with gait speeds specific to each mouse. Use of the shock grid is replaced by gentle encouragement by a human operator using a tongue depressor, coupled with sensitivity to the voluntary willingness to run on the part of the mouse. Clear endpoints for quantifying running time-to-exhaustion for each mouse are defined and reflected in behavioral signs of exhaustion such as splayed posture and labored breathing. This method is a humane refinement that also decreases the confounding effects of stress on experimental parameters.

Behavior, Issue 90, Exercise, Mouse, Treadmill, Endurance, Refinement
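The ΔF/F0 normalization named in the TED calcium-imaging protocol above can be sketched as follows. Defining F0 as the mean of the first few baseline frames is an assumption made for this example; imaging protocols differ in how the baseline is chosen.

```python
# Sketch of the ΔF/F0 calculation used to quantify fluorescence change
# in calcium imaging: each frame's intensity F is expressed relative to
# a baseline F0. Taking F0 as the mean of the first n_baseline frames
# is an assumed convention, not the protocol's stated definition.

def delta_f_over_f0(trace, n_baseline=10):
    """Return ΔF/F0 = (F - F0) / F0 for each frame in `trace`,
    with F0 the mean of the first n_baseline frames."""
    f0 = sum(trace[:n_baseline]) / n_baseline
    return [(f - f0) / f0 for f in trace]
```

With this convention, ER calcium release (a drop in fluorescence in a region of interest) appears as negative ΔF/F0 values, and store refilling as positive values.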
Catalan is not just the language of Catalonia, but a language shared with other areas of Spain, France, and even Italy. Catalan is also the official language of Andorra, the small country set in the middle of the Pyrenees. Most Catalan speakers are bilingual, with Catalan being used as a first language by fewer than half of them. There is certainly a very solid literary tradition in Catalan, which includes a long list of sf works – among them an indispensable masterpiece, Manuel de Pedrolo's Mecanoscrit del segon origen ["Typescript of the Second Origin"] (1974). Science fiction entered the domain of the Catalan language in the last quarter of the 19th century. Some nineteenth-century highlights are the short stories "La darrera paraula de la ciència" ["Science's Last Word"] (1875), a Parody of Mary Shelley's Frankenstein by Joan Sardà Lloret; "El moviment continu" ["Perpetual Motion"] (1878) by Antoni Careta Vidal, a satire of the search for the perpetual motion machine; and "El radiòmetre" ["The Radiometre"] (1880) by Joaquim Bartrina and Narcís Oller, about the dangers of transgressing certain scientific principles. The first translation of Edgar Allan Poe and Bret Harte, Noveletas escullidas de Edgart Poe y Bret Harte ["Selected Novelettes by Edgart [sic] Poe and Bret Harte"] (coll 1879 chap), included "L'home girafa" ["Four Beasts in One: The Homo-Cameleopard"], "Lo gat negre" ["The Black Cat"] and the short essay "Génessis d'un poema. Lo corb. Método de la composició" ["The Philosophy of Composition"]. Also very popular were some stage plays influenced by Jules Verne, such as De la Terra al Sol ["From the Earth to the Sun"] (1879) by Narcís Campmany and Joan Molas; Quinze dies a la Lluna ["Fifteen Days on the Moon"] (1890) by C Gumà; and L'any 13.000 ["The Year 13,000"] (1893) by Miquel Figuerola Aldofreu. Sf grew in Catalan under the influence of translation, which mixed novelties with the Victorian classics.
The list of sf works translated into Catalan in the first third of the twentieth century (we give here the translation dates) include Camille Flammarion's Urània ["Urania"] (1903), H G Wells's L'home que no es veu ["The Invisible Man"] (1908), a selection of short fiction by Nathaniel Hawthorne (coll 1908), Karel Čapek's play RUR ["R.U.R."] (1928), H Rider Haggard's Ella ["She"] (1931) and Robert Louis Stevenson's novella El cas misteriós del Dr. Jekyll i Mr. Hyde ["Strange Case of Dr. Jekyll and Mr. Hyde"] (1934); plus best-selling novels by Jules Verne and other French pioneers. Early twentieth-century Catalan sf is also influenced by the work of Catalonia's pioneer filmmaker, Segundo de Chomón (1871-1929). Together with the playwright Adrià Gual (1872-1943), de Chomón started the local Catalan fantastic film tradition with titles such as Magatzem d'invents ["Store of Inventions"] (1905) and Física diabòlica ["Diabolical Physics"] (1911), which together comprise only part of his work for the Barcelona branch of Pathé Films, which he himself founded (de Chomón was also the local delegate of Georges Méliès's Star Films). This fantastic film tradition is still alive today with Catalan directors such as Jaume Balagueró (1968- ) and José Antonio Bayona (1975- ).
Still in the first third of the twentieth century, short stories in Catalan dealing with sf multiplied, expanding into varied territories: "L'ull acusador" ["The Accusing Eye"] (1905) by Antoni Careta (1834-1924) deals with a supposedly functional technique to print the images of a murderer captured by his victim's eyes; "Una resurrecció a París" ["A Resurrection in Paris"] (1908) by Diego Ruiz narrates an experiment to keep the heart of a dead person beating; "Com va caure la Marta Clarissa" ["How Marta Clarissa Fell"] (1919) by Joan Santamaria – an author of Gothic and fantasy stories influenced by Poe – is a story about an Antigravity device; "El llamp blau" ["Blue Lightning"] (1935) by Joaquim M de Nadal (1883-1972) focuses on a machine to control lightning; "Els habitants del pis 200" ["The Residents of Apartment 200"] (1936) by Elvira Augusta Lewi (?1910-?1970) is a tale of sociological anticipation; "Tres arguments" ["Three Arguments"] (1938) by Francesc Trabal (1889-1957) conveys a surrealist, theatrical atmosphere. Other anticipation stories using the literary device of the prophetic dream are "Un somni" ["A Dream"] (1906) by Manuel de Montoliu and "La fi del món a Girona" ["The End of the World in Girona"] (1919) by Joaquim Ruyra (1858-1939). In this early period sf can be found not only in Catalan short fiction but also in drama, a bit tongue-in-cheek. L'escudellòmetre ["The Stewmetre"] (1905) by Santiago Rusiñol (1861-1931) presents a machine which might solve the problem of hunger for good; Un somni futurista espatarrant ["An Astonishing Futurist Dream"] (1910) by Pompeu Gener i Babot (circa 1846-1920) deals with a cosmic revolution; Temps ençà... temps enllà ["Time Here... 
Time There"] (1926) by Ambrosi Carrión i Juan (1888-1973) and Enric Lluelles (1885-1943) focuses on Time Travel; Molock i l'inventor ["Molock and the Inventor"] (1930), also by Carrion, imagines the invention of the 'definitive' explosive; Les gàrgoles de la seu ["The Gargoyles of the Cathedral"] (1935) by Lluís Masriera connects with Huxley's Brave New World (1932). A growing range of Fantastika began to become evident around this time in the Catalan literary novel, with, among others, El gegant dels aires ["The Giant of the Airs"] (1911) and L'extraordinària expedició d'en Jep Ganàpia ["Jep Bigboy's Extraordinary Expedition"] (1922) by Josep M Folch i Torres (1880-1950), adventures of Vernian inspiration. Homes artificials ["Artificial Men"] (1912, from a previous short story, 1904) by Frederic Pujulà i Vallés (1877-1893), narrates the creation in the laboratories of diverse hominids; La vida del món ["The Life of the World"] (1925) by Clovis Eimeric centers on an expedition to the Sun which causes a catastrophe back on Earth; L'illa del gran experiment ["The Island of the Great Experiment"] (1927) by Onofre Parés (1891-? ) tells of a social and scientific experiment to Terraform the Moon between 1950 and 2000, while Retorn al Sol ["Return to the Sun"] (1936) by Josep M Francès i Ladrón de Cegama (1891-1966) deals with an Underground society founded by the survivors of a future world-wide conflict. Tragically, just when the label 'science fiction' was becoming consolidated in the United States to define a new genre born with the new techno-scientific society of the twentieth century, the outcome of the Spanish Civil War (1936-1939) destroyed the existing movement created in Catalan fiction around the genres of Fantastika. Many of the authors active before the war were forced into exile to avoid the repressive policies of Franco's new right-wing, militarist regime. 
Many books in Catalan were destroyed and all public manifestations of the Catalan language forbidden. One of the main consequences was the destruction in the popular memory of sf production prior to the war. From the cultural catacombs created by this thorough linguistic and cultural repression, all had to be started again – with renewed impetus and eventual success. Only in 1953, fourteen years after the end of the war, could volumes of fantastic short stories by three significant earlier authors – Manuel de Pedrolo, Antoni Ribera (1920-2001) and Joan Perucho (1920-2003) – be published to general acclaim; they would soon become leading names. Joan Perucho's Amb la tècnica de Lovecraft ["Using Lovecraft's Technique"] (1956) not only introduced H P Lovecraft to Catalan speakers but also defined Perucho's own peculiar brand of sf, which includes elements of Gothic and fantasy in books such as Llibre de cavalleries ["The Book of Chivalry"] (coll 1957) and Les històries naturals (1960; trans David H Rosenthal as Natural History 1988). Pere Calders (1912-1994), exiled in Mexico, published L'espiral ["The Spiral"] (1956), an anticipation story directed against the arms race (see Cold War), and Demà, a les tres de la matinada ["Tomorrow, at Three in the Morning"] (1959), which narrates an expedition to the Moon. The slow work of consolidating sf in Catalan continued in the 1960s and 1970s, no doubt reaching a climax with Pedrolo's aforementioned extremely popular Mecanoscrit. 
Prominent examples of 1960s Catalan sf novels are El misteri de Clara ["The Mystery of Clara"] (1962) by Ferran Canyameres i Casamada (1898-1964), a tale about artificial reproduction; La gesta d'en Pamoressi ["Pamoressi's Feat"] (1964), a Hollow Earth tale by Antoni Muset i Ferrer (1892-1968); La gran sotragada ["The Big Shock"] (1965) by Nicolau Rubió i Tudurí (1891-1981), a story within the catastrophe sub-genre; El cronomòbil ["The Chronomobile"] (1966), a time-travel tale, and El mirall de protozous ["The Protozoan Mirror"] (1969), on molecular plasticity, both by Pere Verdaguer (1929- ); also, Paraules d'Opoton el vell ["The Words of Opoton the Elder"] (1968), a uchronia about the discovery of Europe by the Americans. Sebastià Estradé i Rodoreda (1923-2016) introduced in 1967 the sub-genre of Space Opera to what we would call today a Young Adult readership with Més enllà no hi ha fronteres ["There Are No Borders Beyond"] (1967) and Més enllà del misteri ["Beyond Mystery"] (1970). This proved to be the beginning of a field within Catalan sf that remains very productive today. As regards short fiction, between 1966 and 1970 Tele-Estel ["Tele-Star"] – the first (weekly) magazine in Catalan authorized by Franco's regime – published sf by Pere Calders, Antoni Ribera, Lluís Busquets i Grabulosa (1947- ), Màrius Lleget, J Ministral, Pere Verdaguer and J B Xuriguera (1908-1987). Articles by Ribera, Lleget and Sebastià Estradé constituted a first attempt to establish Catalan Fandom. 
Catalan theatre also offered a handful of sf plays in this period: Llibre dels retorns ["The Book of Returns"] (1957), dealing with time transgressions, by Antoni Ribera; Calpúrnia ["Calpurnia"] (1962), on a Robot spy, by Alfred Badia i Gabarró (1912-1994), and Tot enlaire ["Up in the Air"] (1970), focused on an interplanetary agent, by Jaume Picas i Guiu (1921-1976).

The 1970s generated a rich crop of sf in Catalan: La ciutat dels joves ["City of the Young"] (1971), a futurist Utopia by Aurora Bertrana (1892-1974); the novels by Llorenç Villalonga i Pons (1897-1980) Introducció a l'ombra ["Introduction in the Shade"] (1972), about unknown Dimensions, and Andrea Víctrix ["Andrea Victrix"] (1974), a Dystopia imitating Brave New World; the media dystopia L'enquesta del Canal 4 ["The Survey of Channel 4"] (1973) by Avel·lí Artís-Gener (1912-2000); Àngela i els vuit mil policies ["Angela and the 8,000 Policemen"] (1974) by Maria-Aurèlia Capmany (1918-1991), a utopia inspired by the political activist Angela Davis (1944- ); La finestra de gel ["The Ice Window"] (1974) by Anna Murià (1904-2002), a novel about Cryonics; La vedellada de Mister Bigmoney ["Mr. Bigmoney's Bullfight"] (1975) by Pere Verdaguer and Trajecte final (1975) [Final Trajectory (1985)] by Manuel de Pedrolo. Apart from Pedrolo's own Mecanoscrit the other outstanding sf novel to emerge from the 1970s is the dystopian Memòries d'un futur bàrbar ["Memoirs of a Barbarian Future"] (1975) by Montserrat Julió (1929- ). The restrictions on Catalan were gradually lifted in the years following Franco's death in 1975, coinciding with the arrival of democracy in Spain in the period known as the Transition. Pedrolo contributed new sf with the novels Aquesta matinada i potser per sempre ["This Dawn and Perhaps For Ever"] (1980) and Successimultani ["Simultaneousevent"] (1981), on parallel universes and Time Travel.
Pere Verdaguer published Nadina bis ["Nadina Twice"] (1982), L'altra ribera ["The Other Shore"] (1983) and Quaranta-sis quilos d'aigua ["Forty-Six Kilograms of Water"] (1983), works based on his axiomatic concept of sf, which abandons the certainty of classical science in imitation of the axioms of modern mathematics. For his part, Joaquim Carbó (1932- ) published the apocalyptic Calidoscopi de l'aigua i del sol ["The Kaleidoscope of Sun and Water"] (1979). The younger Catalan writers of the following generation involved themselves in popularizing sf through Anthologies such as Lovecraft, Lovecraft! (anth 1981) and individual efforts such as the genre-subverting short story collection Qualsevol-cosa-ficció ["Anything-fiction"] (coll 1976) by Josep ("Pep") Albanell (1945- ), or the novel Grafèmia ["Graphemia"] (1982) by Margarida Aritzeta (1953- ) about the obliteration of writing. Plenty of new sf was directed at young readers, such as La Principal del Poble Moll any 2590 ["The Poble Moll Orchestra in the Year 2590"] (1981) by M Dolors Alibés i Riera (1941-2009), on Time Travel, and El secret del doctor Givert ["Dr. Givert's Secret"] (1981) by Agustí Alcoberro (1958- ), on Robotics. Sf drama continued with the very successful plays by Josep M Benet i Jornet (1940- ): Taller de fantasia ["Fantastic Workshop"] and Supertot ["Superall"] (both 1976), Helena a l'illa del baró Zodíac ["Helena on Baron Zodiac's Island"] (1977) on a mad doctor (see Mad Scientist), and La nau ["The Ship"] (1977), about the trope of the Generation Starship. Other notable plays were Josep M González Cúber's L'abominable home de la Neus ["Neus's Abominable Man"] (1976), dealing with brain transplants, and the collective plays of the avant-garde theatre company Joglars: M-7 Catalònia ["Catalonia M-7"] (1978), Laetius ["Laetius"] (1980) and Olimpic Man Movement [original title in English] (1981). 
The commemoration of Orwell's masterpiece in the emblematic year 1984 inaugurated the modern period of Catalan sf. The first series of books from a Catalan publisher, simply called 2001, started publication with translations into Catalan of Isaac Asimov, Joanna Russ and other great sf authors. Rosa Fabregat i Armengol (1933- ) published an indispensable novel on artificial reproduction, Embrió humà ultracongelat núm. F-77 ["Ultrafrozen Human Embryo F-77"] (1984), followed by Pel camí de l'arbre de la vida ["On the Road of the Tree of Life"] (1985). Montserrat Galícia, the most prolific Catalan sf writer so far – above all, for Young Adult readers – started her career with the space adventure PH1A Copèrnic ["Copernicus PH1A"] (1984); like her, Xavier Borràs (1956- ) addressed a YA audience with Manduca atòmica ["Atomic Grub"] (1984). The next year an essential short story collection was published: Essa efa ["Ess Ef"] (coll 1985), and the first anthology surveying the field of Catalan sf, Narracions de ciència ficció ["Science Fiction Stories"] (anth 1985), edited by Antoni Munné-Jordà (1948- ). The late 1980s and the 1990s saw the overlapping of two generations. Senior literary authors like Pere Calders, Avel·lí Artís-Gener and Joan Perucho were still active; as were others specializing in sf. Antoni Ribera published, among others, El dia dels mutants ["Day of the Mutants"] (1992); Sebastià Estradé wrote A l'espai no hi volem guerra ["We Want no War in Space"] (1993) and Quan tornis, porta una mica de pluja ["Bring Some Rain When You Return"] (1996), whereas Pere Verdaguer penned Àxon ["Axon"] (1985), La dent de coral ["The Coral Tooth"] (1985), La gosseta de Sírius ["Sirius' Little She-Dog"] (1986) and Arc de Sant Martí ["The Rainbow"] (1992).
Members of the following generation also published new works, such as Josep Albanell's L'implacable naufragi de la pols ["The Implacable Wreck of Dust"] (1987), Víctor Mora's Barcelona 2080 (1989) and El parc del terror ["Horror Park"] (1996). Among the younger authors Ricard de la Casa Pérez (1954- ) published Més enllà de l'equació QWR ["Beyond the QWR Equation"] (1992) and Sota pressió ["Under Pressure"] (1996); Xavier Duran wrote Traficant d'idees ["Idea Dealer"] (1994) and Jordi Solé-Camardons (1959- ), Els silencis d'Eslet ["Eslet's Silences"] (1996). YA Catalan sf continued to expand with Robòtia ["Robotics"] (1985) and L'esquelet de la balena ["The Skeleton of the Whale"] (1986) by David Cirici i Alomar (1954- ), Lior ["Lior"] (1995) by Núria Pradas (1954- ) and Montserrat Canela's trilogy Deserts asteroidals ["Asteroid Desert"] (1997-1999). The 1990s were also a decade when new sf awards were established: The Juli Verne (in Andorra, now discontinued), the prestigious Premi UPC (1991- ) and the Manuel de Pedrolo Award (1997- ). Also in 1997 the 'Societat Catalana de Ciència-ficció i Fantasia' (SCCFF) ["Catalan Society of Fantasy and Science Fiction"] was established with the goal of uniting Catalan Fandom. The year 2000 brought a renewed impulse for Catalan sf with the beginnings of the series Ciència-ficció published by Pagès Editors and directed by Munné-Jordà. In 2002 the enormously popular La pell freda ["Cold Skin"] (2002; trans Cheryl Leah Morgan as Cold Skin 2006) by Albert Sánchez Piñol (1965- ) gave Catalan sf its second masterpiece after Mecanoscrit. Víctor Martínez Gil's anthology Els altres mons de la literatura catalana ["The Other Worlds of Catalan Literature"] (anth 2004) brought the Catalan fantastic to a wide mainstream readership. At present, more veteran Catalan sf authors, such as Montserrat Galícia (1947- ), are still maintaining their careers, and for the new writers reading and writing sf is no longer a marginal pursuit.
The list of remarkable novels is, fortunately, long: Testimoni de Narom ["Narom's Testimony"] (2000) by Miquel Barceló (1957- ) and Pedro Jorge Romero (1967- ), El cant de les dunes ["The Song of the Dunes"] (2006) by Jordi de Manuel (1962- ), El cogombre sideral ["The Space Cucumber"] (2000) by Sebastià Roig (1965- ), Hipnofòbia ["Hypnophobia"] (2012) by Salvador Macip (1970- ), Jordi Navarri i Ginestà's Les cartes de Nèxiah ["Nexiah's Letters"] (2009), La febre del vapor ["Steam Fever"] (2011) by Jordi Font-Agustí (1955- ), La mutació sentimental ["The Sentimental Mutation"] (2008) by Carme Torras (1956- ), L'any de la plaga ["The Plague Year"] (2010) by Marc Pastor (1977- ), Joan Marcé's El visitant ["The Visitor"] (2015), Sírius 4 ["Sirius 4"] (2012) by Alfons Mallol Garcia (1980- ) or Jordi Gimeno's El somriure d'un eco ["The Smile of an Echo"] (2013). Finally, the 2000s also saw the emergence of new fanzines such as Miasma ["Miasma"] (2006-2008) and the still on-going Catarsi ["Catharsis"], La lluna en un cove ["Promise the Moon"] and Les males herbes ["Weeds"], all founded in 2009. To sum up, despite the small number of Catalan speakers, there is a very rich Catalan sf tradition, awaiting the critical and academic attention it certainly deserves. The selection of Barcelona to host the 2016 Eurocon, as well as several of the stories assembled in Barcelona Tales (anth 2016) edited by Ian Whates, may signal an international awakening of interest in this literature. [AM-J/SMA]
Montreal in the 1820s was not a cultural haven for the working man. Few educational opportunities existed for workers in the 1820s and 1830s--or for the young immigrants arriving. There were no schools open to them; buying books was expensive. A few libraries offered books, magazines, and newspapers, but they were not free. The early libraries in Montreal could be grouped into three categories:
• those run by religious institutions, not open to the general public;
• those run by professional groups, such as medical and legal libraries, open to members of those professions;
• subscription libraries organized by and dependent on proprietor or shareholder support from men who had already achieved success. They sometimes allowed lower-cost annual and semi-annual memberships, which had to be approved by the proprietors--and thus possibly went to dependents living in the proprietors' households, such as family members, apprentices and clerks.
Probably the earliest of the subscription libraries was the Montreal Library, which opened in 1796. The "proprietors" were English and French-speaking notables, and the share cost was 10 guineas ($42.50); this library amalgamated at times with the Montreal News Room, and existed in various forms until the 1840s. In 1827, the Natural History Society was established, by a group of mostly physicians and educators, to sponsor lectures on scientific topics; it counted a library as one of its essential elements. The Montreal Mechanics' Institution was established in 1828 to provide library facilities and lectures for members and courses for sons of members and apprentices. The Mercantile Library was opened in 1842 by merchants' clerks who opted not to join the Mechanics' Institution, and was thus an effort to provide wider access to persons of lower status; shortly thereafter, the Institut canadien was formed in 1844 to provide a library and debating facility for mainly French-speaking intellectuals, lawyers and other young professionals.
The Natural History Society survived until 1925, and the Mechanics' Institution continues today as the Atwater Library and Computer Centre. The formation of the Montreal Mechanics' Institution (MMI) in 1828 was the first effort in Montreal to provide an educational facility where, for a modest fee, working men and youths could have access to a library as well as classes, lectures, and meetings where discussions (called "conversations") were held on the particular interests or needs of the group. Membership costs were modest, befitting the incomes of the people it was designed to serve; in 1831, new members of the MMI were required to pay 20 shillings and seven pence half-penny, of which 10 shillings was considered an entrance fee. At the time, "mechanics" referred broadly to skilled workers such as carpenters, joiners, tanners, blacksmiths, plasterers, masons, painters, coopers, plumbers and bootmakers. The MMI also welcomed employers of the skilled, such as builders, iron founders, printers, brewers, confectioners, innkeepers and bookkeepers, and was open to members of the new professions, including surveyors, civil engineers and architects. Lawyers and doctors also joined the new organization, perhaps partly as a means of contributing to society. The officers of the first MMI were a cross-section of business and civic-minded community leaders, chosen for their interest in education, and the active MMI leadership was generally provided by craftsmen, as required by the MMI constitution. The History of the Book in Canada notes that "the principles of equality, accessibility and utility [that the Mechanics' Institutes embodied] anticipated the development of free public libraries later in the 19th century." (This did not apply universally in Quebec, where the 1828 Montreal Mechanics' Institution survives today as the Atwater Library and Computer Centre, and remains at its core a subscription library.)
Among the aims of the MMI in 1828 were to establish:

• a library and reading room;
• a museum of machinery and models, minerals and natural history;
• a school for teaching such subjects as arithmetic and algebra and their applications in architecture and navigation, as well as languages;
• lectures on natural and experimental philosophy, practical mechanics, astronomy, chemistry, civil history, political economy, literature and the arts.

MMI activities were disrupted by devastating cholera outbreaks in 1832 and 1834, and by political tensions involving liberal thinkers seeking to loosen Britain's control over the governments of Upper and Lower Canada. MMI activities ceased abruptly in April 1835, brought about apparently by the deepening political crisis that evolved into the Lower Canada rebellions of 1837-38. There was a hiatus in MMI activities until 1840, when the Institution amalgamated with the newly formed Mechanics' Institute of Montreal, led by the contractor John Redpath, who had been active in the earlier organization as early as 1831. Glimpses of what interested the upwardly mobile working-class and young professional men of the time, and how they sought to develop educational resources and expertise, are revealed in the minutes of the (sometimes) weekly meetings of the Montreal Mechanics' Institution. The minutes of the "Committee of Managers" have been lost. MMI members appear to have been guided in their activities not only by the patterns set in the British mechanics' institutes, where education was emphasized, but also by the activities of the Franklin Institute in Philadelphia, founded in 1824 along the same principles. This is of relevance in the area of patents, a particular interest of Dr. Isaac Hays, a long-time active member of the Franklin Institute, who was a corresponding member of the MMI and a relative of MMI member Moses Judah Hays.
One activity of the MMI was to provide a forum for members' inventions; it may have helped in patent applications in England and Canada. Such applications required a description, a diagram and, where feasible, a model. The histories of mechanics' institutes often carry the refrain expressed in the preface to Bruce Sinclair's Philadelphia's Philosopher Mechanics: "On public occasions leaders of the [Franklin] Institute frequently hearkened back to the democratic impulses which gave it its early vitality. But educational reforms initially designed to benefit poor and disadvantaged artisans usually served a more literate clientele." This appears to have been true at the MMI from the beginning. How successful were the founders of the Montreal Mechanics' Institution in achieving their aims? Certainly their efforts were not always completed or effective, but on the whole the 1828-35 efforts of the MMI members achieved modest success. The projected school got off to the slowest start, in part because of the difficulty of finding suitable teachers that the MMI could afford on its low fee structure.

Following are edited excerpts from the 1828-35 MMI minutes, focusing on the range of practical and scientific discussions. Some spellings have been modernized, or corrected where research has indicated errors may have been made in the original minutes. Also incorporated is research that has identified how some MMI members earned a living.

16 December: MMI founder, the Rev. Henry Esson, gave the introductory lecture: "Objects and Advantages of Mechanics Institutions." The Rev. Esson was minister at the Presbyterian St. Gabriel's Street Church, and a founding member of both the Montreal Library and the Natural History Society.

23 December: William Antrobus Holwell [Hallowell], ordnance officer, proposed an improvement in the construction of steam engine valves.
A "secret" committee was struck to report on the merits of the proposed improvement: Joseph Clark, builder/surveyor/architect; John Henderson, civil engineer/iron founder; James Clarke; [Guy or Joseph] Warwick, iron founders; and John Bennet, iron founder. Alexander Stevenson, surveyor/schoolmaster, read an essay on "Causes and Cures of Cahots" [potholes]. A committee was struck to consider the suggestions: William Shand, cabinetmaker and builder; Alexander Stevenson; William Holwell; Teavill Appleton, builder; William Boston, painter; [Alfred or Francis] Howson. William Shand suggested the importance of introducing to the "parent country" the Canadian gin [crane] for raising large logs of timber, and the Canadian truck or cart for raising and carrying off large burdens. (The Canadian gin had been used to raise the timber in building Notre Dame Church.) The suggestions were referred to the "Cahot committee."

30 December: William Boston submitted specimens of a paint pigment, and a paper descriptive of its qualities and use. "The paper was held in retentis." (His submission may have been inspired by William Green, who had submitted a paint specimen to the Literary and Historical Society in Quebec City and had been awarded a prize by the British Society of Arts and Manufacture.) William Shand promised to furnish a paper and drafts relative to the Canadian gin and truck cart.

6 January: Aaron Philip Hart, lawyer, presented an improved scuttle pipe. It was unanimously agreed to. Joseph Clark read an essay on "Progress of the Arts." Mr. Esson gave a sketch of the manner in which the Society could direct its attention.

13 January: William Boston donated four copies of Nicholson's Journal [Journal of Natural Philosophy, Chemistry and the Arts].

20 January: Aaron Philip Hart read an essay on "Prison Discipline," to which there were many comments. It was read again 31 March 1829.
27 January: William Ayers [Ayres], painter, read from the American Mechanics magazine, published by the Franklin Institute, about a baking machine to obviate the unseemly practice of immersing hands and arms in the material in the preparation of bread. Mr. Ayers said he had completed a model of a machine to the end proposed, and sought a committee to examine it. Committee appointed: Clarke; James Poet, turner; Teavill Appleton; John Cliff, builder/architect; Joseph Russell Bronsdon, builder/fireman.

31 March: (Dr.) A. J. Christie, MMI corresponding member, military and civilian doctor at Bytown during the building of the Rideau Canal, editor of the Bytown Gazette, and former editor of the Montreal Herald and the Montreal Gazette, donated Linnaeus' The System of Nature.

29 March: Joseph Clark read an essay on "The Excellence and Utility of the Arts." A committee was appointed to devise the best means of carrying out the course of education prescribed in the Constitution: Louis Gugy, MMI president and sheriff of Montreal; Horatio Gates, merchant and MMI vice-president; William Shand; Aaron P. Hart; Charles Wand, builder; Robert Cleghorn, garden nursery owner; George Holman, navigator; the Rev. Henry Esson; John Molson, industrialist; Clarke; William Holwell; William Boston; George Gray, furniture maker; Howson; Warwick; William Buchanan.

31 March: Mr. Sinnott suggested an improvement for brickmaking. Committee appointed to examine and report thereon: Messrs Clarke, Stevenson, Ayers, Cooper, Wand.

7 April: James Cooper, joiner, suggested a Canadian window & door.

14 April: A Query Book was set up, in order for members to suggest topics for discussion. A further explanation was requested on Mr. Bronsdon's report respecting Mr. Ayers' invention of a baking machine. Alexander Stevenson read an essay on "Nature and Culture of Lucerne." James Cooper presented a [plan of an apparatus] for boiling grain, feeding cattle, heating water etc., and read an explanation of same.
Referred to Committee of Management for consideration.

21 April: A secret committee had investigated the merits of William Holwell's model of a new type of steam valve, and its report was received and read. Mr. Holwell deposited the model with the MMI. A copy of it would be sent to England, along with a copy of the report.

28 April: Alexander Stevenson suggested a subject for discussion at the next meeting: "What is the reason why the rivers running southward have not the abrupt rapids that exist in those that run in a northerly direction?"

5 May: The Rev. Esson suggested a discussion subject: "What are the peculiar advantages to be obtained from mechanics' institutes in the existing state of society in this part of the world?" William Ayers suggested a Query: "What is the cause of water spouts in seas, lakes and rivers?"

12 May: Subjects for conversation at weekly meetings were to be made public in city newspapers so that members could be acquainted with them. William J. Spence, printer, submitted an improvement for supplying ink to the type used in the printing press. Committee to examine and report: William Boston; Clarke; Alexander Stevenson; Robert Armour, merchant/printer; William Holwell. (This inking machine later went into commercial production.)

26 May: William Holwell proposed an improvement in the construction of watch keys. He was requested to furnish a plan and description. James Cooper asked, "Has it ever been published to the world which is the best way of cutting timber into planks and board to promote strength, durability and elegance for all the particular purposes of joiners and cabinet work? As it is generally known that almost all the evils arising from contracting, splitting and twisting of boards and planks is through mis-management in cutting them out of the log." The question was entered into the Query Book.

28 July: Aaron Philip Hart said he would deliver an essay on "The Discovery and Progress of Architecture." [It was never delivered.]
William Satchwell Leney, engraver, was awarded a life membership for his gift of a copper plate for Institute cards. He donated Manual on Museum Français; Lives and Works of the Most Celebrated Painters; and, by Mr. Maclean, Companion to the Glasgow Botanic Garden.

29 September: Lucius L. Solomons donated a box containing "several specimens of mineralogy, also an analysis of Saratoga Water."

6 October: James Cooper presented two pieces of timber and gave his opinion of each, further to his query of 26 May. He was requested to give his opinion in writing.

13 October: James Cooper submitted a Query: "What would be the advantages of a Rail Road between the town of Montreal and the best stone quarry in its vicinity that would supply the demands for building, exportation, roads and all other public works? What would be the probable expense of constructing it in the best manner, and what would be the best course to pursue in the execution of all the various parts of the work to promote the interests of this town and the province in general?" Ordered to be entered into the book as Query No. 8.

20 October: Corresponding member Thomas O'Neill (O'Neil) of Boncher Pointe, Bonnechere Township, Upper Canada, donated natural history specimens.

15 December: Alexander Stevenson read an "Essay on Pure Lime." The report of the sub-committee on Mr. Cooper's apparatus for boiling grain or feeding milk cows and other cattle was handed in from the Committee of Management.

22 December: Alexander Stevenson read a continuation of his answer to Query No. 3, being an "Essay on the Impurities of Lime (Hydraulic Mortar)."

29 December: Benjamin Workman, teacher, and Robert Armour, Jr., lawyer, from the Natural History Society appeared. They noted that, out of an NHS grant from the Provincial Parliament, £20 was directed to be spent on the purchase of Philosophical Instruments. That sum being inadequate, Andrew F.
Holmes, M.D., Professor of Chemistry in the University of McGill College, had kindly volunteered to deliver a course of popular lectures on Chemistry, illustrated by experiments, in aid of the instrument fund. The NHS had accepted this offer and had fixed the admission for its members and those of its sister institution, the Mechanics' Institution, at 10p. Gentlemen not associated with either institution would be charged 15p, and ladies 7p5. The MMI meeting agreed to this proposal. Alexander Stevenson announced an "Essay on Gypsum (Selenite)." Robert Armour submitted a Query: "To Whom is the Palm of Merit mostly due for their Eminence in the Arts and Sciences, the Ancients or the Moderns?"

5 January: Robert Cleghorn presented to the MMI a Register of Meteorological Observations taken at his Blinkbonny Gardens for the year 1829.

15 January: Dr. A. F. Holmes offered the use of his cabinet of minerals to the gentleman about to give a course of lectures on Materials for Building.

19 January: Joseph Clark read an "Essay on Principles of Architecture."

26 January: J. T. Gaudet presented a box, sent by corresponding member Richard Power, containing a shell snail from the farm of Commissary Forbes at Carillon on the River Ottawa. Mr. T. French donated a large red-crested woodpecker, or poule du bois.

2 February: Henry Johnson and William Smaille presented to the MMI several very beautiful specimens of [sphene school] black lead [graphite] from the Plumbago mine in the Township of Chatham, Lower Canada. Alexander Stevenson laid before the meeting a plan of a winter carriage referred to in his previous essay on Cahots.

16 February: Alexander Stevenson presented a letter from Donald Livingston, Esq., DPS of Land, Mount Johnson, containing a solution to lawyer James R. Pomainville's Query No. 9. William Spence applied to the Institution for a certificate relative to the improvement he had made for the distribution of ink to the printing press.
Said application was referred to the sub-committee appointed to investigate its merits.

9 March: "The report of the sub-committee appointed (in May last) to examine the improvement by W. J. Spence in the hand printing press for the distribution of ink etc was made to the meeting." [On December 19, 1829, W. J. Spence had obtained the 9th patent ever issued in Canada, for "a machine for distributing ink over printing types." On May 31, 1830, a contract was notarized at the office of George D. Arnoldi for John Fellows, blacksmith (also an MMI member), to produce the inking machine of William John Spence.] George Holman read an answer to lawyer J. R. Pomainville's Query No. 9 from "a friend in the country. The opinion was that the answer is a general application to isosceles right-angled triangles but is not in point with respect to the proposed triangle."

16 March: Donald Livingston presented an answer to J. R. Pomainville's Query No. 9 from John P. Johnson of the Cote near the Tannery.

23 March: Samuel Joseph of St. Jacques, Township of Rawdon, donated a specimen of mineralogy.

30 March: Willard Ferdinand Wentzel, retired fur trader, donated a specimen of claystone from Nipigon, to the northward of Lake Superior, also a piece of spunt or Indian tinder.

13 April: A paper was handed in by William Boston concerning the mechanical construction of a fireproof safe "apparently invented by John Scott." Mr. Boston recommended that a sub-committee be appointed to examine it. It was referred to the Committee of Management to appoint a review committee.

24 June: James Cooper invited members to examine a geometrical staircase at his dwelling.

6 July: David Hollinger presented eight specimens of minerals from the Falls at Niagara and its vicinity.

13 July: Samuel Joseph, tobacconist and merchant, donated an Indian carved pipe in the form of a monkey. Charles Lamontagne donated a specimen of talc. A curious stone and two handsome specimens of gum copal were presented by William Ayers.
A remarkably handsome knot from the butternut tree was presented by James Allison. James Allison, land agent, reported that there were in his opinion about 140 members of the MMI. By donations and deposits, many valuable works had been added to the library; about 600 specimens were in the museum, plus valuable specimens of the mineral kingdom and other curiosities.

17 August: Thomas A. Starke, printer/bookstore owner, donated six books: Scott's Mechanics' Magazine; Marly, or A Planter's Life in Jamaica; J. J. Griffin, A Practical Treatise on the Use of the Blowpipe; W. M. Wade, The History of Glasgow, Ancient and Modern; William Paley, Natural Theology. James Snedden donated a petrified snake.

31 August: Thomas O'Neill donated the skull, tail, hind foot, forefoot and smellows of a beaver; a hummingbird; velvet buds of deer horn.

7 September: John James Williams donated two books, one 287 years old and the other 190 years old, plus eight ancient coins.

21 December: James Allison donated part of an elk's horn taken out of the Ottawa River.

26 January (second anniversary meeting): President Louis Gugy was not in attendance, but sent a donation of six books on Experimental Philosophy. Benjamin Workman, teacher, offered to give a few discussions on Practical Geometry to any members of the Institution who chose to attend. Benjamin Workman gave a report of the Committee of Management on the progress of the Institution. A vote of thanks was proposed to (teacher) Mrs. James Huddell for the donation of "a great Natural Curiosity."

1 February: Corresponding member Samuel Hudson, machinist, "gave an interesting explanation of the method of hardening steel, partly in answer to Query No. 12 entered by Henry Johnson." Samuel Hudson gave his opinion re Query No.
5 by Alexander Stevenson, "What is the reason why the rivers running southward have not the abrupt rapids as those running in a northerly direction?" Answer: "The rocks generally incline to the southward and water running in that direction forms an incline plane, but running in a contrary direction, it falls abruptly over the edge of the rocks."

22 February: Joseph Andrews, master builder, gave mahogany compasses and John White gave Coopers compasses to assist Benjamin Workman in his course of lectures. George Holman said that James Matthews [millwright, Niagara?] intended to lay before the Institution a model of a bridge for their inspection and asked for a sub-committee to examine its merits. Committee appointed: Joseph Clark; William Shand; Joseph R. Bronsdon; J. Andrews; John Fellow(s); John Redpath; William Lauder, builder; Samuel Hudson; Lieut. William Bradford, 8th Regiment.

29 March: The corresponding secretary read a letter from James Matthews on the subject of his model of a bridge, left for the inspection of the members some time past. Query No. 14 by Samuel Hudson was read. Joshua Woodhouse, grocer, gave his opinion: "That in the day the ray of the sun causes the water to expand and becomes impregnated with the common air. Consequently, the water is lighter in the daytime than at night and does not condense the steam so quick in the daylight as at night."

5 April: Alexander Stevenson read a paper in answer to Query No. 9. He requested that a sub-committee be appointed to examine the same and report thereon.

12 April: Thanks were given to Robert Cleghorn of Blink Bonny Gardens in the vicinity of Montreal for his neat and useful Diary of the Weather for the Year 1830. Received from James Teacher of Lachine, an answer to Query No. 9; referred to the sub-committee. Donation by George Holman of a piece of lace bark from the Island of Jamaica in the West Indies.
19 April: Messrs Douglas and Wilkinson gave the front piece of an American military cap, a gorget and part of a bayonet, dug up by a plough at Chateauguay, where "the Americans were defeated in the last war."

26 April: Alexander Stevenson reported that he had received a solution to Query No. 9 in algebraic characters, and that he had delivered it to Benjamin Workman for examination and had requested him to lay the same before the Managing Committee for it to be referred to the Sub-Committee according to the usual form. Donation received from Mr. Allen of Cote à Baron: a potato of curious growth, having a piece of iron attached to it. William Boston donated a book on Turning; a Land Steward's Guide; and an ancient Roman coin of the reign of Emperor Commodus, who began his reign in AD 180.

3 May: Plan of Mr. Dow's bridge referred to the Managing Committee.

7 June: Received from (Dr.) A. J. Christie a box containing specimens of materials used in construction of the public works on the Rideau Canal: stone used at Cataraqui, Jones' Falls, Davis' Falls and Smiths Falls; natural stone found at Davis' Falls; specimens of pyrites found near Old Slicers on the Rideau River; Bytown, 28 May 1831. Donation by James Allison of a number of natural curiosities collected by him on a tour through Upper Canada.

6 December: Donation by James Allison of a petrified turtle, from a young lady. Lewis Betts, engineer, donated a book containing views of the Liverpool Railroad. William Holwell donated a book by Dr. Brewster, On the Kaleidoscope.

10 January: Robert McGinnis donated a Greek & Latin Testament.

24 January: Alexander Stevenson donated specimens and curiosities taken by him from different excavations through Point à Moulin near the Cedars, Upper Canada. A number of specimens of minerals from Saxony were presented by James Allison, from J.-M. Arnault, machinist, of Montreal.
21 September: A meeting was called to investigate the merits of a plan said to be an improvement on the present mode of propelling steamboats. Present: Louis Gugy; William Boston; Alexander Stevenson; Samuel Hudson; Lewis Betts; James Cooper; John Whitelaw, carpenter; James Poet, turner. "Resolved that the model now exhibited by Mr. John Pigott [Piggot] of 3 Rivers offers in principle much simplicity and some novelty and bids fair to overcome in its future application the principle of acknowledged objections against the work of propelling steam engines now in general use, to wit, by paddle wheels, and such as a valuable improvement in the application of power, is highly deserving the approbation and encouragement of this Institution. "Wherefore the Society hereby recommends the inventor and his plan to the attention of all persons or Societies having mechanical improvements for their principal objective. "Resolved secondly that Mr. John Pigott be requested to furnish this Society with a plan, elevation and specifications of his said invention." "The Society as a tribute to the considered merit of the said plan awarded Mr. Pigott the sum of Thirty Dollars." Alexander Stevenson donated the skeleton of a frog.

30 January: Received a model of a bridge and three plans of bridges from James Porteous, Esq. of St. Therese, corresponding member, by the hands of William Boston.

5 February: James Cooper delivered remarks upon the injurious effects of the frost on stone foundations of wooden houses, walls, etc., and an effectual mode of preventing the same. He was requested to give the same in writing.

26 February: James Cooper presented in writing a mode of securing the foundations of buildings from the injurious effects of frost.

5 March: Alexander Skakel, schoolmaster, prepared to deliver the first of a course of lectures to MMI members on 12 March at his own rooms.
"Declines remuneration, but allows the Society to sell tickets if they think proper, to those who are not members, but expects that a sum of money will be expended for models of machinery etc equal to the value of his services, which models shall belong to the Society, of which he will be entitled to the use until required." Resolved that tickets shall be 1/3 for each lecture. Members' families and apprentices to be admitted gratis. Resolved, the public to be notified in the public journals of the city; 100 tickets to be printed.

19 March: Book purchase: History of the Power of Steam.

26 March: Book donations from William Holwell; Howson; Clarke; and John Dougall, Cabinet of Art and Arcana of Science.

4 June: Book donation from Shannon Peet, late of Boston: Cobbett's Grammar.

26 November: "The members present (as well also in concurrence with the recommendations of the Committee of Management) having taken into consideration the propriety of establishing a School for the Instruction of the sons and the apprentices of the members have resolved to commence the same (if possible) on Wednesday evening the 4th of December next at 7 o'clock in the evening, and that the members generally be apprised of the same through the public journals of this city. And a suitable advertisement be prepared by the sub-committee that was chosen on the 25th Inst. to devise the best mode of establishing the said school."

3 December: A report from the sub-committee appointed for devising the best means of establishing an evening school was handed to the chairman and read. When some objections were made by the members present, a portion of the said report was returned to the said sub-committee for alteration. Moved that advertisements be immediately prepared and inserted in the Montreal Herald, Courant and Gazette newspapers, apprising the members generally that the School will open on Wednesday evening, the 19th Inst. at 7 o'clock.
10 December: John Durie, teacher, was allowed the use of one room for a day school, provided he would devote two hours of his time on each of the evenings that he might be required, in his capacity as Teacher, to instruct the sons and apprentices of the Members of this Institution. The School sub-committee reported they had reason to hope there would be about 15 of the sons and apprentices of the members at the commencement of the Evening School tomorrow, Wednesday evening, at 7 o'clock.

17 December: The Committee of Managers agreed last evening that the School sub-committee should be empowered to engage a competent master to teach the sons and apprentices the art of drawing. Thomas Mitchell, teacher, offered to give a course of lectures on Political Economy. Members to be apprised by advertisements in the public journals, inviting them, their sons and apprentices.

24 December: John Cliff was appointed Drawing Master to the Institution. The School shall be open Monday evening, 30th December, and shall continue open every Monday, Wednesday, Thursday and Friday evening at 7 o'clock. James Allison and Joshua Woodhouse met with the Hon. Louis-Joseph Papineau. "The Honorable Gentleman proposed the following query which he requested should be recommended to the serious consideration of the Institution, viz. Could the united ingenuity of the Members of the MMI invent a method to warm houses and public buildings in Canada from a cheaper, cleaner and in all respects a better plan than the present mode with stoves."

10 January: John Redpath donated 60 numbers of the Repertory of Inventions, published in London. Thomas Mitchell, Esq. gave a lecture on Political Economy.

21 January: Horatio Carter, chemist/pharmacist, donated 13 numbers of the Verulum, six numbers of The Guide to Knowledge, and the Penny Magazine, all published in London.

28 January: Horatio Carter gave a preliminary lecture on Chemistry to a crowded audience. The next lecture was scheduled for the following Tuesday.
11 February: Robert Cleghorn was thanked for his diary of the weather, 1832 and 1833. J. Rattray, tobacconist, donated a book, Scientific Irrigation. Horatio Carter was thanked for two lectures: "General Principles of Chemistry" and the "Nature and Properties of the [Blow Pipe] Oxy Hydrogen Gas." Thomas Mitchell gave a lecture on Political Economy.

18 February: Horatio Carter was to purchase, in London, books and apparatus for the MMI, with money donated by Gillespie, Moffat & Co.

25 February: Dr. William Primrose-Smith gave 116 numbers of the Dictionary of Mechanical Science, the completion of the work. George Bernard, livery stable keeper, submitted for inspection and approval or disapproval models of two-wheel carriages, said to be on an entirely new principle. Sub-committee appointed: Joshua Woodhouse; James Cooper; Teavill Appleton; John Cliff; George Garth, plumber.

18 March: The sub-committee reported "somewhat favourably" on Mr. Bernard's models of carts.

10 February: A committee of James Poet, John Cliff and James Allison was appointed to make enquiries for suitable rooms for accommodation from May 1, 1835, and to report to the Committee of Managers as early as possible. Alexander Stevenson donated two Indian spearheads found by him at the Seignory of Beauharnois.

17 March: Treasurer John White reported he had effected insurance at the Phoenix Fire Office on the property of the MM Institution to the amount of Two Hundred Pounds for one year, at ten shillings and sixpence per one hundred pounds.
24 March (Annual Meeting): "Resolved that a permanent committee be appointed to take the necessary steps for establishing as soon as possible an elementary school in conformity with the constitution and bylaws."

7 April [last item of business recorded in the minutes of the MMI]: "Resolved that the present meeting recommends to the Committee of Managers to consider of the propriety of having the letter S added to the word Mechanic on the engraved Plate used for striking off the Cards of the Institution."

Sources:
Archives of the Atwater Library and Computer Centre.
Kuntz, Harry, "The Educational Work of the Two Montreal Mechanics Institutes" (master's thesis), Concordia University, 1993.
Kuntz, Harry, "The Montreal Library 1796-1843," unpublished essay.
Mackey, Frank, Steamboat Connections.
Passfield, Robert, Building the Rideau Canal.
Tulchinsky, Gerald, River Barons.
Historical Atlas of Canada, Online Learning Project, "The Printed World 1752-1900."
List of Canadian Patents from 1824.
Dictionary of Canadian Biography Online.
History of the Book in Canada.
Background. Tuberculosis (TB) is a leading cause of morbidity and mortality worldwide. In Armenia, case reports of active TB increased from 590 to 1538 between 1990 and 2003. However, the TB case detection rate in Armenia in 2007 was only 51%, indicating that many cases go undetected or that suspected cases are not referred for confirmatory diagnosis. Understanding why Armenians delay or do not seek TB medical care is important to increase detection rates, improve treatment outcomes, and reduce ongoing transmission.

Methods. Two hundred forty patients hospitalized between August 2006 and September 2007 at two Armenian TB reference hospitals were interviewed about their symptoms, when they sought medical attention after symptom onset, the outcomes of their first medical visit, and when they began treatment after diagnosis. We used logistic regression modeling to identify reasons for delay in diagnosis.

Results. Fatigue and weight loss were significantly associated with delay in seeking medical attention [aOR = 2.47 (95% CI = 1.15, 5.29); aOR = 2.99 (95% CI = 1.46, 6.14), resp.], while having night sweats protected against delay [aOR = 0.48 (95% CI = 0.24, 0.96)]. Believing the illness to be something other than TB was also significantly associated with delay [aOR = 2.63 (95% CI = 1.13, 6.12)]. Almost 20% of the 240 TB patients were neither diagnosed at their first medical visit nor referred for further evaluation.

Conclusions. This study showed that raising awareness of the signs and symptoms of TB among both the public and clinical communities is urgently needed.

Although tuberculosis (TB) is both preventable and curable, it remains a leading infectious cause of morbidity and mortality worldwide. In 2007, the World Health Organization (WHO) estimated 9.27 million new cases of TB, with 1.3 million deaths.
To decrease the impact of TB, the United Nations included TB prevention and control among its eight Millennium Development Goals, with a proposal to reduce TB incidence to half the 1990 level by 2015. To measure and achieve this, the World Health Assembly (WHA) highlighted two indicators: 70% global and in-country case detection rates and successful treatment of 85% of cases. Many countries, including the majority of those belonging to the former Soviet Union (FSU), have been unable to meet these targets. The breakup of the Soviet Union had devastating effects on public health infrastructure and the ability to deliver care. Severe economic and political turmoil made health system reconstruction difficult, creating favorable conditions for the rise of infectious diseases such as TB. In the Republic of Armenia—a landlocked country in the southern Caucasus with a population of approximately 3 million—case reports of active TB increased almost threefold between 1990 and 2003, from 590 to 1538 cases. Although epidemiologic data are at times discrepant, in-country experts recognized that these data likely underestimated true TB morbidity. Armenia, along with most FSU countries, is one of the WHO European Region's 18 High Priority Countries for TB control. In addition to high levels of TB in many countries, the WHO European Region has the highest levels of drug-resistant forms of TB in the world. In 2007 in Armenia, an estimated 9.4% of new sputum-smear positive cases and 43% of previously-treated cases had multidrug-resistant forms of TB. In-country experts attribute such high levels of drug resistance to treatment default and failure. Over the past decade, Armenia has made considerable progress in rebuilding its TB control program; however, it has been a slow process, made difficult by a lack of precedent (no national TB program existed in the country prior to independence), a lack of public health infrastructure, and unreliable communication systems.
Therefore, it is not surprising that, as of 2007, Armenia had been unable to meet the WHA TB targets. A case detection rate of 51% for new sputum-smear positive cases for that year shows that a substantial proportion of TB cases were either not detected or not referred for confirmation. Many factors likely contribute to the low TB case detection rate in Armenia. Numerous countries have conducted studies [7–13] to identify factors related to diagnostic delays of TB, and in a systematic review by Storla, Yimer and Bjune of 58 such studies, the authors identified sociodemographic and economic factors, as well as the amount of time it took to reach a health facility and the type and number of healthcare providers visited, as the most common determinants of delay worldwide. However, studies to identify specific risk factors for the Armenian population have never been conducted. Understanding why citizens do not seek medical attention, do not pursue specialized attention following referral, or do not receive a referral for additional diagnostic services is important. Delayed diagnosis or delayed initiation of treatment increases the risk of more severe and harder-to-treat forms of TB, and also increases the risk of ongoing transmission. Therefore, we conducted this study to understand and assess the barriers to proper and timely TB diagnosis and treatment. We conducted a systematic study of patients hospitalized from August 2006 until September 2007 at the Yerevan City and the Republican Dispensaries, the two reference TB hospitals in Armenia. Following focus group discussions and pilot testing to refine the survey instrument, patient interviews began in the fall of 2006 and were conducted by trained students in the public health program of the American University of Armenia. Students received training by the study designer (SO) on principles of interview ethics and interviewing processes.
Most patient interviews were conducted at the Republican Dispensary in Abovian Marz, the national TB diagnostic and treatment facility to which all suspected cases throughout the country are referred for diagnostic confirmation. In the past several years, strong financial support has been provided to Armenia by a number of international organizations, such as the German Gesellschaft für Technische Zusammenarbeit (GTZ), the Global Fund to Fight AIDS, Tuberculosis and Malaria, and the Red Cross, which has enabled both microscopic and culture examinations for each TB case to be performed at the National Reference Laboratory located at the Republican TB Dispensary. All culture-confirmed TB cases are admitted as in-patients while undergoing the initial phase of therapy. Because all confirmed TB patients in Armenia are referred to one of these dispensaries for treatment, this population represents all known TB cases in the country. Therefore, all culture-confirmed TB in-patients present at the time of the interviewer visits were eligible for inclusion in the study (the study group included both new and relapsed patients). Our systematic sample of the inpatient TB population—in which interviewers visited every hospital room to identify patients willing to participate—yielded a participation rate of approximately 80% (a total of 240 patients). Interviewers collected demographic information and asked each patient to recall when they first began to feel ill, what symptoms they experienced, how long after symptom onset they waited to see a doctor, their reasons for delaying medical evaluation, the type of facility at which they first sought care, the initial diagnosis, if they had been referred for further evaluation, and their adherence to treatment once diagnosed with TB. In 2006, 129 of the planned 250 surveys were completed. Data collection resumed in the fall of 2007 and a total of 240 interviews were conducted. 
Descriptive statistics were calculated to assess participant socio-demographic characteristics which were then compared to the sociodemographic characteristics of the Armenian population. Frequency of symptoms that were experienced by participants prior to seeking medical care as well as reasons for delay in seeking medical care were also calculated. Logistic regression models were used to determine which factors were most associated with a delay in seeking medical attention after symptom onset. A review of the relevant literature failed to identify a standard definition for patient delay among persons eventually diagnosed with TB. After consultations with physicians and others knowledgeable in the field of TB, we determined that for the purposes of our study, a person was classified as a “delay” if he or she experienced hemoptysis or fever for more than three weeks, or if he or she experienced cough, fatigue, night sweats, or weight loss for more than six weeks before seeking medical attention. Crude odds ratios were calculated for the delay variable against all other variables of interest. Adjusted odds ratios were obtained using multivariate logistic regression modeling techniques. Since we wished to select only those variables that might be important predictors of delay for inclusion in the model, a backwards elimination strategy was employed. Because the number of participants was small relative to the number of variables we wished to assess, we divided up the variables and performed two separate backwards elimination tests. The first backwards elimination model contained only sociodemographic characteristics such as gender, age, marz (region), education level, marital status, and employment status. The second model included characteristics such as the type of facility at which participants first sought care, their symptoms, if they had knowledge of prior contact with a TB patient, the year the interview was conducted, and reasons for delay. 
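The delay classification rule described above can be sketched as a small function. The week thresholds restate the paper's definition; the symptom keys, dict-based layout, and function name are illustrative assumptions, not the study's actual code.

```python
# Sketch of the study's "delay" classification rule (illustrative layout).
ACUTE_SYMPTOMS = {"hemoptysis", "fever"}                                # delay if > 3 weeks
CHRONIC_SYMPTOMS = {"cough", "fatigue", "night_sweats", "weight_loss"}  # delay if > 6 weeks

def is_delay(symptom_weeks):
    """symptom_weeks maps a symptom name to the number of weeks it was
    experienced before the patient sought medical attention."""
    for symptom, weeks in symptom_weeks.items():
        if symptom in ACUTE_SYMPTOMS and weeks > 3:
            return True
        if symptom in CHRONIC_SYMPTOMS and weeks > 6:
            return True
    return False

print(is_delay({"fever": 4}))                # True: fever for more than three weeks
print(is_delay({"cough": 5, "fatigue": 6}))  # False: neither exceeds six weeks
```

A patient counts as a "delay" if any single symptom exceeds its threshold, mirroring the either/or wording of the definition.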
Variables shown to be significant in the crude analysis along with variables selected using the backward elimination procedures were included in the final model and adjusted odds ratios were calculated. The final model included the following variables: fatigue, night sweats, weight loss, thinking one had something other than TB, and thinking that the cost of seeking a medical evaluation would be too expensive. Similar techniques were used to identify risk factors for failure to receive a referral for further evaluation if the participant was not diagnosed with TB at the first medical visit. Crude odds ratios were calculated for all variables and then backward elimination analyses were conducted. All analyses were conducted using SAS statistical software version 9.1 (SAS Institute Inc., Cary, NC, USA). Ethical approval for this study was obtained from the American University of Armenia's Institutional Review Board and was carried out in accordance with Armenian data collection and confidentiality regulations. Written informed consent was obtained from all participants prior to study enrollment. There were a total of 887 cases notified to the Republican Dispensary during our study period, August 2006–September 2007. Cumulatively, our study took place over approximately four months during this 14-month period, and our 240 participants represent 27.1% of these cases. Of those who were enrolled, 80% were male, 54% were married, 82% were unemployed, 10% had less than a junior-high education, and 30% were from Yerevan (Table 1). Ages of participants ranged from 8–77 (mean: 38.1; median: 38). 
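As a concrete illustration of the crude odds ratios mentioned above, a 2x2 exposure/outcome table yields an OR with a Wald-type 95% CI as follows. The cell counts are invented for demonstration; the study's actual tables are not reproduced here.

```python
import math

def crude_odds_ratio(exp_case, exp_noncase, unexp_case, unexp_noncase):
    """Crude odds ratio with a Wald 95% CI from a 2x2 exposure/outcome table."""
    or_ = (exp_case * unexp_noncase) / (exp_noncase * unexp_case)
    se_log_or = math.sqrt(1 / exp_case + 1 / exp_noncase
                          + 1 / unexp_case + 1 / unexp_noncase)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Invented counts, e.g. a symptom among delay vs. non-delay patients.
or_, lo, hi = crude_odds_ratio(40, 30, 49, 99)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}, {hi:.2f})")  # prints "OR = 2.69 (95% CI 1.50, 4.83)"
```

The adjusted ORs reported in the paper come from multivariate logistic regression rather than this single-table calculation, but the crude OR is the screening statistic computed for every candidate variable.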
When we compared a number of sociodemographic characteristics of the participants to those of the population of Armenia to assess how representative TB patients may be of the general population, we found that the regional distribution of participants, overall, was similar to the regional composition of Armenia; the distributions of other sociodemographic variables such as gender, marital and employment status, age, and education level showed a number of significant differences between our population of TB patients and the Armenian population. Of the 209 participants for whom health-care seeking information was available, 179 (86%) first sought medical care in the public sector. This included polyclinics, TB cabinets (outpatient clinics located at the sub-marz level that serve primarily to detect suspected TB cases), TB dispensaries (hospitals that serve to diagnose, confirm, and treat TB patients), as well as regional, army and prison hospitals. The remaining 30 (14%) participants first sought care from the private sector, either at private, professional clinics or from "informal" clinics or neighbors. Ninety-five (40%) participants were diagnosed with TB at their initial visit. Of those, 83 (87%) received a referral for further evaluation, and 73 (77%) of those reported that they sought the referral as instructed. Of the 139 (59%) participants who were not diagnosed at their first visit, 113 (81%) received a referral for further evaluation. Ninety (80%) reported seeking the referral when told to. We detected no significant differences between participants who reported referral compliance and participants who did not. The symptoms most frequently reported by participants were fatigue (71%), cough (70%), fever (68%), night sweats (66%), and weight loss (64%). When asked why they did not visit their healthcare provider closer to the onset of symptoms, most (58%) reported they thought their illness was not serious and would go away on its own (Table 2).
Thirty-four (18%) thought they had something other than TB. Of these, 10 (30%) thought they had the flu, 6 (18%) thought they had pneumonia, and 8 (24%) took antibiotics such as gentamicin, penicillin, or ceftriaxone. Of the 240 participants, information about time from onset of symptoms to the first medical visit was available for 218 (91%) participants. Of these, 89 (41%) met the criteria for being a "delay" patient. Results of the crude and logistic regression analyses revealed no significant associations between delay and sociodemographic factors (Table 3). Symptoms significantly associated with delay were fatigue and weight loss [aOR = 2.47 (95%CI = 1.15, 5.29); aOR = 2.99 (95%CI = 1.46, 6.14)] (Table 3). Patients with night sweats were less likely to delay [aOR = 0.48 (95%CI = 0.24, 0.96)]. Patients who thought they had something other than TB were also significantly more likely to delay a visit to their physician [aOR = 2.63 (95%CI = 1.13, 6.12)]. Beginning January 1, 2007, new national TB payment policies were implemented whereby all TB diagnostic and treatment services became free to the patient, regardless of ability to pay. One hundred twenty-nine (54%) surveys were conducted prior to the policy change, while 111 were conducted after. Because we had asked questions regarding cost as a potential barrier, we included a "prepolicy change" and a "postpolicy change" factor in the analysis. However, having been enrolled into the study prior to the policy change was not statistically associated with being a delay case. Twenty-six participants, representing 19% of the participant population not diagnosed on the first visit to a healthcare facility, were not referred by their physician for further evaluation. We attempted to identify risk factors associated with this failure using the same methods that were used to identify risk factors for delay.
Variables considered were participants' sociodemographic characteristics (gender, age, marz, educational level, marital and employment status), the type of facility at which they first sought care, the type of symptoms they reported, and prior contact with a TB case. However, due to small cell counts for many of these variables, we did not have sufficient power to do these analyses. Once diagnosed, 230 (96%) participants were instructed to start medication. Of those, 226 (98%) said they initiated therapy when told to. Lack of knowledge as to where to get treatment was given as a reason for delay by two participants, including a man who waited one month to initiate therapy. This study indicates several risk factors for delayed action in the diagnosis of TB and also reveals some areas for further investigation. Fatigue and weight loss were significantly associated with patient delay, indicating that they arouse a low level of concern among patients. Because unexplained weight loss, as well as chronic fatigue, can be markers for a number of potentially severe illnesses, Armenians should be encouraged to seek medical attention as soon as possible after recognition of these symptoms. The possibility that some of the participants who experienced these symptoms may have already had other conditions to which they may have attributed their weight loss and/or fatigue was not specifically addressed in this study, but should be considered if future studies of this kind are conducted in this population. As noted by Storla, Yimer, and Bjune, definitions of delay are heterogeneous, as are the risk factors identified in studies of this type done elsewhere. Given the lack of a standard definition for delay, our cutoff of three weeks was determined somewhat arbitrarily following consultations with TB experts; it is slightly shorter than many others have used in the past.
Differences in our definition of delay, along with possible variations in the definitions of other variables (i.e., onset of symptoms, first contact with health services, etc.) or in the methods used, may account for some of the findings that are inconsistent with many of the studies done in other parts of the world, which identified sociodemographic and economic factors as being associated with delay. There are also important cultural, social, and healthcare-related differences between Armenia and many of the other countries we identified (only one of which, Estonia, was part of the former Soviet Union) that may underlie these inconsistencies. Over one-half (59%) of the participants were not diagnosed on their first visit to a healthcare provider, although the majority (81%) did receive a referral for further evaluation. However, the reasons why almost 20% of participants did not receive a referral remain unclear. Further study is needed in order to clarify and address this problem. It would also be useful to know what factors pushed patients who did not receive a referral to pursue further medical evaluation. Although many of them likely did so because their symptoms did not improve or worsened, it is possible that their perseverance might be attributed to some protective factor against delay in seeking medical care. If so, the identification of this factor would be important to TB control efforts. Additional information, such as the average number of patient visits to a health center before a correct diagnosis is made and the level of knowledge about TB among medical staff, would also be valuable. In the past few years, as part of an overall plan to decentralize health services and strengthen local primary care services, the Ministry of Health closed all marz- and district-level TB cabinets.
Local primary care physicians and laboratories became responsible for ensuring proper diagnosis of TB cases, although confirmation is still done at the national level. This presents new challenges for local physicians, who may not have much experience with TB, as well as challenges to existing primary healthcare facilities, which may not have adequate diagnostic capabilities. Gender was not shown to be a risk factor for delay, but the male-female ratio in this study is striking. This, however, does correspond to the average male-female ratio of patients seen at the Republican Dispensary in 2007, and WHO TB case notification reports show that substantially more men are detected in Armenia. Men may more often be subject to situations that place them at higher risk of TB exposure than women, such as congregate living conditions due to incarcerations or compulsory military service, and many men go to Russia—which has a very high TB burden—to find work. However, the possibility that women may systematically go undetected by TB surveillance systems should be investigated. Similarly, although concern about the cost of medical service was not shown to be a significant predictor of delay in our analysis, the high level of unemployment among patients relative to the Armenian population should be noted (Table 1). Although it is unknown whether the unemployment preceded, or was the result of, the diagnosis of TB, TB is recognized as a disease of poverty, which is why the government's willingness to cover TB-related services is important. It would be worthwhile to monitor the policy's effectiveness over the longer term to see if it has any impact on case detection rates. There were several limitations to the study. Problems with recall are a likely source of bias. Although patients were newly admitted to the Republican Dispensary, they included both acute and chronic patients. Chronic patients were defined as those who had either relapsed or failed treatment.
In general, the initial diagnosis of TB for chronic participants was more likely made further in the past than for acute patients. Therefore, their ability to recall onset of symptoms, who they went to see first, and how long they waited might be less accurate. No information was collected to determine the length of time from the participants' first presentation of symptoms to the time of the interview. This study did not collect any information on participants' HIV status or other relevant comorbidities. HIV or other immune-suppressing conditions often produce an atypical TB presentation that can impair a patient's, or their clinician's, ability to detect TB, thus resulting in a diagnostic delay. The impact on our study, however, is believed to be negligible, as the estimated HIV/AIDS prevalence in TB patients in Armenia as of 2002 was 0.2%. Lastly, as TB hospital in-patients, the participants had been detected—whether early or late—by the healthcare system. TB cases who, for whatever reason, had not yet been detected by existing surveillance methods were not represented in this study, and the reasons for their lack of detection could not be assessed. Outreach efforts to raise awareness of TB as a public health concern in Armenia and increase knowledge of its signs and symptoms are needed. Given a general lack of recognition of the signs and symptoms of TB, and the fact that some patients self-treat or are prescribed inappropriate antibiotics, information about the consequences of misusing antibiotics (i.e., drug resistance) should be widely disseminated, not only to the Armenian public, but to the medical community as well. Physicians should be aware of the importance of performing appropriate diagnostic tests prior to administering therapy.
Better local access to TB services may help to reduce patient delay due to distance factors and increase referral compliance, but continuing education of physicians at the local level, such that they will be better equipped to recognize TB, is also important, as many local physicians are general practitioners, not TB specialists. Physicians should have a higher index of suspicion for TB and refer patients for further evaluation more often. Ideally, improving local laboratory infrastructure and increasing the ability to diagnose locally—including allocation of liquid or solid media to do culture confirmations and resistance testing (currently done only at the National Reference Laboratory)—would make follow-up services easier to deliver and decrease diagnostic turnaround time. These in turn would decrease the amount of continuing transmission in the community and lead to better patient outcomes. The case detection rate of 51% is an estimate obtained using a WHO algorithm. However, reliable estimates of TB prevalence in Armenia are lacking, making it hard to assess the true case detection rate. A population-based study to determine the prevalence of TB in Armenia should be done in order to obtain a more accurate picture of the TB problem, as well as the proportion of active TB cases that are difficult to detect with current surveillance methods. This study showed that patient and provider factors resulted in diagnostic delays of TB. Identification of where these delays occurred and why will help decision makers target interventions more effectively. We strongly encourage decision makers in Armenia to thoroughly investigate all potential causes of delay—patient, provider, and laboratory—and improve TB surveillance systems accordingly.
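The case detection rate itself is simply notified cases as a share of estimated incident cases. A minimal sketch, using illustrative round numbers rather than WHO's actual estimates for Armenia:

```python
# Case detection rate: notified cases / estimated incident cases x 100.
# The figures below are illustrative round numbers, not WHO estimates.
def case_detection_rate(notified, estimated_incident):
    return 100.0 * notified / estimated_incident

print(f"{case_detection_rate(510, 1000):.0f}%")  # prints "51%"
```

Because the denominator is itself an estimate, the resulting rate inherits that uncertainty, which is the point the paragraph above makes about Armenia's 51% figure.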
Better surveillance data will give a more reliable picture of the burden of TB in Armenia and can be used to assess how well current diagnostic, treatment, and control activities are working as well as identify ways in which they may be improved.
Dublin Lockout in 1913 with James Connolly

Dublin Lockout 1913

Employment in Ireland in the early 1900s was hard, dangerous and very badly paid; that is, if the lower class could even get a job. They had few or no rights at all. Unions were set up and Irish workers joined them. The employers did not like this at all and threatened workers with the sack unless they left the union. They refused. When the workers went out on strike they were sacked and locked out of their jobs. Jobs and opportunities for single women were even scarcer, so many Irish women went on strike during the Dublin Lockout.

Working Conditions in Ireland before 1913

In Ulster the linen factories were the main employer of women and children. They were paid only twelve shillings a week, compared to the men, who received double that amount. In Dublin over 3,000 women worked at Jacobs Biscuit Factory. There had been strikes in the north of the country which had improved wages, but this did not apply to most of Ireland and Dublin. Most men in Dublin had to work a seventy-hour week for only fourteen shillings. The women had to work on average ninety hours a week for around six or seven shillings. These were the very poor conditions that the workers in Dublin suffered every day. Those who did have permanent jobs in factories all over Ireland had to endure very bad working conditions, with strict rules designed to exploit the workers. The bosses used any excuse to fine them for breaking the rules, reducing their already low wages even more.

James Connolly in Belfast

In 1910 in Belfast the working conditions were becoming unbearable. The owners of the factories believed that as employers they had the right to dictate the conditions their workers had to endure. They posted a list of new rules which, if not adhered to, would result in fines or dismissal. These included talking, singing, laughing or adjusting their hair during working hours.
James Connolly told the women to stick together, so they went to work singing songs and breaking the rules en masse. The employers gave in and relaxed the rules.

Dublin had very few permanent jobs available. Although it was a large manufacturing city, cheap casual labour was used. Getting work was hard enough, but keeping a job was even harder. Those wanting work on the docks were at the employers' mercy. They had to turn up every day hoping to be picked for work. Most of them were paid in the pubs; if they were not seen to spend some of their money on drink they were not rehired the next day. They had to get work to feed their families, so they were trapped in the system.

James Larkin and the I.T.G.W.U.

On January 4th 1909 James Larkin formed the Irish Transport and General Workers Union. Liberty Hall became the headquarters of the I.T.G.W.U. in 1911. James Larkin edited the weekly paper, The Irish Worker. He had gained 17 shillings for a sixty-six-hour week for agricultural workers: Larkin had led them out on strike during harvest time, and the farmers had given in rather than see their crops rot.

Jacobs Biscuit Factory Strike

He had also been successful with a strike at Jacobs Biscuit Factory. On 22nd August 1911 over three thousand women packers walked out in support of the four hundred and seventy bakers who had gone on strike the day before. The strike was settled, with James Larkin gaining better conditions and wages for the workers. Rosie Hackett, eighteen years old, and Lily Kempson, fourteen years old, were two of those workers.

Irish Transport and General Workers Union

James Connolly ran the Belfast branch of the Irish Transport and General Workers Union. He was born in 1868 into a very poor family of Irish immigrants in Cowgate, Edinburgh, Scotland. At fourteen years old he worked fourteen hours a day at a bakery. He hated it so much that he often wished it would burn down.
To escape the poverty he joined the First Battalion of the Kings Liverpool Regiment. He was transferred to Dublin in October 1885, and this is where he met Lillie, his future wife. In 1896, after leaving the army, he got a job in Dublin organising the Socialist Club of Dublin. In 1902 he emigrated to America, but returned to Ireland in 1910 to work as organiser in Belfast for the Irish Transport and General Workers Union.

The workers lived in the tenement houses of the Dublin slums. There were over four hundred thousand people living in Dublin in 1913. Approximately eighty-seven thousand lived in the tenement houses in the centre of Dublin city. Most of these tenements had one water tap located in the back yard which had to be used by all in the house; the single toilet was also in the yard for all to use. Eighty per cent of those living in the city tenements lived with their families in one room.

Tenements in Dublin

There were often adults and up to eight or ten children in each room, with rents as high as two to three shillings a week. It was usual for around seventy to eighty people to be living in one tenement house. Even those who were lucky enough to have permanent work were struggling to provide for their families because of the low wages and working conditions. James Larkin was a powerful speaker, and when he stood on a platform waving his outstretched arms about, people listened to him. He was gradually increasing the membership of the I.T.G.W.U., which grew from four thousand in 1911 to ten thousand in 1913.

William Martin Murphy

William Martin Murphy was owner of Clery's department store and the Imperial Hotel. He also controlled the Irish Independent, Evening Herald and Irish Catholic newspapers. He told his workers on 19th July 1913 that if they continued to be members of the union they would be fired. He was ignored, so on 21st August he wrote this letter to just under two hundred workers in the parcels office of the Tramway Company.
"As the directors understand that you are a member of the Irish Transport Union, whose methods are disorganising the trade and business of the city, they do not further require your services. The parcels traffic will be temporarily suspended. If you are not a member of the Union when traffic is resumed your application for re-employment will be favourably considered."

So even those who were not in the Union were sacked. Five days later a strike began.

Tram Drivers Strike

In Dublin seven hundred tram drivers stopped work and walked off the job. It was the first day of the Dublin Horse Show on August 26th, a very busy time in Dublin. That morning at ten o'clock nearly seven hundred of the tram drivers took out their union badges and pinned them onto their jackets. Then they left the trams, including the bemused passengers, where they stood and walked off the job. The Union wanted the reinstatement of all parcels staff and the same hours and wages for the Dublin workers that those in Belfast received.

James Larkin Arrested

James Larkin and three others were charged with libel and conspiracy, then released on bail. Larkin organised a meeting for 31st August. It was banned. He hid at Surrey House, the home of Countess Markievicz, during this time. Thousands of locked-out workers turned up for the meeting in Sackville Street (now O'Connell Street). So did the police, armed with batons. Rosie Hackett was there with the men and women from Jacobs Biscuit to support those locked out. At the Imperial Hotel a room was booked in the name of Reverend Donnelly and his niece. The hotel is now part of Clery's in O'Connell Street. Larkin was heavily disguised in a long black robe and beard when he appeared on the balcony, but he only spoke a few words before the police grabbed him and he was arrested. Fighting broke out between the crowds and the police.
Two Police Forces

At that time in Dublin there were two police forces: the Dublin Metropolitan Police and the Royal Irish Constabulary. The R.I.C. wore different uniforms from the D.M.P. and were all armed with guns. Both were in attendance in great numbers at the meeting. Forty-five police and four hundred and thirty-three men, women and children were injured. James Nolan died from a fractured skull caused by a police baton. Larkin was later released on bail.

Dublin Metropolitan Police Station, 42 Manor Street, Dublin 7. The station is just a few minutes' walk from the city of Dublin. In 1913 there were thirty-one members of the DMP stationed there.

Royal Irish Constabulary

The other police force in Ireland at the time was the Royal Irish Constabulary. This was an armed force, and both groups were out in force on 31st August 1913 in Dublin. In 1913 James Larkin could not understand why the DMP helped to attack the striking people of Dublin. During one of his speeches he said, "If I was doing dirty work I would expect dirty pay. The men who are keeping the peace are getting bad hours and meagre pay." A Dublin Metropolitan policeman had to work eight-hour shifts seven days a week and also had to work night duty every second month. They received thirty shillings a week as constables, rising to thirty-six shillings a week for a sergeant. In the same year a skilled artisan in the building trade received thirty-six shillings a week for working only six days.

Tenement Houses in Dublin

A few days later, on Tuesday 2nd September at about 8:45 pm, two houses in Church Street, Dublin 7 collapsed without warning. The two tenement houses were four storeys high with shops on the ground floor. There were ten families living in the sixteen rooms, over forty people at the time of the disaster. Because of the cramped conditions it was normal at the time for people to sit outside the hall doors and chat to the neighbours.
When the houses fell down the rubble buried them.

Seven People Were Killed and Many Injured

Rescuers spent all night getting them out. Mrs Maguire, who lived in one of the rooms, described what she saw: 'I was standing in the hallway of the house, looking at the children playing in the streets. Other women were sitting on the kerb so as to be out in the fresh air. Suddenly I heard a terrible crash and shrieking. I ran, not knowing why, but hearing as I did a terrible noise of falling bricks. When I looked back, I saw that two houses had tumbled down.'

Workers Locked Out

The next day, on 3rd September, Murphy and four hundred and four other employers issued a statement. It said that no worker who belonged to the union could return to work, and that those who were still in their employment but were members of the union would be sacked. The workers would only be allowed to keep their jobs if they signed this document:

"I hereby undertake to carry out all instructions given to me by or on behalf of my employers and further I agree to immediately resign my membership of the Irish Transport and General Workers Union (if a member) and I further undertake that I will not join or in any way support this union."

A few days later three workers at Jacobs Biscuit Factory wore their ITGWU badges at work. They were sacked when they refused to remove them and renounce their membership of the Union. Rosie Hackett was one of the leaders and organisers of this strike.

Jacobs Locked Out All Their Workers

This was to be the pattern all over Dublin, with workers refusing to leave their Union, going on strike, and the employers locking everyone out. Tens of thousands of men and women were now without a job or any money to feed and look after their families. By September there were twenty-four thousand workers locked out, and they were finding it hard to survive.
Unions in England

The workers of Dublin and their families received sympathy and help from the Unions in England. On Saturday 27th September 1913 a ship, the Hare, arrived in Dublin, the first of many shipments of food sent to help the starving workers and their families. Sixty thousand boxes of food were delivered. Money was also collected in England and Belfast. The headquarters of the Union at Liberty Hall was now the centre for the distribution of food and clothing for the strikers and their families.

Irish Women's Workers Union

James Larkin's sister Delia had co-founded the Irish Women's Workers Union in 1911, and she now set about organising this task. Also part of this volunteer workforce were Hanna Sheehy Skeffington and the women from the Irish Women's Franchise League, along with Helena Molony, Madeleine ffrench-Mullen, Fiona Plunkett, Margaret Skinnider, Charlotte Despard, Grace Neal and Dr Kathleen Lynn. Rosie Hackett and other women from Jacobs Biscuit Factory spent hours every day at Liberty Hall. Constance Markievicz, who was already well known to the poorer people in the slums of Dublin, was appointed administrator of the kitchen supplies. She could always be found at one of the large cauldrons, stirring the soup with a wooden stick.

Irish Boy Scouts

She had formed Na Fianna Eireann, the Irish Boy Scouts, in 1909. They had been taught how to drill and march in the large grounds of Surrey House. During the Lockout she organised them into working parties, and they helped the men who collected wood and water for the cooking pots. Two other young girls also very much involved in the relief work at Liberty Hall were sixteen-year-old Lily Kempson and eleven-year-old Molly O'Reilly. Molly had been attending Irish dancing classes at Liberty Hall for the previous two years.
She had become part of the group of children and young adults who attended social gatherings and meetings at Liberty Hall before the more serious events of the Lockout occurred. Now they were willing and enthusiastic to help in any way they could.

Funeral of James Nolan in Dublin

The funeral of James Nolan took place in Dublin on 3rd September. He had been attacked and killed on 31st August during the baton charge by the DMP and RIC. Over 30,000 people attended the service, and James Connolly organised the men of the ITGWU, who lined the streets in columns with pickaxe handles, hurleys and other sticks in a show of defiance and strength. It worked: the police did not attempt to interfere with the service or with any of the people who attended.

25,000 Workers Locked Out

Just over twenty-five thousand workers were now locked out, and by the first week in October another six thousand had lost their jobs. Even the big farming employers sided with Murphy and instructed their workers to sign the document or be sacked. There were protests and violence all over Dublin, and by this stage thirty-two different unions were involved in the strike. Scab labour was brought in from England, and this caused more anger and violence. The police were reported to be baton-charging the protests and meetings and raiding the homes of those who dared challenge them. More people were injured, and a further two people were killed on the streets of Dublin.

John Byrne Was Beaten to Death by the RIC

A fourteen-year-old girl, Alice Brady, on her way home from Liberty Hall with a food parcel for her family, was shot dead by a scab worker. James Larkin was sentenced to seven months' imprisonment on 28th October but released on 13th November. There were regular meetings outside Liberty Hall and other protests around the city where 'scab' workers were carrying out the jobs of the strikers.
Many of the strikers were also arrested and imprisoned during the months of the Lockout, including sixteen-year-old Lily Kempson, who received a sentence of two weeks for 'trade union activities'. Just before James Larkin was imprisoned at the end of October, he told the people at the meeting outside Liberty Hall in Beresford Place that he and others were discussing a plan to organise a Citizen Army in order to protect them from the violence. Just over two weeks later, on 13th November, the day of Larkin's release from prison, James Connolly announced at another meeting outside Liberty Hall that the plans had been finalised.

Irish Citizen Army

He told the people that Captain Jack White was to take charge of the military organisation of the new Irish Citizen Army and asked for volunteers. The Irish Citizen Army had its first official meeting in Croydon Park, Dublin on 23rd November 1913, and over fifty men and women turned up to join. Both James Connolly and James Larkin agreed that women would be welcome to join and would be treated equally. This was very unusual at a time when women did not have the right to vote. So it was that in the Irish Citizen Army, men and women drilled and trained together.

Dr Kathleen Lynn

It was Dr Kathleen Lynn who organised and gave the First Aid classes to the men and women. Some of the other women who joined that first day were Nellie Gifford, Madeleine ffrench-Mullen, Constance Markievicz and Rosie Hackett. As the strike continued for months the food ships from England were getting scarcer, and the workers and the leadership knew they could not hold out much longer. They were willing to compromise, but the employers did not want James Larkin to gain any power and refused to meet.

James Connolly (executed in 1916)

The Strike Is Over

The workers had to sign the Employers' Document in order to be allowed to return to work.
The government in Britain asked George Askwith to report on the situation in Dublin. He concluded that both sides were being unreasonable. The sympathy strikes which Larkin had encouraged were, he said, unfair to the employers who treated their workers decently; but the document that the employers wanted the strikers to sign before they were allowed back to work was also unfair. If they signed, the workers would be giving up all their rights and dignity, and he stated that no worker should have to do this. The money and food stopped coming from Britain as the strike continued into the New Year, so on 18th January 1914 the leaders of the Irish Transport and General Workers Union met in secret. They knew they could not win the battle and decided to advise their members to return to work. Some of the workers were able to go back without signing the Employers' Document; unfortunately a lot more had to sign. Three thousand men from the Builders Labourers Union had to sign the document and promise not to join the Union again. After that the strike was over, and most of the other workers drifted back to work. James Larkin made a speech on 30th January saying 'We are beaten, we make no bones about it; but we are not too badly beaten still to fight.' James Connolly was devastated by the defeat. In February there were still over five thousand workers on strike, but eventually they all gave in. The workers of Jacobs Biscuit Factory were the last to return to work, in March. Jacobs had identified the ringleaders and did not allow them to return. One of these was Rosie Hackett. She got a job as a clerk with the Irish Women's Workers Union, which was situated at Liberty Hall, and retrained as a printer while there. James Connolly took charge and became Secretary of the ITGWU when James Larkin left for America a few months later. And the Union survived.
The workers, even those who were told never to join a Union again, slowly drifted back. The employers did not want any more trouble, so they did not sack the employees. Most of the workers got their jobs back, but at a very high price.

The Dublin Lockout in 1913

The Lockout caused chaos and death. For nine months there were strikes, starvation and death on the streets of Dublin. The bosses had won, and getting a well-paid job was now nearly impossible for the working classes. Out of the 1913 Lockout struggle came the Irish Citizen Army, which kept its members even after the strike was broken, with James Connolly as its leader. On 22nd March 1914, at a meeting in Liberty Hall, it was decided to reorganise the Army on a more military basis. A Citizen Army uniform was created, consisting of the distinctive hat with the badge of the ITGWU pinned to it. Three battalions were formed: the City Battalion, the North County Battalion and the South County Battalion. Companies were set up in areas around Dublin, with training held twice a week in Croydon Park.

Dr Kathleen Lynn Received the Rank of Captain

She was appointed the Chief Medical Officer, and Countess Markievicz was given the rank of Lieutenant. On 6th April 1914 the Dublin Trades Council officially recognised the Irish Citizen Army. Two years later the Irish Citizen Army was to play a very significant part in Irish history.

All images are the copyright of L.M. Reid unless otherwise stated.
Growing Sweet Potatoes, Cooking Sweet Potatoes, Medicinal Properties Of Sweet Potatoes

Do You Know How Good Sweet Potatoes Are For You?

Did you know that sweet potatoes were first domesticated in Central America around 5,000 years ago? The sweet potato was also grown in the Polynesian region of the South Pacific before western exploration ever occurred. It has been pointed out that Polynesians must have visited Central or South America, probably thousands of years ago, and brought sweet potatoes back to the Polynesian area. It is interesting to note that the type of sweet potato grown in Polynesian islands like the Cook Islands and the Hawaiian Islands was the type started from slips or cuttings, not from whole sweet potatoes. If they were visiting Central and South America, they were probably visiting North America too. If so, then Columbus did not discover the New World; he was about a thousand years too late.

Huge Amounts Of Sweet Potatoes Are Grown In The American South

In the American deep south the sweet potato is still grown today, and in huge numbers. My grandparents grew them in N.C., and I have seen them being grown across N.C., S.C., Georgia, Florida, and Alabama for a number of years. In the mid 1990s I spent some time in central Mississippi, where I learned that the sweet potato has been grown for hundreds of years. In the summer months you can drive down the roads of Mississippi and see sweet potatoes being grown for miles and miles. Sweet potatoes thrive in the long, hot summers of the American South, but you can raise them anywhere you have 150 frost-free days. Once planted, sweet potatoes produce the sweet, nutritious roots that have come to be known as sweet potatoes.
You will hear them called yams in some parts of the American south, but in reality a yam and a sweet potato are two different things altogether. In the videos below you can learn how to grow sweet potatoes, how to make sweet potato pie, and how to make sweet potato fries.

How To Make Pecan Sweet Potatoes For Your Casserole

You Will Need:
1. Three Cups Well Mashed Sweet Potatoes.
2. Two Large Well Beaten Eggs.
3. One Half Cup Margarine, Melted.
4. One Half Cup Whole Milk.
5. One Cup White Granulated Sugar.
6. One Teaspoon Vanilla Extract.
7. One Teaspoon Cinnamon.

Mix all of the above ingredients together well, pour them out into a 13" X 9" baking pan, and level the contents of the pan smooth with a large spoon.

For Your Topping You Will Need:
1. One Cup Light Brown Sugar.
2. One Half Cup Margarine.
3. One Fourth Cup Self Rising Flour.
4. One Half Cup Finely Crushed Pecans.

Mix these four ingredients together in a large bowl. Pour the well-mixed topping out on top of the ingredients already in the baking pan and bake in a pre-heated oven set at 350 degrees for forty-five minutes. Serve small servings in a bowl with a scoop of vanilla ice cream for a delightful treat.

Sweet Potato Casserole

My Grandma and my Mom made these delicious sweet potatoes almost every Sunday, and they were always delicious. My Grandma called them Sunday Sweet Potatoes, but it was really a sweet potato casserole.

1. Seven Cups Peeled And Well Mashed Sweet Potatoes.
2. One Cup White Granulated Sugar.
3. One Half Cup Butter.
4. Two Large Well Beaten Eggs.
5. One Half Cup Whole Milk.
6. One Teaspoon Vanilla.
7. Pinch Of Nutmeg.

Mix all of those ingredients together very well, then pour out into a well-greased 13" X 9" baking dish. Smooth everything down level with a large cook's spoon. Bake for about forty minutes in a pre-heated 350 degree oven.
Now once you pull the pan of cooked sweet potatoes out of the oven, apply a layer of miniature marshmallows over the top of the pan and put it under the broiler of your oven just until the marshmallows melt and turn brown, but do not let them burn. Take them out of the oven and set them aside for about five minutes before you serve them. These are some of the best sweet potatoes you will ever eat.

Sweet Potatoes Are One Of The Healthiest Foods In The World

In a lot of homes around the United States sweet potatoes are only served around the holidays, but you really should be serving them throughout the year, especially if you know the health benefits of sweet potatoes. In case you don't know, the carotenoids found in sweet potatoes help to stabilize blood sugars, and the nutrients and vitamins they contain help to make cells more responsive to insulin. If you have a diabetic problem and you're insulin resistant, then you really need to add sweet potatoes to your diet on a regular basis.

How To Grow Sweet Potatoes In Your Garden

Don't confuse sweet potatoes with yams. They aren't even related to each other: yams are related to lilies and grasses, while sweet potatoes are related to the morning glory family. About 95 percent of all yams are grown in Africa, while most sweet potatoes are grown in the southern United States. Sweet potatoes are heat-loving plants, and that is why they grow so well in the southern United States.

You Need Sweet Potato Slips To Grow Sweet Potatoes

You can quite easily grow your own sweet potato slips from mature sweet potatoes. Purchase some nice large sweet potatoes from the local supermarket. Plant the potatoes indoors about halfway into moist soil and keep the soil moist.
In a few days you should see slips start to grow from the sweet potatoes. Cut the slips from the mother sweet potato when they are six to eight inches long. Handle them very carefully at this point because they will be tender and fragile. I bet you didn't know it, but you can grow bushels of sweet potatoes from a handful of sweet potato slips. Now comes the tricky part. You'll want to plant your sweet potato slips into the garden when the soil is 68 - 70 degrees. The soil must be warm and all danger of frost must have passed before you plant the slips in the garden.

Be Careful How You Plant Your Sweet Potato Slips

Plant the sweet potato slips at least 24 inches apart; the sweet potatoes will need room to spread out and grow. When you plant your slips, plant them on their sides with about two thirds of the plant under the ground. Your sweet potato plants need plenty of room to grow and develop their sweet potatoes underground. As soon as the slips start to grow, put well-rotted mulch around each one to keep the weeds and grass away and to keep the moisture in. Harvest your sweet potatoes after about four months, before the first frost. Lay the harvested sweet potatoes out on the ground and allow them to dry out completely before you store them in a cool dark place. And now you know exactly how to grow your own sweet potatoes.

Candied Sweet Potatoes With Apples

In his 1952 novel, Invisible Man, Ralph Ellison evokes memories of favorite sweet potato dishes: "Yes and we loved them candied, or baked in a cobbler, deep fried in a packet of dough, or roasted with pork and glazed with the well browned fat."

1. Three Medium Sweet Potatoes (about one pound).
2. Two Large Granny Smith Apples.
3. One Half Cup Brown Sugar.
4. One Tablespoon Butter.
5. One Half Teaspoon Ground Cloves.
6. One Fourth Cup Finely Chopped Pecans.
Start by washing and peeling your sweet potatoes, then cut them into very thin slices crosswise. Cook the sweet potatoes until just done; I usually steam mine. Peel and core your apples and cut them up into small pieces. In a two quart casserole dish combine the apples and the cooked sweet potatoes. In a small saucepan combine your sugar, butter, water, and cloves and bring to a boil. Stir so everything comes together, then pour into the casserole dish over the apples and sweet potatoes. Bake for 30 - 35 minutes at 350 degrees, and in the last ten minutes of cooking time sprinkle with the chopped pecans. This is a dish that has been served in the American south for the last couple of hundred years or more. It's important to precook your sliced sweet potatoes for this dish. If you need to, put them in a microwave safe dish with a little water and cover the dish with plastic wrap. Cook on high for 3 - 4 minutes and then put your dish together.

Almost Heaven Candied Sweet Potatoes

In this recipe you have two of my favorite foods: sweet potatoes and Granny Smith apples. I guarantee you that once you try these sweet potatoes you'll make them again and again. They really are that good.

1. Six Large Sweet Potatoes, Peeled And Cut Into Small Cubes.
2. Six Granny Smith Apples, Peeled, Cored And Cut Into Small Pieces.
3. One Cup Raisins.
4. Four Tablespoons Honey.
5. One Half Cup Apple Juice.
6. One Fourth Cup Butter.
7. Two Tablespoons Orange Juice.

Mix all your ingredients together well, tossing to make sure everything gets well coated. Pour everything out into a two quart casserole dish and cover tightly with tinfoil. Bake in a pre-heated 350 degree oven for 45 minutes. Take out of the oven and raise one corner of the tinfoil to vent steam for about 10 minutes before you pour into a serving dish and take to the table. This has always been one of my favorite ways to make sweet potatoes.
Oven Baked Sweet Potato Fries

Here Is A Really Healthy Delicious Way To Make Baked Sweet Potato Fries

These sweet potato fries are oh so delicious. I use olive oil, sea salt, black pepper and pumpkin pie spice to make delicious healthy oven fries out of sweet potatoes, which are one of my all time favorite foods. The pumpkin pie spice is the real secret ingredient in this recipe. It makes these oven baked sweet potato fries into something really special, so be sure to try this wonderful recipe as soon as you can.

Ingredients For Your Oven Baked Sweet Potato Fries

1. Four Large Sweet Potatoes, Peeled And Cut Into Fry Size Pieces.
2. Three Tablespoons Olive Oil.
3. One Tablespoon Ground Sea Salt.
4. One Tablespoon Ground Black Pepper.
5. One Tablespoon Pumpkin Pie Spice.

You want your fries to be about the size of a man's finger. Now here comes how you make these into the best sweet potato fries you'll ever eat. Peel your sweet potatoes and cut them into fry size pieces. Once you have them cut, put the fries into a plastic container and cover them with crushed ice. Cover them and let them sit in the crushed ice and water for an hour. After an hour, pour them into a colander, take the fries out, and pat them dry with paper towels before putting them back into a large bowl. Put all your ingredients into the bowl with the potatoes and toss well to be sure all the sweet potatoes get well coated with the spices and olive oil. You'll find that the olive oil helps the spices to stick to your oven fries. Pre-heat your oven to 450 degrees and spread your sweet potato fries out on a baking pan. Cook the fries for twenty minutes and then turn them over.
Bake for twenty more minutes and remove from the oven. Some ovens cook faster than others, so keep a close eye on your sweet potato fries while they are cooking. These fries are really healthy for you and oh so delicious. Give them a try and see what you think. I hope sweet potato fries are in your future real soon.

How To Grow Sweet Potatoes

Sweet potatoes are raised from the sprouts, or slips, that their tubers send out. One sweet potato suspended on toothpicks in a container and half covered with water will produce several sprouts. Larger quantities can be grown by placing several potatoes on a bed of sand and covering them with a 2 inch layer of moist sand or soil. The sweet potatoes will produce sprouts at 75 degrees, but you can also buy sweet potato slips at your local farm and garden center. Plant your slips after any danger of frost has passed. During the first month your sprouts will grow 8 - 10 inches and put on leaves. You will find that sweet potatoes do best in loose sandy soil from which all the rocks have been removed. Prepare your soil by digging in lots of fully rotted compost or manure and about 2 pounds of 5-10-5 fertilizer for each 25 feet of row. Push your soil up into foot-wide, 6 inch high mounds. Plant your sweet potatoes 15 inches apart in the center of the mound and set them 6 inches into the ground. Water them very well. Again, set up your sprinkler or sprinklers and water after the sun has set; never water when the sun is out. Sweet potatoes require very little care. You will want to keep the weeds out, and you're going to need a hoe and some hard work to do it. Be careful that you don't damage the sweet potatoes just under the ground when you are chopping out the weeds.
Let your sweet potatoes grow until the tops of the plants turn black from the first frost. The sweet potatoes are then ready to be dug up and made into pies, casseroles, and sweet potato fries. Dig your sweet potatoes very carefully because their skins bruise very easily. Lay the harvested sweet potatoes out on the ground for a couple of days, then carefully put them into newspaper-lined boxes and leave them in a dry warm place for two weeks, then store them in a dry cool place as close to 55 degrees as you can get. You can of course go ahead and peel sweet potatoes and cook them in a large pot, add sugar and butter to them, and freeze them in gallon storage bags. You can use these cooked sweet potatoes for sweet potato pies or sweet potato casseroles.

What Can Go Wrong With Sweet Potatoes

The sweet potato weevil feeds on the leaves of the sweet potato vine, and its larvae tunnel into the sweet potatoes. You need to keep the ground where sweet potatoes are grown clear of any debris. Spray affected plants with methoxychlor and be sure to destroy any infested potatoes. Be sure you read the label of the pesticide and follow the directions. Ask at your local garden center if you need to worry about sweet potato weevils in your area. Keep in mind that you will want to plant your sweet potatoes in loose soil with good drainage. I prepare the place where I plant my sweet potatoes by first digging two foot deep trenches and then filling the trenches back in with good quality dirt that I have mixed with well-rotted manure or compost. I then put topsoil on top of everything so I have mounded-up rows to plant my sweet potato vines in. This way I almost always end up with a huge crop of sweet potatoes, and so can you.

Have You Ever Been To Tater Day?

The town of Benton, Kentucky celebrates Tater Day on the first Monday of April each year.
In Mississippi each year about 150 farmers grow 8,200 acres of sweet potatoes valued at 19 million dollars. The National Sweet Potato Festival is held each year in Vardaman, Mississippi, which calls itself the Sweet Potato Capital of the World. The festival is held the first week of November each year. I've been to it, and if you ever get a chance to go, be sure to try the thin cut sweet potato fries they serve there. They are truly some of the best sweet potato fries I've ever had. Oh man they are good.

The Medicinal Properties Of Sweet Potatoes. Sweet Potatoes Are Really Healthy For You.

Did you know that sweet potatoes are really the perfect food? And here is why.

18 Reasons Why Sweet Potatoes Are A Super Food

1. Sweet potatoes are high in fiber, fat free, low in sodium, and low in carbohydrates.
2. The sweet potato is slow to digest, so it causes a slow and steady rise in blood sugars.
3. Just 4 ounces of cooked sweet potato contains 3.4 grams of fiber and lots of other great nutrients. If you're a diabetic, sweet potatoes are an excellent food choice.
4. If you smoke you should eat a lot of Vitamin A rich fruits or vegetables like the sweet potato.
5. Sweet potatoes are a great antioxidant and anti-inflammatory food. They are also great at stabilizing blood sugar levels, and they lower insulin resistance.
6. The sweet potato is known as a great food for diabetics. It helps to stabilize blood sugars, though you should be careful with what you add to the sweet potato before you eat it. I like them with just a little butter.
7. The sweet potato has strong antioxidant benefits and is a wonderful source of Vitamin A, Vitamin C, Magnesium, and Vitamin B6. Most people know the health benefits of Vitamin C, like warding off colds and flus, but most people don't know that Vitamin C is great for your teeth and may even help to prevent some forms of cancer.
8. The Vitamin B6 they contain is really healthy for you, especially if you are in danger of heart disease. Vitamin B6 helps to reduce the homocysteine levels in your body. Homocysteine has been linked to degenerative diseases in the human body, so this can help to prevent heart disease.
9. Sweet potatoes are an important source of iron. Iron plays an important role in red and white blood cell development, improves your resistance to stress, and helps your body to break down proteins.
10. Sweet potatoes are an important source of magnesium. Magnesium helps to relieve stress and keep the body healthy. So yes, you should be eating lots of sweet potatoes.
11. Did you know that sweet potatoes can help to prevent heart disease? They are high in potassium, and they can help to prevent strokes and heart attacks.
12. They are filled with all kinds of nutrients and antioxidants, and they can boost the body's immune system and help to make you well.
13. Sweet potatoes are rich in antioxidants, which work in the body to prevent inflammatory problems like cancer, asthma, arthritis, and gout. Sweet potatoes are one of the best sources of antioxidants.
14. Carbohydrates that are good for people with diabetes are found in sweet potatoes. Sweet potatoes are fibrous root vegetables that can help to regulate blood sugar levels and prevent things like insulin resistance.
15. Sweet potatoes are packed with nutrients and vitamins that can help to boost the body's immune system. They are excellent for alleviating muscle cramps and spasms.
16. Sweet potatoes are excellent for treating stress related symptoms. You may not know it, but your body uses a lot of potassium and other minerals when it is under a lot of stress.
17. In case you don't know, sweet potatoes are ranked number 1 of all vegetables, so yes, you should be growing and eating sweet potatoes as often as possible.
18. Sweet potatoes, because of their high fiber content, are really good for your digestive tract. It is believed that they may even prevent colon cancer, so you should eat sweet potatoes as often as possible, including the skins.

I guess you could say that the sweet potato is really the perfect food, and sweet potatoes should really be in your future soon. You should grow and eat some sweet potatoes real soon. I hope you enjoyed this Hub Page on Sweet Potatoes, and I want to thank you for taking the time to read it. Thank You for your time.

© 2012 Thomas Byers
Chapter 6: Making the Rules

Hallmarks
A strategy based on control
The role of products
Personality
Three threats
Sustaining growth

Typical Rule Makers:
- AT&T (until divestiture)
- Disney
- Harvard Business School (until challenged by Northwestern and Stanford)
- Intel
- Merck
- Microsoft
- Pennsylvania Railroad (1920s and '30s)
- U.S. Postal Service (before Federal Express, ubiquitous fax machines and the internet)

Rule Makers excel at:
- Determining the rules of competition
- Setting the industry standards
- Controlling the behavior of customers, competitors, suppliers and employees
- Guiding the evolution of their industry
- Providing a product for every customer

Focus of attention:
- Rule Breakers: products that embody the new vision.
- Game Players: products that beat the competition.
- Rule Makers: creating and maintaining an organization that continues to market products that dominate markets.

Rule Makers' organizations:
- Large and growing
- Led by a publicly visible CEO
- Structured around key market segments
- Flexible in accommodating new products
- Full of many well thought-out procedures
- Very deliberate about who they hire
- Indoctrinate "our ways of doing things"
- Rely more on implicit than explicit controls
- Speak a different "language" than do their counterparts
- Places of elan and esprit

The Rule Maker's persona:
- Strong and self-confident
- Slightly arrogant
- Intense, preoccupied
- Disciplined, always in control
- More rigid than flexible
- Always alert
- Slightly suspicious
- Big-picture thinker
- Very turf protective
- Needs to lighten up
- Everybody's role model
- Self-reliant

Rule Makers stumble when:
- Their success at market dominance exceeds the bounds of regulatory and societal tolerance
- They are the only clear winners
- Their organization blinds them to changes in the market
- They are blindsided by an emerging Rule Breaker
- Their market matures and fragments
- They come to believe they can make no mistakes

Rule Makers sustain growth by:
- Sharing their wealth
- Geographic diversification
- Avoiding becoming prisoners of their organizations
- Keeping their gene pool fresh
- Seeking out devil's advocates and valuing opinion diversity
- Cutting back on the mechanisms of socialization
- Moving into markets where they face tough competition
- Favoring competence over loyalty

Making the Rules
Excerpt from Go For Growth by Robert M. Tomasko

Some companies are just too good. Maybe it was a matter of luck, being in the right place at the right time. Perhaps they were Rule Breakers whose innovativeness captivated their customers - and frightened off all competitors. Or they were Game Players so effective at doing all the right things, just the right way, that they came to "own" their industry. Let's call these businesses Rule Makers. They set the standards for their markets. At times it seems an entire industry revolves around them. Rule Makers are at the center of an intricate business network rich in mutual dependencies. When they prosper, others - even some head-to-head competitors - thrive. But when they stumble, others collapse. These interrelationships are simultaneously resented and appreciated by their competitors. Rule Makers' savviest rivals may grumble, but they are also aware that Rule Makers define the turf on which mutual profitability and growth depend. Rule Makers are watched like hawks. Walt Disney is a Rule Maker. So is McKinsey in management consulting, Merck in pharmaceuticals, and Microsoft in software. Wal-Mart occupies this position today in mass market retailing. Intel's microprocessor chips run eight out of every ten PCs sold throughout the world - allowing this Silicon Valley giant to call many of the shots in this critical industry. The view from the top of the marketplace is glorious to behold. But staying at the pinnacle of a slippery slope is a challenge for even the best. IBM was once a Rule Maker. So was Pan American World Airways.
In the first half of this century the top Rule Makers included then-elite companies: A&P Supermarkets, the Pennsylvania Railroad (known as "the standard railroad of the world"), United States Steel and Western Union. The luxury travel industry was dominated by the names Cunard, Pullman and Wagon-Lits. The U.S. telecommunications business was defined by three words: The Bell System. The largest package mover - the Railway Express Agency - did not own a single airplane. All these companies - both contemporary and those now relics of business history - have many common characteristics. And, as with Rule Breakers and Game Players, they share a special set of strengths and weaknesses, as well as a unique persona. Rule Makers commonly have a very strong, and sometimes dominant, competitive position in the markets they serve.

Hallmarks of a Rule Maker

A Rule Maker's life cycle

Rule Makers have had a variety of responses to this inevitable decline. Some exit their position of dominance with a bang, others with a whimper. Pan American Airways, the Pennsylvania Railroad, and Western Union experienced bankruptcies. A&P Supermarkets and Cunard remained in their industries, but became focused niche players rather than dominating juggernauts. A handful, such as AT&T and U.S. Steel, adapted to the new order and found other paths to growth. To really understand the dynamics of the second half of a Rule Maker's life cycle, set aside this book and look over Edward Gibbon's The History of the Decline and Fall of the Roman Empire. This is a great story of how pressures from without combine with decay from within to crumble a once-mighty empire. The Decline and Fall should be required reading for all aspiring Rule Makers.

Staying on top

Rule Makers are like a squad of soldiers in combat who have managed to seize the higher ground.
While some may have aspirations to repeat their triumph on an even more challenging hilltop, most quickly realize their primary mission is to protect this hard-won gain from other challengers. Seasoned combat veterans also appreciate that, ironically, the higher the summit they've seized, the harder it will be to descend and move on to a new mountaintop. When atop one peak, they have to travel further to reach another than their rivals in the valley below. The pleasant view from where they sit may also contribute to wanting to dig in where they are. Regardless, this high ground still requires defense.

A strategy based on control

Rule Makers work hard to control the business environment in several ways.

1. Standard setters

An industry with standards is not necessarily a bad place. Standards provide a baseline from which many types of competitors can emerge. It is hard to be a Game Player when the rules of play have yet to be established. Standards can be a precondition for growth. Specialists thrive in markets with a degree of stability. It is hard, though, to be a Game Player in a Rule Maker-driven industry and not feel at least a tinge of jealous resentment at what seems to be a disproportionate share of the rewards going to the setter of the ground rules. In some situations, the boundary between just rewards and unfair competition is a very fine line. This boundary requires continual policing and self-regulation, lest its demarcation become the job of the courts and regulators.

2. Always alert

A Rule Maker's greatest fear, although seldom publicly admitted, is being overtaken. For them, fundamental research is done not so much to destabilize the basis of competition (why destabilize something you make money dominating?) as to keep ahead of anyone else who may try to break the rules. Their R&D strategies, despite their mega-budgets, are fundamentally driven by defensive, not offensive, aims.
Likewise, the goal behind their extensive use of market research is not so much staying attuned to current customers' requirements - a topic their sales force is usually well wired into - as anticipating new customer needs. This provides them with the lead time necessary to create new products and services before more short-term-oriented Game Playing competitors have an opportunity to lure away customers.

3. Playing the game several moves in advance

Hamel and Prahalad implore companies to set stretch goals based on an exhaustive analysis of trends and technologies. Their favorite strategic objective is "global preemption," a decade-long cultivation of core competences that - if done well - results in unquestioned worldwide market leadership. A worthy goal? Absolutely - as long as your company has the lead time, resources and patience of a Rule Maker to stay focused on the future.

Where does all the money come from: the fly-wheel effect

The best kind of market to be in - if you like profit maximization along with growth - is one that ignores the law of diminishing returns. In some industries, the more you produce, the less money you make. Counterintuitive - and depressing - but true. Ask any American farmer; look at what happens to gasoline prices when a major new oil field is discovered. (This is why the best path to growth for commodity-oriented companies is not that of the Rule Maker.) Some industries operate under very different rules. Increases in supply stimulate even greater increases in demand. These are sometimes called "network" markets. Telephone service is a network market; so is computer software. The value of these services or products increases as more people use them - what's the point of having a picture phone if few of the people you communicate with also have one? Or of having a cutting-edge word processor whose files can't be read by anyone else?
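The superlinear growth in network markets is often approximated by Metcalfe's law - a rough heuristic (not cited in the excerpt itself) that values a network by its count of possible user-to-user connections. The sketch below, with arbitrary illustrative numbers, contrasts that with a commodity market where each extra unit adds at best constant value:

```python
def network_value(users: int) -> int:
    # Metcalfe's-law heuristic: value tracks the number of possible
    # pairwise connections among n users, n * (n - 1) / 2.
    return users * (users - 1) // 2

def commodity_value(units: int, unit_price: float = 1.0) -> float:
    # Commodity contrast: each extra unit adds constant value at best
    # (and oversupply often pushes the unit price down).
    return units * unit_price

# Doubling the user base roughly quadruples a network's value:
for n in (10, 20, 40):
    print(n, network_value(n))  # 45, 190, 780
```

Under this heuristic, doubling the user base from 10 to 20 lifts the connection count from 45 to 190, a better-than-4x gain, which is the "snowball effect" the excerpt goes on to describe.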
When the number of users of products like these gets beyond a certain critical mass, a snowball effect ensues, and more and more are bought. In the software market, most of the costs are related to developing the program and establishing its reputation. If those tasks are done well, a nearly free ride results - additional copies of software cost very little to produce. Success leads to greater success, larger market share and higher margins.

What makes Rule Makers so different?

High performers in Rule Makers are members of what has the look and feel of an elite corporate cult. Their employers have worked deliberately to institutionalize the corporate philosophy the same way a church propagates a system of beliefs. Working in a Rule Maker is like joining the Jesuits, rather than remaining an ordinary parish priest. It is closer to the Marines, less like the slogging foot soldiers of the Army.

A place to start a career

Learning a new language

Disney doesn't have customers, it has guests. The status of its employees is evaluated by calling them members of the cast, each performing a part, not holding a job. Wearing a uniform can be demeaning; putting on a costume sounds more like fun. It's a lot easier to complete a performance than endure an eight-hour shift. Job descriptions are dull and confining, but few mind memorizing their lines and movements in a script. By shifting the vocabulary of the theater and screen to the work life of the amusement park, a different - higher-value - experience is created for Disney's customers (oops, "guests"). And a high degree of control is exercised over the individual employees whose encounters with customers can either reinforce or destroy the Disney magic. No surly "carnies" allowed. No tattooed, cigarette-smoking, foul-tempered ride operators. This language, and the special world-view it evokes, is taught in tightly scripted training programs at Disney University.
Admission is offered only to those who pass a battery of screening interviews. After orientation training, new hires are assigned peer mentors - also carefully selected for their "role modeling" potential. Contrast this with the typical hit-or-miss, sink-or-swim approaches most businesses - especially Game Players - take to selection, training and socialization. Why is all this attention to creating and maintaining a uniquely cohesive organization so important to Rule Makers? To understand this it is helpful to appreciate how this kind of growth company is really different from the others we have described. A key difference has to do with the role its products play in the scheme of things.

Products exist to perpetuate the Rule Maker

For Rule Makers, the product exists more as a vehicle for the company and its perpetuation, not the other way around. Disney does not exist to create new animated movies - the movies exist to perpetuate Disney. Rule Makers seldom fall in love with their products. In Rule Makers, the whole is always more than the sum of the parts - it includes both today's product line and the ability to remain ingrained in the fabric of the market by developing tomorrow's big hits. Microsoft's capability to build and market an ongoing stream of products that control a portion of the marketplace is more important than any individual product. Critics of the software industry have faulted Microsoft for producing inelegant products, behind schedule, that are often inferior to the competition's. They may be right, but they are also, from a strategic perspective, missing the point. It better serves Microsoft's interests to invest in developing an electronic product registration card (one that also reports back to Microsoft all the types of software it finds on its customers' computers) than to put the same effort into adding more features to a word processor or spreadsheet Microsoft sells.
Having this information about the configuration of its customers' machines - obtained while "speeding up" the product registration process for the customer - provides valuable intelligence about which customers are using which products, and which might be ripest to consider an upgrade to a Microsoft offering. In this case, Microsoft's product is also a Trojan horse, generating additional information about customers' future needs as well as immediate revenues.

Obsessive attention to organizational architecture

For them, the secret of controlling the market is to control themselves. When a person exaggerates a behavior - such as an extreme need to control the surrounding environment - it is often more useful to ask why the person needs to behave that way, rather than just branding the person a "control freak" or "obsessive." The same principle holds for organizations as well. What is it about Rule Makers that makes them so control-oriented?

The Rule Maker's personality

Tall. Strong. Confident. Bright. A person with a plan. True ... but more accurately: very tall, very strong, very confident, very bright, a person with a very well thought-out plan. Superlatives seem to flow naturally in descriptions of Rule Makers.

Superlatives provide the clue

The personality of a Rule Maker exudes control. The flip side of control is frequently fear. Bright people who need to continually demonstrate how extremely bright they are may well have an inner worry that they are actually not so smart after all. Overly aggressive bullies may really fear, unconsciously, that they are weak. These fears or concerns, ironically, may have no basis in reality. The self-proclaimed genius might well be very smart; the tough bully may actually be very strong. But neither of these people is comfortable enough with their abilities to take them for granted. Instead, they exaggerate them. This aspect of a Rule Maker's persona is worthy of examination.
It may provide some clues about why so many stumble, and what can be done to avoid what otherwise seems to be an inevitable decline.

- gave considerable attention to warding off attacks and personal threats - both real and imaginary
- was hypersensitive about minor mistakes and disorderliness
- insisted on unwavering loyalty
- became personally over-involved in controlling their businesses through great attention to rules and details
- had insatiable appetites for more and more information, and
- several were known for their vindictiveness and overreaction.

These traits can apply to organizations as well as individuals. Companies of this sort, like some who thrive on the Rule Making path to growth, are always vigilant, always ready for a fight. They are like a muscle so taut that it springs when only lightly touched. They do not like the unexpected, and they are not at home in rough-and-tumble, go-with-the-flow markets (places where Game Players and Improvisers thrive). They are very intense places to work. All activity is expected to serve some business purpose.

The price of eternal vigilance

Strong vigilance can help defend a strong competitive position. Unfortunately, it tends to inhibit, not facilitate, growth, leading the business away from its customers and the marketplace. Fearful, suspicious companies run the danger of self-delusion. Rule Makers are prone to several such "cognitive" errors. They can tend to find what they are looking for when examining market research data - in part because they ignore facts that disconfirm their biases. These Rule Makers miss taking things at face value, because they are too busy searching for some hidden meaning. They can lose a sense of proportion, too easily taking things out of context.

Missing more than fun

Successful Rule Makers, though, just like successful people, have found a way to make their habits - their personality characteristics - pay off.
Intel's former CEO, Andy Grove, is famous for his belief that - no matter how successful - only the paranoid survive. Bill Gates, of Microsoft, is reputed to be driven by fear of the time (which ultimately will come) when sales slow down. Gates is likely to take such a downturn personally, not philosophically. He is more prone to attribute the decline to a mistake someone made years earlier that was just not caught quickly enough. These chief executives may admit to some corporate paranoia, but neither seems crippled by it. At least, not yet. Few Rule Makers are harmed by their rigidities during times of rapid growth. Their competitive position seems impregnable; too many things are going right. Incipient problems are too easy to miss or deny. But during this period, seeds are frequently planted that can accelerate later decline.

Not facing up to failure

Becoming the "next" Microsoft

All of this "hero worship" can be very puzzling. So many managers go to such great lengths to learn the "secrets" behind the Rule Maker-of-the-moment's great success. But they probably could learn much more by examining the causes of Rule Makers' almost-inevitable decline. While Rule Makers offer many useful ideas, they offer just as many cautions. Their strong competitive positions are unique. What works for a company with near-domination of its market may be inappropriate for an aspiring Rule Maker, or for a company that would do better on a different growth path.

The triple threat

1. Abrupt shifts in the surrounding environment

These three often combine to undermine, or even destroy, many once-dominant competitive positions. Kodak, a company that was synonymous with the photography industry, lost its ability to control its marketplace after suffering two serious legal defeats many years ago. In 1921 it was kept out of the growing private-label market for photographic film. All Kodak products were required to bear the Kodak name.
Later, in the 1950s, Kodak was prohibited by the courts from tying the sale of film to the processing of film. Eventually Kodak was able to convince a federal judge to overturn these rulings, but not until the company's competitive position had seriously eroded. Throughout this period of decline, Kodak clung to its Japanese-like system of ingrown management. It hired most employees directly out of school and expected them to remain until retirement. It managed to keep them busy by doing internally many things other companies relied on suppliers and business partners for: chemical feedstocks, electric power, and even the yellow cardboard boxes for its film. Not wanting to trust the local municipality, Kodak also created its own fire department to protect its main plant in Rochester, New York.

Letting the organization drive the strategy

Britannica's problem was not new technology, but its old organization. The company had become a prisoner of the organization that provided its past successes: its large, commissioned sales force. These salespeople quickly realized that putting the contents of Britannica (whose volumes weigh over 100 pounds and require four and a half feet of bookshelf) onto a music-album-sized disc would result in a product priced much lower than the traditional hardcopy encyclopedia. This would cut deeply into their sales commissions. It might even eliminate marketing jobs, as CD-ROMs are easier to sell in computer stores and by mail than by more costly door-to-door, one-on-one customer calls. Encyclopedias are sold; compact discs are bought. Britannica was following the well-worn path traveled by many once-Rule Makers. Western Union, a one-time communications giant in the age of the telegraph, had no use for the patents a young inventor, Alexander Graham Bell, tried to sell for the telephone. He was forced to go elsewhere.
IBM's slow start embracing the new microprocessor technology came not from technology-blindness, but from fear of demotivating its powerful mainframe computer sales force. Rule Makers are very adept at accommodating evolutionary change, but their keen ability to map the marketplace into their organization becomes a dangerous millstone when the environment makes an abrupt shift.

Making rules without Rule Makers

In other situations, companies that compete fiercely with each other - like IBM, Sun Microsystems and Hewlett-Packard in the market for computer workstations - also realize the value of using the same underlying software (Unix). They distinguish themselves by the special features they add, while staying uniform enough that their customers are not required to start from scratch each time they buy a new computer terminal. The net result: faster market growth for all, by practicing one of the New Rules for Growth: assist your rivals in making the overall market bigger.

The fear of changing a winning formula

1. Share the wealth

Don't give others a compelling reason to destabilize a situation that no longer works for them.

2. Go where others are not

Management consulting Rule Maker McKinsey is following a similar strategy to keep its global professional partnership growing. Well over half its revenue, and even more of its profits, is earned outside the U.S. Its managing director is Indian-born, and its growth plan targets the emerging consulting markets of Russia, China, India and Eastern Europe. These companies constantly keep in mind that increasing revenues in a segment of the market that has stopped growing eventually leads to a dead end.

3. Cultivate humbleness

Cultivating an image of greatness can become another form of millstone. Eventually everybody in a successful Rule Maker seems to believe everything they do is great because of who they are, rather than what they do.
Keep the internal applause and self-congratulation to a minimum; a Rule Maker's best cheerleaders are its customers.

4. Bite the bullet

Hallmark is mindful of Rule Maker Britannica's stumbles. Hallmark's chairman, Donald Hall, intends to sustain its success. "If the competition's catching up with you," he maintains, "it's not that they're getting better, it's because you're not staying ahead. We just have to do what it takes to stay ahead."

5. Loosen up on the socialization

Because I was there on a benchmarking assignment for another Fortune 100 company, one of IBM's best customers, they were very willing to share a great deal of detail about this confidential project. One of its conclusions - one the IBM staff executives seemed proudest of - alarmed me when I heard it. Considering IBM's problems since, it explains a key internal driver of that Rule Maker's subsequent market slippage. IBM wanted to gauge how strong its corporate culture was - how effective its extensive orientation, training and employee communications programs were, given its polyglot work force of almost 100 nationalities. The results of the study indicated these culture-building tools were highly effective, maybe even too effective. IBM's research found that its employees were much more willing to believe what they were told through IBM's "official channels" than through outside sources: newspapers, government officials or friends who worked for other companies. The IBM executives I interviewed thought this was a great triumph. They had created a corporate culture with more influence over IBM's employees than the net impact of all the individual national and regional cultures from which these employees had come. IBM employees trusted their managers more than anyone else for information about markets, technologies and even broader societal trends.
That kind of trust and loyalty may be admirable, but it put an unrealistically high burden on these managers to be right, and to be right all the time. This is impossible, but what is very likely in companies with such a high degree of employee alignment is that a minor misperception or faulty estimate is much more likely to be reinforced and amplified than challenged and corrected. Rule Makers wanting to stay on that path will do well to monitor the strength of their corporate cultures. They should recognize what IBM seemed to miss: that it is possible to have too strong, as well as too weak, a buy-in to the company's core values. Or, in other words, loosen up a bit on the socialization! Cultivating diversity of opinions is as vital as cultivating diversity of demographics.

6. Find honest mirrors

Where are the potential "journalists" in Rule Making corporations? If the business is to maintain its position over the long haul, they need to be those in middle management. They are closer to outside information about customer needs, market trends and emerging technologies than many of the senior executives to whom they report. First- and second-level managers also have less at stake in rationalizing past strategic decisions. What is needed are Rule Maker managers with the courage to tell the emperor, when necessary, that he is not wearing any clothes. Senior managers in intentionally ingrown companies need lots of help facing the truth about customers and competitors. Resist the temptation to tell higher-ups what they want to hear - important in all businesses, but a matter of long-term survival in Rule Makers. Doing this requires managers with the ability to maintain some "psychological" distance from their employer. This is the ability to be "in," but not "of," the Rule Maker. It is the skill of not checking in your personal "antenna" just because you have logged on to the corporate e-mail system.
Discover and maintain independent channels of information, sources that go beyond those officially monitored. And then build networks within the Rule Maker to spread what is learned. Information-sharing must be a two-way street. Senior officers need to carefully audit how they spend their time, keeping interactions with peers and bosses to no more than a quarter of their day. Spend half the remaining time with people outside the company (customers, suppliers, industry gurus, technology oracles), the other half with those inside, several levels below, closer to the firing line. These executives also need the self-confidence to seed all levels of the hierarchy with respected, listened-to devil's advocates. These are habits that executives of evergreen Rule Makers, like Levi Strauss, put into use daily. "I want to maintain a close enough feel for the business so that, when I'm receiving reports, I can validate or challenge them from my own experience," says CEO Robert Haas.

7. Manage peopleflow

The chief executive who presided over much of IBM's decline, John Akers, joined right after college and a stint as a Navy carrier pilot. IBM was the only company he had ever worked for. He worked up its ranks, with promotions almost annually, from sales trainee to marketing rep to branch manager to director of data processing to vice president to the very top jobs. When his old college hockey coach asked him how he made such an impressive climb, Akers attributed it all to his ability to be very nice to everyone he met on the way up. He was also promoted so frequently that he was seldom in a position long enough to be held accountable for an organizational unit's long-term performance. Kodak's fall from market dominance occurred during the time its key leaders were Colby Chandler and Kay Whitmore. Both joined, as did IBM's Akers, the business they were to lead directly upon graduation.
Each had almost 30 years' experience at Kodak before becoming chief executive; each had the technical background common to all of Kodak's leaders since George Eastman founded the company. This combination of an inward career orientation - in an inward-looking company - coupled with educational backgrounds more attuned to achieving astute technical perfection than market timing, can almost certainly guarantee difficulty in maintaining a strong market position. Now both IBM and Kodak are, slowly, rediscovering growth. Both have been headed by chief executives, Louis Gerstner and George Fisher, recruited from outside their industries. If a Rule Maker is to remain atop changing markets, it needs to never forget to manage its peopleflow as well as its cashflow: keep the gene pool fresh! Each type of company has its special set of strong and weak points. Each thrives in some industries, languishes in others. Uncharted territory requires one type of vehicle to cross it; a well-worn path, another. No one form of organization is necessarily better or worse than any other. It is more useful to distinguish companies by how closely or not they are adapted to the logic of their particular industry. The core message is very simple. To the victor belong many of the spoils. But not forever. The rewards of rule making can be very sweet. Just keep in mind that they, too, will pass at some point. Does this imply that decline is inevitable, that growth only leads to no-growth? Sometimes, but not always. Consider two other types of growth companies, sometimes maligned, often overlooked. Both offer fine possibilities for ongoing increases in sales and profits. They can provide welcome respite for a Rule Maker whose time has passed. In some markets they are clearly the best choice for growth. They are the Specialists and the Improvisers.

© Robert M. Tomasko 2002
The United States is peopled by the displaced and exiled, and divided by belonging. Who is inside and who outside; whom the government recognizes and whom it rejects, have been basic questions throughout its history. The ramifications reach into the realms of intimacy. When two people fall in love and plan to live the rest of their lives together, they may depend on the state to acknowledge and safeguard their union: never more so than if they have different nationalities. United States policy is to help foreign spouses and fiancé(e)s immigrate and live with their U.S. partners. But not if that partner is of the same sex. Binational same-sex partnerships are lesbian and gay couples where one partner is a U.S. citizen or permanent resident, the other a foreign national.1 In 2000, the U.S. Census, investigating household makeup, estimated that 35,820 such couples lived together in the United States. This represented some 6% of all lesbian or gay couples counted in the country. These couples dwell in every state, make their way at every income level, and represent a mosaic of American diversity. The foreign-national partners come from almost every nation in the world. Their relationships have no recognition in federal law, and no rights. These figures only suggest the issue's scope. They do not count couples who hide the fact that they are partners, lest the one applying to stay face homophobia in the immigration or asylum process. They do not count couples who avoid the census, because the foreign partner lives here illegally to maintain the relationship, or fears being forced to do so after a visa expires. They do not count couples who do not share a home, or who live in different countries, because U.S. immigration law and marriage policy will not permit them to share their lives together within its borders. They do not count couples where the U.S. partner has chosen exile, so that they can lead common lives in another, friendlier country than this one.
(At least nineteen countries have acknowledged lesbian and gay relationships in immigration law and policy, while the U.S. still refuses. See Appendix B for more information.) Undoubtedly the more than 70,000 members of such families whom the last census counted are only a part, perhaps a very small one, of the whole. This report documents the crippling barriers such families face in pursuing a goal enshrined in America's founding document: happiness. Those barriers center around a simple fact. With only rare exceptions, a heterosexual couple where one partner is foreign, one a U.S. citizen, can claim the right to enter the U.S. with a few strokes of a pen.2 They need not even marry: they need only show to a U.S. consulate abroad that they intend to do so and have met at least once before in their lives. (Waivers of the latter rule are possible.) In practice, U.S. immigration is filled with obstacles for many who seek to enter. Any binational family may encounter injustices and bureaucratic barriers on the road to reunification. A flawed and irrational system demands overhaul. But a lesbian or gay couple cannot even claim basic rights. Their relationship, even if they have lived together for decades, even if their commitment is incontrovertible and public, even if they have married or formalized their partnership in a place where that is possible, is irrelevant for purposes of entering the United States. Instead, they face a long limbo of legal indifference, harassment, and fear. Couples told us stories of abuse by immigration officials, and even deportation. They described the devastating impact not only on their partnerships but on their careers, homes, children, livelihoods, and lives. An American man, faced with the expiration of his Venezuelan partner's tourist visa, wrote us: I am very proud to be an AMERICAN.
We are trying to find other options to allow Jorge to stay in the country; we do not know what options we have, but with our faith in God we believe we will find the answers. I respect the laws of the United States and will continue to do so if Jorge's visa expires. We have no intention to break up or separate; this is not an option; it has never been an option for the heterosexual couples. Jorge dreams about being an American citizen, celebrating the incredible freedom afforded to Americans, and to once again be proud of a country he strongly believes in.3 Some couples find such stubborn confidence impossible. A woman in Iowa, living with her partner from New Zealand, wrote that immigration laws: do not allow my partner to live a free life; she is in constant fear of being deported and removed from this country and her family. We live a struggle every day as there is only one income. Together we are raising a twelve-year-old son. Nadia, my partner, is my son's mother also, and losing her would destroy that little boy's life; she is just as much a part of him as I am. She keeps this family together and whole. I am also a veteran of the United States Navy and have done my time and service to my country. It breaks my heart that for all I've done with this country it will not see the person I love, who has strength to hold me up when life is bad; she cannot remain even after the commitment we have put into each other and our son's life. I cannot imagine life without her. How could anyone live without their heart?4 Many couples are separated, many families broken up. A woman in North Carolina described how her Hungarian partner and the children they were raising together were forced to leave the country. Even though the children went to school here and grew up here and this is Home! It's just not right. No family should be forced to be apart no matter what the sex is. It's all for love. No one should determine how to live your life like this, no one.
This is how immigration laws have affected us. We are separated, and without each other. We just want to be together, that's all. No harm in that.5 Over and over couples spoke of the contradiction between what they thought were American values and the reality they know. Liz, divided from her Jamaican partner Carly, said, I have a right to pursue happiness and Carly makes me happy. We don't hurt anyone. That's all.6 Many U.S. citizens go into exile to preserve their families and stay with their life partners. One man, living an ocean away from his Portuguese partner, said: The U.S. government does not want to acknowledge that homosexuals are entitled to be happy, just as any human beings. Now that I have finally found my soul mate, the U.S. government wants to tell me that I do not have the right to be with him. If immigration laws don't change in the near future, I will be leaving the United States, even if that means being unemployed and living in misery. At least I'll be with the one I love.7 A U.S. woman who has moved to Denmark to be with her partner of almost twenty years told us, It was a lot of letting go. I had to give up my career; I had to give up my country. But I gained a lot too. I gained the recognition of our union here. I would never go back on a decision that allowed us to have and to raise our two wonderful kids.8 Family reunification is an express and central goal of U.S. immigration policy, and has been for more than fifty years. Immigration law puts priority on allowing citizens and permanent residents to sponsor their spouses and relatives for entry into the U.S.9 A commission appointed by Congress to study immigration policies in 1981 concluded: Reunification of families serves the national interest not only through the humaneness of the policy itself, but also through the promotion of the public order and well-being of the nation.
Psychologically and socially, the reunion of family members promotes the health and welfare of the United States.10 But lesbian and gay people's families do not count. Their partners are excluded from the definition of spouse. Such couples are trapped between two ferocious panics sweeping the U.S. One is over equality in civil marriage. Amid rancorous debate about whether to recognize lesbian and gay people's partnerships at any level, some distort the demand for simple fairness into a claim for special rights, and portray the principle of non-discrimination as a bid for privilege. Some opponents of gay marriage openly define lesbian, gay, bisexual, and transgender people themselves as second-class citizens. One makes clear that homosexuals are not only unequal but unqualified to participate in society's basic benefits: Homosexual marriage will devalue your marriage. A license to marry is a legal document by which government will treat same-sex marriage as if it were equal to the real thing. A license speaks for the government and will tell society that government says the marriages are equal. Any time a lesser thing is made equal to a greater, the greater is devalued. Granting a marriage license to homosexuals because they engage in sex is as illogical as granting a medical license to a barber because he wears a white coat or a law license to a salesman because he carries a briefcase. Real doctors, lawyers, and the public would suffer as a result of licensing the unqualified and granting them rights, benefits, and responsibilities.11 The fear of what one writer called ceremonialization of anal sodomy12 led in 1996 to the so-called Defense of Marriage Act. Limited local recognition of same-sex partnerships already had no effect on immigration policy, which is a federal concern. The Defense of Marriage Act, however, declared that for all purposes of the federal government, marriage would mean only a legal union between one man and one woman as husband and wife.
The exclusion of lesbian and gay couples from U.S. family-reunification policy was written unequivocally into law. Binational couples, along with tens of thousands of other non-citizens, also face the rising panic over immigration in the U.S. That exclusionary impulse is nothing new. A conservative who calls immigration the most immediate and most serious challenge to America's traditional identity13 echoes, perhaps unwittingly, nativist rhetoric more than a century and a half old. After the September 11, 2001 attacks, cultural difference was increasingly seen as criminal threat. Foreign visitors and immigrants became the single greatest threat to the lives of America's 280 million people.14 Polemicists dubbed the Mexican border Terrorist Alley. Politicians complained that taxpayers had to pay to bury undocumented immigrants who expired trekking across the desert (saying immigration imposes incredible financial strains, sometimes in the least likely ways)15 yet also objected to systems allowing those aliens to signal for help before dying of thirst (Could there be a more blatant slap in the face of American taxpayers than to have them fund such disgraceful boondoggles?)16 Lesbian, gay, bisexual, and transgender foreigners share the spreading stigma and, like other non-nationals, encounter locked doors, and cells. In December 2005, the House of Representatives passed the Border Protection, Antiterrorism & Illegal Immigration Control Act. The bill, and similar proposals, would criminalize undocumented immigrants and those who help them. Unlawful presence, now a civil immigration violation, would become a crime subject to state and local police pursuit. An undocumented immigrant would be barred from seeking asylum, and their detention would be mandatory.17 An immigrant could fall victim to this provision one day after a visa expires. Student visa holders would be at risk if they dropped below required course loads.
And anyone who knowingly tries to help a foreigner in this predicament could become a criminal. Many binational lesbian and gay couples could be injured. A U.S. citizen whose same-sex partner became undocumented could be convicted of smuggling them, and imprisoned, and stripped of home and property. Freedom from discrimination is a human right. The hardship, harassment, and pain that same-sex binational couples endure in confronting and trying to conform to U.S. law show the discriminatory consequences of denying a class of people the recognition their relationships need and deserve. Equally important, the losses and separations also reflect a broken immigration system: inconsistent standards, processes ridden with arbitrariness and delay, a ramshackle set of often conflicting rules which encourage discrimination and abuse. Innumerable families negotiating the U.S. reunification system find enormous impediments to living together in this country. The problems of lesbian and gay couples are only one aspect of the system's failures. As one gay Argentinean and his American partner told us, ruefully: Bureaucracy doesn't move at the pace of people's lives.18 Once again, though, while heterosexual families can elicit a measure of public and political sympathy, the animus against lesbian and gay families is embodied in law. Even their claim to family status is foreclosed from the start. The United States urgently needs to enact comprehensive immigration reform, ensuring adequate and fair avenues for immigrants to enter the United States both temporarily and permanently, and offering reasonable roads to legal status for undocumented immigrants already living and working in the country. Ending the egregious discrimination that excludes lesbian and gay families from reunification policies must be part of that. Traditionally, the Supreme Court has accorded Congress wide scope to regulate entry to the U.S., holding it is part of the plenary powers given the legislature by the U.S.
Constitution. This power is not absolute, though, or completely immune from scrutiny for discrimination and injustice. The Court has acknowledged cases in which the alleged basis of discrimination is so outrageous that denial of entry may be challenged, including denying people entry solely because of their race or religion.19 Moreover, it is important to stress that all immigrants on U.S. soil, including those here illegally, are guaranteed the same rights as citizens, with only a few exceptions, such as the right to vote. The U.S. Constitution grants to the people or persons, not just to citizens, the rights to due process and equal protection of the law, and to be free from arbitrary detention or cruel and unusual punishment. Yet U.S. citizens (and permanent residents) are equally victims along with their foreign-national partners. Solely because of their sexual orientation or gender identity, they find their relationships unrecognized, their families endangered, their lives shadowed by separation and dislocation. Often, their relationships are wrecked, or driven underground. The philosopher Tzvetan Todorov (writing in an altogether different context) has tried to define dignity, vital among the panoply of values that make up human rights. He finds it connected to the human ability to make meaningful decisions about one's own life and to make these decisions known. The important thing is to act out the strength of one's own will, to exert through one's initiative some influence, however minimal, on one's surroundings. It is not enough simply to decide to acquire dignity: that decision must give rise to an act that is visible to others (even if they are not actually there to see it). This can be one definition of dignity.20 Denying recognition to one of the most important choices a human being can make, forcing the relationship consequent on that decision into terrified invisibility: these assault human dignity in an essential way.
Human Rights Watch and Immigration Equality both strongly support full equality in civil marriage, allowing same-sex couples the same recognition under law that heterosexual couples enjoy. Together we regard discrimination in the legal recognition of relationships as a gross violation of human rights.21 However, repairing the inequity in the immigration system that tears same-sex binational families apart is an issue distinct from the debate over same-sex marriage. Many other countries which have accorded immigration rights to such couples have done so separately from enacting civil partnerships or opening marriage status. Acknowledging this discrimination as a remediable failure of the immigration system is the aim of a bill now before Congress. The Uniting American Families Act (UAFA) would add the category permanent partner to the classes of family members entitled to sponsor a foreign national for U.S. immigration. The UAFA would not grant couples recognition or rights for any purposes other than immigration. Nor is it likely to open the gates to waves of newcomers. The figure of almost 40,000 binational lesbian and gay couples whom the census discovered represents a significant population suffering serious harm, but it hardly suggests that legal recognition would add more than minimally to the number of immigrants (between 700,000 and one million) whom the U.S. already admits yearly.22 People claiming permanent partnership would have to prove the fact, and undergo the same rigorous investigations that authorities already impose on binational married couples, meaning the bill would not open new possibilities for marriage fraud. Rather, the bill would address an egregious inequality. It would protect dedicated families and their children. It would prevent the drain of talented people to other countries. Its passage is urgent. (A full description of the UAFA is found in Appendix A.)
Human Rights Watch and Immigration Equality call on the United States Congress to:

Human Rights Watch and Immigration Equality call on the United States Department of Homeland Security, the Attorney General of the United States, and the U.S. Department of State to:

o determines eligibility for bond on post-order custody reviews;
o considers cancellation of removal applications, extreme hardship waivers, and similar applications and decisions;
o recognizes the status of a couple entering the United States on the I-94 customs declaration;
o makes consular decisions on visa eligibility based on family relationships.

Human Rights Watch and Immigration Equality call on the United States Department of Homeland Security to:

The term "lesbian and gay" is frequently used in this report to refer to people whose identities (or behaviors and desires) could be variously described as lesbian, gay, bisexual, or transgender. This term is used to minimize reducing people's identities to an alphabetical acronym, "LGBT", and is used for simplicity and convenience. Its use should not imply that the couples whose stories are told here do not include bisexual or transgender people.

Most exceptions involve cases where U.S. law applies special rules to nationals of a particular country. For the consequences of one such instance, see Families Torn Apart: The High Cost of U.S. and Cuban Travel Restrictions, A Human Rights Watch Report, October 2005, vol. 17, no. 5 (B).

E-mail to Immigration Equality from Shaine (last name withheld at his request), November 6, 2003.

E-mail to Immigration Equality from Dara and Nadia (names changed at their request), September 13, 2003.

E-mail to Immigration Equality from Sandra (last name withheld at her request), October 29, 2005.

Human Rights Watch interview with Liz and Carly (names changed at their request), New York, February 10, 2005.

E-mail to Immigration Equality from Rafael (last name withheld at his request), undated, 2003.
Human Rights Watch/Immigration Equality telephone interview with Gitte and Kelly Bossi-Andresen, December 20, 2005.

Immediate relatives of U.S. citizens are exempt from quotas and generally processed quickly through the immigration system; these include spouses and minor children of U.S. citizens, and parents of U.S. citizens who are over twenty-one. There are also family preference immigration categories. These include adult children and siblings of U.S. citizens, and spouses, minor children, and adult unmarried children of lawful permanent residents. In these cases there are severe backlogs, and waiting lines of years.

See U.S. Select Committee on Immigration and Refugee Policy, U.S. Immigration Policy and the National Interest (1981), p. 112, quoted in Chris Duenas, Coming to America: The Immigration Obstacle Facing Binational Same-Sex Couples, Southern California Law Review, vol. 73 (2000), pp. 811-841. See also Linda Kelly, Preserving the Fundamental Right to Family Unity: Championing Notions of Social Contract and Community Ties in the Battle of Plenary Power Versus Aliens' Rights, Villanova Law Review, vol. 41 (1996), pp. 725, 729.

Jan LaRue, Talking Points: Why Homosexual Marriage is Wrong, Concerned Women for America, September 16, 2003, at http://www.cwfa.org/articledisplay.asp?id=4589&department=LEGAL&categoryid=family (retrieved January 10, 2005).

John Haskins, 'Conservative' Romney buckles and blunders, World Net Daily, December 24, 2005, at http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=48056 (retrieved January 4, 2006).

Samuel P. Huntington, The Hispanic Challenge, Foreign Policy, no. 142 (March/April 2004), pp. 30-45.

John Perazzo, Illegal Immigration and Terrorism, Front Page Magazine, December 18, 2002, at http://www.frontpagemag.com/Articles/ReadArticle.asp?ID=5147 (retrieved January 5, 2005).

Webpage of the House Immigration Reform Caucus, at www.house.gov/tancredo/Immigration/WYB.2004.09.29.html (retrieved December 15, 2005).
John Perazzo, Illegal Immigration and Terrorism, Front Page Magazine, December 18, 2002, at http://www.frontpagemag.com/Articles/ReadArticle.asp?ID=5147 (retrieved January 5, 2005).

See Oppose the Border Protection, Antiterrorism, and Illegal Immigration Control Act: Letter to House Judiciary Committee Members opposing HR 4437, Human Rights Watch, December 7, 2005, at http://hrw.org/english/docs/2005/12/09/usdom12188.htm.

Human Rights Watch/Immigration Equality telephone interview with Fabian and Robert (last names withheld at their request), October 6, 2005.

Reno v. American-Arab Anti-Discrimination Committee, Supreme Court of the United States, 525 U.S. 471 (1999) at 491.

Tzvetan Todorov, Facing the Extreme (New York: Henry Holt, 1996), p. 61.

See Non-Discrimination in Civil Marriage: Perspectives from International Human Rights Law and Practice, a Human Rights Watch briefing paper, September 3, 2003, at http://hrw.org/backgrounder/lgbt/civil-marriage.htm.

Recent annual figures number 849,807 for 2000; 1,064,318 for 2001; 1,063,732 for 2002; and 705,827 for 2003. See the Fiscal Year 2003 Yearbook of Immigration Statistics, online at http://uscis.gov/graphics/shared/aboutus/statistics/IMM03yrbk/IMMExcel/Table01.xls (retrieved December 15, 2005).

For the national origin aspect of this recommendation, see U.N. Committee on the Elimination of Racial Discrimination, General Recommendation 30, CERD/C/64/Misc.11/rev.3, March 2004.
Michael Faraday (September 22, 1791 – August 25, 1867) was an English physicist and chemist who is one of the most influential scientists of all time. His most important contributions, and best known work, were on the closely connected phenomena of electricity and magnetism, but he also made very significant contributions in chemistry. Faraday was principally an experimentalist; in fact, he has been described as the "best experimentalist in the history of science". He did not know any advanced mathematics, however. Both his contributions to science, and his impact on the world, are nonetheless vast: his scientific discoveries underlie significant areas of modern physics and chemistry, and the technology which evolved from his work is even more widespread. His discoveries in electromagnetism laid the groundwork for the engineering work in the late 1800s by people such as Edison, Siemens, Tesla and Westinghouse, which brought about the electrification of industrial societies, and his work in electrochemistry is now widely used in the field of chemical engineering. In physics, he was one of the first to explore the ways in which electricity and magnetism are connected. In 1821, shortly after Oersted first discovered that electricity and magnetism were associated, Faraday published his work on what he called electromagnetic rotation (the principle behind the electric motor). In 1831, Faraday discovered electromagnetic induction, the principle behind the electric generator and electric transformer. His ideas about electrical and magnetic fields, and the nature of fields in general, inspired later work in this area (such as Maxwell's equations), and fields of the type he envisaged are a key concept in today's physics. In chemistry, he created the first known compounds of carbon and chlorine, helped to lay the foundations of metallurgy and metallography, succeeded in liquefying a number of gases for the first time, and discovered benzene.
Perhaps his biggest contribution was in virtually founding electrochemistry, and introducing terminology such as electrolyte, anode, cathode, electrode, and ion. The Faraday family came from the North of England; before Michael Faraday was born, his father James, a blacksmith, took his wife Margaret and two small children to the South, in search of work. The family settled briefly in Newington Butts in South London (it was then a separate village, but is now part of Southwark), where Faraday was born. The family, which eventually included four children (two boys and two girls), soon moved into London itself, living over a stable. His father was in poor health (he died in 1810), and unable to provide well for his family; as a result, Faraday grew up in poverty. The family was close, and drew strength from their orthodox faith, the Sandemanians, a small dissident spin-off from the Presbyterian church. Faraday would stay faithful to that religion for the rest of his life. Very little is known of Faraday's early life, but he apparently received only an elementary education, being taught how to read, write, and do simple arithmetic. In 1804, at the age of thirteen, out of economic necessity he began work as a delivery-boy for the shop of the bookseller and bookbinder George Riebau, a French émigré. At fourteen he became an apprentice with Riebau, and moved in with Riebau's family. The easy familiarity with mechanical activities which he picked up in this job no doubt stood him in good stead in his later life as an experimentalist. In a bookseller's household there were always books around for him to read, and Faraday was quick to take advantage of this. For instance, the third edition of the Encyclopaedia Britannica was one of the shop's bookbinding assignments, and his fascination for electricity was first stirred by reading the encyclopedia article about it.
His first simple experiments at this time, which included a crude electrostatic generator and a weak voltaic pile, were performed as a result of reading it. From 1810, encouraged by Riebau, Faraday began to attend popular public lectures given by John Tatum which covered the entire range of 'natural philosophy', as science was then known; his older brother Robert paid his entrance fees. These covered a number of different topics, but those on electricity, galvanism and mechanics were of particular interest to Faraday. He made detailed notes of Tatum's lectures, which he later bound into a set of four volumes, and presented to Riebau, inscribed with a dedication thanking him for his encouragement of Faraday's interests in the sciences. At this time, Faraday also joined the City Philosophical Society, which had been founded in 1808, and consisted of a number of (youngish) people devoted to self-improvement, who met every other week at Tatum's house to hear and give lectures on scientific topics and to discuss them. It was there that Faraday delivered his very first lecture.

Initial professional career

At twenty-one, nearing the end of his apprenticeship, he was given tickets for a series of four lectures on chemistry delivered by Sir Humphry Davy at the Royal Institution, a gift from a customer of the bookshop, who was a member of the Royal Institution. These lectures, in the spring of 1812, were recorded by Faraday in careful lecture notes, which he neatly bound into a book. Faraday later sent a copy of his lecture notes to Davy, who was no doubt pleased by the attention to his lectures; he interviewed Faraday, but at that point could do nothing for him. By the fall of 1812 Faraday was a fully-qualified bookbinder, and moved to another firm. He did not particularly like his new job, but within a few weeks his life took a sudden turn.
Davy needed help for a few days at the Royal Institution, from someone with a rudimentary knowledge of chemistry, and Faraday got the job (probably as the result of a recommendation from a customer of his first employer). After the temporary job ended, Faraday had to leave the Royal Institution. However, as luck would have it, soon afterwards Davy's laboratory assistant was fired after becoming involved in a fight at the Royal Institution (apparently because of a drinking problem), and the position became vacant. Not surprisingly, Faraday got the job, and started work as Chemical Assistant at the Royal Institution on March 1, 1813, where he would stay for fifty-two years, until 1865. Half a year later, Faraday was invited by Davy to accompany Lady Davy and him on a tour of continental Europe. Davy was a renowned "natural philosopher" (the name "scientist" for this profession was only coined later, by William Whewell), and had unquestioned entry to scientific circles there. Faraday accepted the invitation, but during the tour sometimes regretted that decision, because Lady Davy (who is known to have been very snobbish) treated him as a lowly servant. Professionally, however, the tour was a great success for Faraday, because he had the chance to converse with many of the leading scientists in France, Switzerland and Italy, including such figures as Ampère and Volta. In addition, he saw the Alps and the Mediterranean, learned French and Italian, and became Davy's collaborator, not just his assistant. The chemist John Hall Gladstone (1827–1902), who knew Faraday well, wrote: This year and a half may be considered as the time of Faraday's education; it was the period of his life that best corresponds with the collegiate course of other men who have attained high distinction in the world of thought. But his University was Europe; his professors the master whom he served, and those illustrious men to whom the renown of Davy introduced the travelers. 
Faraday's improved abilities were recognized upon his return to the Royal Institution; he was promoted to superintendent of apparatus, and was given better rooms (he was living at the top of the Royal Institution building in Albemarle Street). For most of the 1810s and 1820s, his direct supervisor was Davy's replacement as Professor of Chemistry, William Thomas Brande. Until early in 1820, Faraday's work was mostly in chemistry, and he earned an international reputation as a good, solid chemist, but not yet a brilliant one. His reputation as a chemist received a significant boost when he managed to produce the first known compounds of carbon and chlorine in 1820. He produced these by substituting chlorine for hydrogen in what was called 'olefiant gas' (ethylene); these were the first chemical substitution reactions to be demonstrated. During the early 1820s he also investigated steel alloys, work which helped to lay the foundations of metallurgy and metallography.

First major discovery

His work at the Institution had initially been almost entirely on chemistry, but electricity had always been one of his interests. In 1820, several significant discoveries on the Continent (notably by Oersted, Biot and Ampère) which began to establish a connection between electricity and magnetism interested Davy; Faraday was therefore able to begin work in the area. His first major achievement was to show that electricity could be used to force a magnet into continual rotational motion, as long as the electricity continued to flow. He also reasoned that if an electric current in a wire could move a magnet, then if the magnet were held fixed and the wire allowed to be mobile, a current flowing in the wire should cause it to move. In the experiment he created to investigate these concepts, he managed to demonstrate both of these effects (see the section below on electromagnetic rotation for more details).
In one part of the experiment a steady direct current moved a magnet in circles; in another, a similar current caused a wire to steadily rotate. These discoveries, made in September, 1821, are the basis of all electric motors, and brought Faraday instant world fame. Unfortunately, the achievement started a rift between Davy and Faraday, which a later incident over work on liquefying chlorine was to exacerbate. Davy was apparently of the opinion that Faraday had relied on some work done by Davy and William Wollaston, without properly acknowledging their contribution. (In April 1821, Wollaston, after hearing of Oersted's discovery, had visited the Royal Institution, and in collaboration with Davy had tried, in vain, something similar to what Faraday managed to do later that year.) Also in 1821, Faraday received his first promotion at the Royal Institution (to Superintendent of the House), and married Sarah Barnard, another Sandemanian, on June 12; they never had children. Sarah was a steadying influence in Faraday's life; she was a warm and charming person filled with maternal feelings which, in the absence of children, she lavished upon her husband and her nieces. The couple lived "above the store", at the top of the Royal Institution building, until 1862.

Further scientific accomplishments

Following a suggestion of Davy, Faraday managed to liquefy chlorine in March 1823 (but only after he was lucky to escape serious injury in several unexpected laboratory explosions). This achievement further displeased Davy, who felt that he deserved partial credit for this discovery, since he had suggested the problem to Faraday. Faraday also succeeded in liquefying a number of other gases, including carbon dioxide and sulphur dioxide. Faraday became a Fellow of the Royal Society in 1824, with one vote against him; it is believed that this was Davy's.
The usual explanation was that this resulted from their prior disputes, but it may have been simply because of Davy's public stance against nepotism. Faraday never let Davy's opposition affect his respect for Davy, although he acknowledged that their relationship had become strained. Possibly as a result of his disputes with Davy, he was directed to spend much of the 1820s working on less important problems, but he still managed a few significant discoveries. In 1825, during research on illuminating gases, he discovered benzene, which he called bicarburet of hydrogen; he isolated it from a liquid obtained in the production of oil gas. Some decades later, benzene would be one of the keys in the development of organic chemistry. During the late 1820s, he was also directed into research on glass, intended for producing better optical glass for telescopes; this did not have much result, although he created the recipe for heavy glass, with a very high refractive index. In 1825, Faraday became Director of the Laboratory of the Royal Institution and, beginning in 1826, he revived the tradition of popular lectures at the Institution, giving many himself. For many years, around Christmas he and others delivered a short lecture series especially for children, which attracted an audience from the upper social classes of London. The most famous of the Christmas lecture series, from 1848, called The Chemical History of a Candle, was published, and has since gone through innumerable editions in many languages. Another well-known series directed to young persons is about various forces in nature, from 1859. These Christmas lectures continue to this day, and are now televised, thereby reaching a much larger audience than the originals. Most significant scientific achievement After Davy's death in 1829, Faraday moved back to important areas of research, returning to an area he had first investigated in December 1824: the use of a magnet to produce electricity.
(At this point in time, the only ways to produce electricity were with rubbing glass and amber and—since Volta's discovery of 1800—by primitive chemical batteries.) In August, 1831 he made what some consider his most important discovery, which is that a changing magnetic field (either from a moving magnet, or a wire moving through a magnetic field) can 'induce' an electric current in a wire (see the section below on electromagnetic induction for more details). He named this phenomenon electromagnetic induction, and it is used today in almost all production of electricity, as well as AC motors. In 1833 Faraday was honored by being appointed Fullerian Professor of Chemistry at the Royal Institution; the chair was especially created for him, and still exists today. (One of Faraday's biographers, J. H. Gladstone, later held the Fullerian professorship for three years.) He was to receive a number of other honours over the years, such as the Royal Medal and the Copley Medal (both from the Royal Society). Starting in 1832 Faraday also began an investigation which fortuitously turned into the very important work he did in electrochemistry. He started out with a desire to show that the various forms of electricity (static electricity, electricity produced by a battery, electricity in biology, e.g., of electric rays, and electricity produced by his induction methods) were all the same thing. In the course of this work, he discovered that it was the actual passage of electricity through a conducting liquid which decomposed the chemicals therein, not some sort of action at a distance of electricity, as had previously been believed. His extended investigations in this area laid the groundwork for electrochemistry, and he formulated two laws in that area which now carry his name. He also devised the terminology used in this field, which was derived from classical Greek, with the assistance of his friend William Whewell of Trinity College, Cambridge, who knew the language. 
During this entire period, and continuing on through the 1840s, he was developing the idea of electric and magnetic force lines. Because of his background, Faraday knew hardly any mathematics, and his intuitive ideas were qualitative and non-mathematical, so he was not able to put them into formal terms. Some details of his concepts contradicted the then widely-held belief that electromagnetic effects involved instantaneous action at a distance. Most of the contemporary physicists, who generally were well versed in the mathematical formulation of Sir Isaac Newton's mechanics, in which instantaneous action at a distance plays an important role, frowned upon Faraday's ideas. They looked upon electricity as an immaterial fluid that flows through matter; Faraday took a different point of view. He thought of it as a vibration, which was transmitted from place to place by intermediate contiguous particles. (Later physicists revived Descartes' idea of the ether to carry the vibrations.) By 1850, Faraday's thinking had produced a radically new view of space: instead of being 'nothingness', a mere void in which various material objects were located, he saw it as a medium capable of supporting electric and magnetic forces, through collections of what he called lines of force. The forces were not localized in the particles which were the source of them; rather, their manifestation, the force lines, were to be found throughout the space around them. This marked the birth of field theory, which today is a key concept in all of physics. The collection of all the force lines forms a field of force, a term coined by William Thomson (later Lord Kelvin), who advocated and extended Faraday's ideas; initially, Faraday felt that Thomson was the only scientist who really understood his field ideas.
These ideas were later taken up, extended, and refined by James Clerk Maxwell, who maintained that the basic ideas for his mathematical theory of electromagnetic fields came from Faraday, and that his contribution was to turn them into a concise and elegant mathematical form. Maxwell's characterization of his contributions, while admittedly first-hand, may be overly modest. 'Maxwell's equations', as they are now known, are today still the accepted form of the theory of electromagnetism. Among many other important results, they show that visible light consists of electromagnetic waves, as are radio waves, microwaves, infrared and gamma radiation, which are collectively known as electromagnetic radiation. Last major discovery Faraday was strong physically, but suffered occasionally from headaches, memory lapses, and bouts of depression. These symptoms increased in severity and frequency until in 1840, at the age of forty-nine, Faraday had a major breakdown, the exact nature of which is not certain. For four years he was hardly able to work, and his health never fully recovered. However, by 1845 he was well enough to resume work, and he started his research activities again. Because he believed strongly in the unity of forces, he again investigated the effect of magnetic fields on light, an area he had previously investigated starting in 1822, but without success; in September, 1845, he made another major discovery. Acting on a suggestion by William Thomson, whose mathematical work on Faraday's field ideas had produced a prediction that a magnetic field should affect polarized light, he discovered such a connection: the polarization plane of polarized light is rotated by a magnetic field. This ability of magnetism to affect light is now known as the Faraday effect. In the same series of experiments, he also discovered diamagnetism.
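The magneto-optical rotation just described was later quantified (by others; Faraday himself worked non-mathematically) in what is now called the Verdet relation: the polarization plane rotates through an angle proportional to the field strength and the path length in the medium. A minimal sketch, where the Verdet constant used is an illustrative figure and not from the text:

```python
# Minimal sketch (not from the source): the Faraday effect as quantified by
# the later Verdet relation, beta = V * B * d, where V is the material's
# Verdet constant, B the magnetic flux density, and d the path length.
import math

def faraday_rotation_deg(verdet_rad_per_tm, field_tesla, length_m):
    """Rotation of the polarization plane, in degrees."""
    return math.degrees(verdet_rad_per_tm * field_tesla * length_m)

# Illustrative values: V of about 134 rad/(T*m) is a commonly quoted figure
# for terbium gallium garnet at 632.8 nm (an assumption, not from the text).
print(round(faraday_rotation_deg(134.0, 1.0, 0.02), 2))  # ~153.55 degrees
```

The linearity in both B and d is the signature of the effect: doubling either the field or the sample length doubles the rotation.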
Although these results did not have such important direct practical applications as some of his earlier work, they were of considerable importance in the development of electromagnetic theory. True to his Sandemanian principles, and his indifference to honours and fame, Faraday turned down the offer of a knighthood, and twice declined to become president of the Royal Society. In 1861, when he was seventy years old, he resigned from the Royal Institution, but he was asked to stay on in a nominal post, which he did until 1865. In 1862, he and Sarah moved out from Albemarle Street into a house at Hampton Court provided a few years earlier by Queen Victoria, at the suggestion of her husband Prince Albert. He died there on August 25, 1867. In a characteristic display of his lifelong modesty, he had turned down an offer to be buried in Westminster Abbey, as he preferred a simpler funeral and grave (although he does have a memorial plaque there); he is buried in Highgate Cemetery. Faraday is memorialized in a number of ways: in addition to the farad, an electrical unit named after him, statues of him stand at the Royal Institution, and outside the Institution of Electrical Engineers in London; a number of university buildings also bear his name. In perhaps his most notable honour, his image appeared on the British 20 pound banknote (see image at right) for some years—an honour given to only a very few scientists. His successor in giving popular lectures on science at the Royal Institution, John Tyndall, who in 1853 became a professor at the Royal Institution, said of him: Taking him for all and all, I think it will be conceded that Michael Faraday was the greatest experimental philosopher the world has ever seen; and I will add the opinion, that the progress of future research will tend, not to dim or to diminish, but to enhance and glorify the labours of this mighty investigator. Time has fully confirmed the accuracy of Tyndall's estimation.
This section contains some more technical detail on his most important results in physics and chemistry. When Faraday heard of Oersted's 1820 discovery that a steady electric current in a wire generates a cylindrical magnetic field (with the current-carrying wire as the axis of the cylinder), it occurred to him that a magnetic pole would be pushed around a circle by such a field. Hence, it would rotate forever, or at least as long as the current is flowing. He also reasoned that if a current in a wire can move a magnet, a magnet should be able to move a current-carrying wire. In 1821 he designed the apparatus shown in the figure on the right: it includes two distinct mechanisms, one for each of the two basic concepts he was working on. In one, electricity propels a moveable magnet, and in the other, a fixed magnet causes a mobile wire to move when electricity flows through it. In the vessel on the left, a strong bar magnet floats on end in a mercury bath, held in place only by a thread at its bottom. (Recall that mercury is a heavy, liquid, and metallic element that is a very good conductor of electricity.) A fixed copper wire dips into the mercury at the top of the bath; at the bottom of the vessel, another wire also projects into the mercury. In the vessel on the right, another bar magnet is fixed in an upright position in the middle of another bath of mercury. A conducting (copper) socket extends into the bottom of the bath, and a copper wire which hangs from a flexible joint above the mercury bath dips into the top of the bath; the joint allows the top wire to pivot relatively freely around the joint. When a direct current is switched on in either vessel (running in through one wire, through the mercury bath, and out through the other wire), it produces motion of the mechanism in that bath. 
The wire in the one on the right rotates around the magnet so fast that—as described by Faraday—the eye can scarcely follow the motion; the magnet on the left rotates around the fixed wire. Note that Faraday's setup is such that only one pole of each of the two poles of the magnets is employed. If either of the magnets is turned around (i.e., the North and South poles are interchanged), the rotation which is observed in that vessel changes direction. The same happens if the direction of the current is reversed. Faraday, who coined the term electromagnetic rotation for this effect, had in fact invented a primitive precursor to the electric motor. In a series of experiments performed in August, 1831, Faraday, knowing that electricity can create magnetism, investigated whether the converse effect is also true; in other words, whether magnetism has an effect on an electric current. For this, he wound two coils of insulated wire around either side of a soft iron ring of six inches external diameter; the ring itself was 7/8 inch thick. (See the illustration on the left; the original apparatus for this, and other of his induction experiments, is preserved to this day in the Royal Institution.) The coils were not connected to each other. The coil A on the right-hand side could be connected to a battery; a copper wire attached to the coil B was led over a magnetic needle a few feet away from the ring. At the moment the battery was connected to coil A, and the current began to flow, the needle oscillated, and after a while settled into its original position. When the battery was disconnected from A, a disturbance of the needle was again observed—a result which surprised Faraday, who had not expected to see a pulse from both stopping as well as starting the flow of electricity. When Faraday performed this experiment, it was already known that the part of the iron ring covered by coil A becomes a magnet—an electromagnet—when current runs through the coil. 
(The positioning of the test magnetic needle a few feet away was necessary to ensure that the electromagnetism of the powered coil A did not affect the magnetic needle.) The magnetic North and South poles of this electromagnet are at the beginning and the end of coil A—which end is North depends on the direction of the current in the coil A. The magnetic field produced by the electromagnet is effectively channeled from one end of it around to the other end through the iron of the ring, which is more hospitable to magnetic fields than air, a property known as magnetic permeability, thereby turning the entire ring into a magnet. Since the iron ring passes through coil B, as long as the current is flowing through coil A, B is wound around a magnet as well. Faraday had discovered that changes in the strength of the magnetic field of the electromagnet (which occur when switching it on and off) produce a current in the wire connected to coil B (which, remember, is not in electric contact with A). The proof that a current was running through B was the movement of the magnetic needle; it was known from Oersted's work that a current flowing in a wire will move a nearby magnetic needle. Within a few weeks he further investigated this effect by setting up a cylindrical coil of about 4 centimeters diameter and about 16 centimeters long, made with 70 meters of wire. He then rapidly moved a permanent cylindrical bar magnet of about 22 centimeters length up and down inside the coil. He was then able to observe an electric current produced in the coil, 'induced' by the moving magnet. The direction of the current produced in the coil when the magnet was moved into the coil was the reverse of that produced when it was then pulled out. This important experiment proved that moving a wire through a magnetic field produced a current in the wire (because it was equivalent to holding the magnet still, and moving the coil instead).
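The behaviour observed in these experiments, a current only while the flux is changing, was later formalized (again by others, since Faraday worked non-mathematically) as Faraday's law of induction: the EMF induced in an N-turn coil equals minus N times the rate of change of magnetic flux through one turn. A rough numerical sketch with made-up flux values:

```python
# Minimal sketch (illustrative, not Faraday's own formulation): Faraday's law
# of induction, emf = -N * dPhi/dt, estimated from successive flux samples.

def induced_emf(n_turns, flux_samples_wb, dt_s):
    """Approximate EMF (volts) between each pair of flux samples (webers)."""
    return [-n_turns * (b - a) / dt_s
            for a, b in zip(flux_samples_wb, flux_samples_wb[1:])]

# Flux through one turn rising steadily from 0 to 0.01 Wb over 0.1 s:
flux = [0.001 * k for k in range(11)]   # sampled every 0.01 s
emfs = induced_emf(100, flux, 0.01)     # a 100-turn coil
print(round(emfs[0], 6))                # a steady -10 V while the flux changes
```

Note that a constant flux (a stationary magnet) gives zero EMF, which is exactly why Faraday saw a needle deflection only at the moments of connecting and disconnecting the battery.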
He later succeeded in generating a current by rotating a copper disk between the poles of a large horseshoe magnet; the disk had one wire fixed to its center, and another making sliding contact along the edge of the disk. This was the first generator, and all generators today are its descendants. It also pointed the way toward the realization of an electric motor, since reversing the operation (by feeding an electric current into the disk) would make it rotate. Faraday called this effect electromagnetic induction, and he was fascinated by the symmetry revealed by the effect that he now had discovered. Previously, it had been known that a moving electric charge (i.e. a current) produced a magnetic field; now it had been shown that a moving magnet produced an electric field (it is this field which causes the current to flow). Electromagnetic induction is of extreme importance in modern industrial society, because it is the principle behind electric generators and transformers. The basic concept behind electrochemistry is quite simple: it consists of a vessel containing an electrolyte, which is a solution of charged particles (ions). A direct electric current is run through the electrolyte, introduced via electrodes which are dipped into it; the ensuing decomposition of chemicals in the solution is known as electrolysis. Depending on the electrolyte's chemical composition, and the makeup of the electrodes, a large range of useful chemical reactions result at the electrodes. Since the solution is electrically neutral overall, the total charge carried by the positive particles (cations) is the same (in absolute value) as the total charge carried by the negative particles (anions). The current runs from the negatively charged electrode (the cathode) to the positively charged electrode (the anode). Inside the vessel, cations move to the cathode, pick up electrons—which carry negative charge—so that the cations are neutralized, and are deposited on the cathode.
If the neutral product is gaseous, it escapes from the cathode in the form of gas. At the anode, the opposite happens: anions lose their excess electrons, and are deposited on the anode. Outside the vessel, the electrons run from the anode to the cathode. Faraday's first law of electrochemistry states that the amount of substance deposited on the electrodes is proportional to the total amount of electric charge that has passed through the electrodes. In the case of a steady current, this is equal to the time that the current has been running, multiplied by the amperage (flow rate) of the current. The constant relating the two is named after him—Faraday's constant, the magnitude of the electric charge carried by one mole of electrons. Faraday's second law of electrochemistry is a recognition of the fact that anions and cations may carry more than one elementary charge (although the number is always a small integer). Therefore, it takes a corresponding number of electrons to neutralize the cation, while for the anion to become neutral, it must lose a like number of electrons. Whatever its charge, each ion yields only one atom once the requisite number of electrons has neutralized it, so twice as much electricity is needed to deposit an atom from a doubly charged ion as from a singly charged one. Faraday was the first to see these relationships clearly; moreover, all the terminology in the field (electrolysis, electrolyte, electrode, anode, cathode, ion, anion, and cation) was created by him. Dielectrics and Faraday's cage In his work on electrolysis, Faraday had noticed that many liquids which conduct an electric current become non-conducting when frozen. For instance, slightly acidic water is a good conductor, but when turned into ice is an insulator. Moreover, two plate electrodes with opposite electrical charge on them attract each other, even if a non-conducting substance—called by Faraday a dielectric—is in between the plates.
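Stepping back to Faraday's two electrochemical laws stated above: in modern terms they combine into a single formula, mass deposited m = (Q / F) * (M / z), where Q is the total charge passed, F is Faraday's constant, M is the molar mass, and z is the ion's charge. A minimal sketch (the copper example is illustrative, not from the text):

```python
# Minimal sketch: Faraday's two laws of electrochemistry combined into
# m = (Q / F) * (M / z) grams deposited on the electrode.
FARADAY_C_PER_MOL = 96485.332   # Faraday's constant, coulombs per mole

def mass_deposited_g(current_a, time_s, molar_mass_g_mol, ion_charge):
    charge = current_a * time_s                        # first law: m grows with Q
    moles = charge / (ion_charge * FARADAY_C_PER_MOL)  # second law: z electrons per ion
    return moles * molar_mass_g_mol

# Plating copper (Cu2+, M of about 63.55 g/mol) with 2 A for one hour:
print(round(mass_deposited_g(2.0, 3600.0, 63.55, 2), 3))  # about 2.371 g
```

The same charge deposits half as many atoms from a doubly charged ion as from a singly charged one, which is exactly the content of the second law.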
For Faraday this meant that the electrostatic attraction which produced that force is not an action at a distance—as was generally assumed at the time—but a force conveyed by contiguous particles. In the second half of the 1830s he started a research program intended to prove these ideas about lines of force which are carried by intermediate particles (a concept later becoming important as ether), trying to disprove instantaneous action at a distance. To prove his point about the nature of electrostatic forces, Faraday constructed capacitors of different shapes and sizes, and experimented with all kinds of dielectrics. To honor his work in this area, the SI unit of capacitance (the amount of charge that can be stored in a capacitor) is called a farad. In the course of this research program, in November 1837, Faraday had a large wooden cube built, big enough to hold a person and some scientific apparatus, and had the sides completely covered with a network of conducting wires. He gives the following description of it: I went into the cube and lived in it, and using lighted candles, electrometers, and all other tests of electrical states, I could not find the least influence upon them, or indication of anything particular given by them, though all the time the outside of the cube was powerfully charged, and large sparks and brushes were darting off from every part of its outer surface. The results of his experiments in the cube enabled him to show that electricity was in fact a force rather than an imponderable fluid, as was argued by some physicists at that time. We would now call the conducting cube he constructed a Faraday cage; they are notorious among people trying to use cellular phones in buildings, and are a life-saver for car occupants in thunderstorms. - Cantor, Geoffrey N. Michael Faraday: Sandemanian and Scientist Macmillan (1991) - Explores the potential connections between Faraday's religious beliefs and his science. - Cantor, G.
N., David Gooding, and Frank A. J. L. James Michael Faraday Humanity Books, New York (1996) - A slim paperback, it still provides a good overview of Faraday as a person (including his religious beliefs), his scientific career and discoveries, and his influence; it also contains an interesting, brief, historiographical note. - Hamilton, James A Life of Discovery: Michael Faraday, Giant of the Scientific Revolution Random House, New York, (2004) ISBN 1-4000-6016-8 - Contains extensive quotations from original documents, and is well footnoted, but focuses more on Faraday the person than the details of his scientific work. - Hirshfeld, Alan W. The Electric Life of Michael Faraday Walker (2006) - A modestly-sized volume that is more weighted toward the actual science than the Hamilton volume; includes an interesting chapter on the relationship between Faraday and Maxwell. - James, Frank A. J. L. "Faraday, Michael (1791–1867)", Oxford Dictionary of National Biography, Sept 2004; online edn, Jan 2008 - Williams, L. Pearce Michael Faraday, A Biography Basic Books, New York (1965) - The first modern scholarly biography, this lengthy work covers his scientific career in considerable details, and contains extensive quotations from Faraday's original writings; each chapter ends with extensive source notes. - Simmons, John G. The Scientific 100: A Ranking of the Most Influential Scientists, Past and Present | Complete chapter on Michael Faraday, pages 59-63. Google Books preview. - Press Release, University of Bath, 25 October 2006 - As a result he has been likened to Moses, in that he brought the scientific fields he worked in to a place he himself could not enter: an age when advanced mathematics became the language of science. See Simmons, John G. The Scientific 100: A Ranking of the Most Influential Scientists, Past and Present, pp. 59-60. - Simmons, John G. The Scientific 100: A Ranking of the Most Influential Scientists, Past and Present, pg. 62. 
- Some works give the name as Ribeau, but this seems to be incorrect. - Hamilton, James A Life of Discovery: Michael Faraday, Giant of the Scientific Revolution Random House, New York, (2004), pp. 10-12. - Hamilton, James A Life of Discovery: Michael Faraday, Giant of the Scientific Revolution, pg. 12. - Faraday Heritage, Royal Institution - J.H. Gladstone, Michael Faraday, (3rd ed, 1874). online edition - Faraday biography, Institute of Chemistry at the Hebrew University of Jerusalem. - According to Discovery as Invention: Michael Faraday, this was theorized before by others, but he was the first to actually demonstrate it. - Frank A. J. L. James, "Faraday, Michael (1791–1867)", Oxford Dictionary of National Biography, Sept 2004; online edn, Jan 2008 - Hamilton, James A Life of Discovery: Michael Faraday, Giant of the Scientific Revolution, pp. 186-188. - Farndon, John et al. The Great Scientists: From Euclid to Stephen Hawking, Metro Books, New York (2007) pg. 82 - M. Faraday, The Chemical History Of A Candle (1908) online edition - M. Faraday, On the Various Forces in Nature and their relations to each other: a course of lectures delivered before a juvenile audience at the Royal Institution, (six lectures)online - Faraday biography, School of Mathematical and Computational Sciences at the University of St Andrews - A modification of Plate IV in: M. Faraday, Experimental researches in electricity, vol II, Richard and John Edward Taylor, London (1844). On line - Bence Jones, The Life and Letters of Faraday vol. II, online - M. Faraday, Experimental researches in electricity, vol I, 2nd edition, Richard and John Edward Taylor, London (1849). p. 366 On line
Conscience itself does not create norms but discovers them in the objective order of morality. When Cardinal Wojtyla became Pope John Paul II in 1978, he was well prepared to teach the Catholic faithful about ethics. As a young man Karol Wojtyla thought about a career in acting, but he felt a call to the priesthood and soon found himself immersed in the study of philosophy and theology. He was particularly attracted to the study of moral philosophy. After his ordination Father Wojtyla pursued doctoral studies, writing his dissertation on one of the foremost moral philosophers of the twentieth century, Max Scheler. He then joined the faculty at the Catholic University of Lublin in 1954. He was appointed to the prestigious Chair of Ethics at that University in 1956. During these years, Wojtyla offered popular seminars and he wrote extensively about ethical issues, often focusing on the intimate connection between ethics and anthropology. One of the moral themes that pre-occupied Wojtyla in these pre-papal writings was conscience. It is no surprise, therefore, that he would return to this theme many times in his magisterial teachings. The Pope recognized the need for a proper understanding of conscience, and he was concerned with those who sought to undermine the orthodox doctrine of conscience with more subjectivist notions. Not only has this doctrine been distorted by some revisionist theologians, who diminish the moral law’s decisive role in human development, it has also been corrupted in modern culture. In recent centuries the notion of authenticity has displaced the traditional conception of conscience. The person is supposedly guided by an “inner voice” to make authentic moral choices that are consistent with his or her particular value system. Conscience is also equated with a Freudian superego, which makes us aware of superficial and conventional social standards. 
The precursor of this idea was Nietzsche, who reduced conscience to the sublimation of instinct. In the face of all this confusion, conceptual clarity about the nature of conscience is essential. Thus, one of the aims of the Pope’s writings was to re-affirm the Church’s traditional understanding of conscience and to elaborate on Vatican II’s concise presentation on this theme. While the Pope’s treatment of conscience is generally consistent with the philosophy of Aquinas, there is a deeply spiritual dimension to his reflections that sets them apart from the tracts on moral theology used in the pre-conciliar Church. The Second Vatican Council had emphasized the need for a renewal of moral theology, which should be properly “nourished” by scriptural sources. The council, however, had little to say about this pivotal issue of conscience. Perhaps the Council Fathers would have elaborated on this matter in more precise language had they known what was looming for the Church in the wake of Paul VI’s Humanae Vitae and the claims that conscience was the ultimate arbiter of sexual morality. Some theologians have maintained that Vatican II distanced itself from moral legalism while supporting the autonomy of conscience. They have also argued that John Paul II reversed Vatican II’s revised understanding of conscience. One theologian contends that the Pope has “re-contextualized” Vatican II’s presentation of conscience in Gaudium et Spes into a “framework of law.” Where the council had highlighted “the law of love” and a “communal search for truth,” John Paul II puts unwarranted emphasis on adherence to objective norms of morality.1 Moreover, the Pope has refused to acknowledge the apparent stipulation in Gaudium et Spes that conscience is only guided by moral laws that must be flexibly applied to concrete situations.
According to this interpretation, Gaudium et Spes claims that through conscience the “objective norms of morality” function only as a “guide” for “persons and groups” in their search for truth.2 But a careful reading of the definitive Latin text of Gaudium et Spes says otherwise. First, the council is not referring to a general “law of love” in paragraph 16 but to the natural moral law which bids us to “do good and avoid evil,” a clear allusion to the first principle of the natural law. The council’s intentions in this paragraph should be apparent by its citation of Romans 2:15-16, which is the principal scriptural text for discussions of natural law. According to Gaudium et Spes, the “voice of this law…speaks to [a person’s] heart when necessary: do this, avoid that.”3 The natural law is fulfilled in loving God and our neighbor but it has a precise delineation, as evidenced by its expression in the Decalogue. Thus, Gaudium et Spes’ powerful image of conscience as the voice of the moral law strongly implies that a properly functioning conscience must be in harmony with a specific moral law which directs us to “do this” and “avoid that.” Second, the search for truth referred to in the document is not some subjective, existential quest that leads people in contradictory directions to find their own customized version of the truth. Rather, we find the general moral truth specified in the objective norms of morality, such as the moral precepts revealed in or derived from the Decalogue. These specific norms allow us to work out true solutions to specific moral problems.4 The quest for moral truth might be a communal process, but this doesn’t imply that moral truth is arbitrary or that there is room for some type of “creative acceptance” of that truth. 
Finally, these objective norms are far more than a “guide,” which is a poor translation offered for the Latin word, conformari (conform).5 According to Gaudium et Spes, “to the extent that a correct conscience prevails, persons and groups are turning away from blind choice, seeking to conform to the objective norms of morality.”6 This same view of conscience bound by precepts of the natural law is confirmed in other Vatican II documents. In Dignitatis Humanae, for example, we are informed that “man truly perceives and understands the imperatives of the divine law through the mediation of conscience.”7 God makes man a “sharer” in this divine law through the natural human law so that “he can recognize more and more the immutable truth.”8 It is difficult to get the sense from these passages that the council has disavowed the natural moral law as the foundation of morality in favor of some vague “law of love.” Nor does the council suggest that the norms of morality are merely formal standards, a useful compass for conscience that can be applied with some pliancy depending upon the circumstances. On the contrary, the dignity of conscience lies precisely in its ability to discern these unchangeable and objective norms of morality so that particular actions will conform to those norms. According to John Paul II, “it is always from the truth that the dignity of conscience derives.”9 Moral principles take the form of universal natural laws that “flow from human nature itself.” They direct us to unconditional respect for intrinsic human goods such as life, truth and marriage. These principles or laws are rules for individual cases. When the law is fully understood and has relevance for a particular case, that individual case is bound by this norm. For example, the Church teaches that direct abortion is always wrong because it is contrary to the basic human good of life. This norm applies to every individual case regardless of an individual’s set of circumstances.
It is evident from these Vatican II teachings that an immutable moral truth, objective moral criteria, serves as the foundation for the judgments of conscience. And the primary role of conscience is to apply these criteria so that one’s actions will be compatible with the moral law. Negative precepts of this universal law (such as “it is always wrong to kill an innocent person”) permit no exceptions or flexibility in their application. On the other hand, the application of affirmative precepts (such as “one is obliged to help the poor”) is more open-ended, so the person can conscientiously discern the most prudent course of action consistent with his or her particular circumstances. If one carefully assesses the council’s pronouncements on conscience it becomes apparent that John Paul II is not re-interpreting Vatican II, but making sure that Vatican II is being interpreted correctly. At the same time, he provides a welcome amplification of its pronouncements on human conscience. So what does the Pope teach about conscience and why should it interest us today? Why does he devote so much attention to this issue? Misinterpretations over the role of conscience are often at the source of the laity’s confusion about moral issues. Some Catholics still have the impression that conscience itself is the final criterion of sin. A sovereign conscience is seen as the triumph of human subjectivity and freedom. But conscience actually represents the overcoming of subjectivity because it brings us into direct contact with the moral truth revealed by God. Hence the need to clarify the notion of conscience which was not modified by Vatican II, despite these claims to the contrary. The Pope’s exposition provides a fresh perspective that is theologically profound but also imbued with a new spiritual depth. 
Wojtyla on conscience

Before we explore John Paul II’s papal teachings on conscience it is instructive to briefly consider how he treated this issue in his copious pre-papal writings. We cannot do justice to this subject in this article, but an overview will help us put his later writings in context. The Lublin ethics professor’s discussion has a secular overtone as he tries to describe how conscience functions within the inner life of the person. Wojtyla undoubtedly advanced this secularized version of conscience for two reasons: he was looking at conscience as a philosopher and not as a theologian, and he had to be cautious about references to God and divine law within the political milieu of Communist Poland. In his philosophical treatise, The Acting Person, Wojtyla ascribes several interrelated roles to conscience. First, conscience, which “judges the moral value of an action,” represents the human person’s capacity to become aware of the moral goodness or values at stake in his or her prospective actions. For example, as a person considers whether or not to seek out mortal revenge in the face of a slanderous insult, his conscience recognizes the value of human life, which rests on the truth that life is a choice-worthy, objective good. Second, conscience converts this recognition (or judgment) that “human life and health is truly good” into a duty: “I am obliged to respect the life and well-being of all persons.” The moral truth or value grasped by our practical intellect impresses itself upon us as a duty that we are obliged to follow. Finally, in its “complete function” conscience “surrenders” the will to the truth about the good which is experienced by conscience as a moral duty. This presentation is generally consistent with the Thomistic view that sees conscience not as a separate faculty or power within the self but as each person’s practical intelligence at work in the moral sphere, which takes the form of judging the rightness or wrongness of actions.
Aquinas stressed that conscience is the person’s last and best judgment of what one should choose, and the person can decide to follow that judgment or ignore it. Wojtyla makes it clear that conscience does not have the power to make its own moral laws, for “conscience is no lawmaker.” Conscience itself does not create norms but discovers them in the objective order of morality. Any suggestion to the contrary distorts the proper order determined by the Creator. The function of conscience is not to shape moral norms but to recognize the normative power of moral truth and to submit the will to that truth. According to Wojtyla, truthfulness is “the keystone of the whole structure.” Once conscience apprehends the truth of moral norms the person appreciates that these norms are not alien to him or imposed from the outside. Rather they orient the person to his or her own good and personal fulfillment. For example, when a person’s conscience perceives the intrinsic value of marriage, that person subscribes to the norm forbidding adultery not because it is a burdensome external command, but because following this norm is in harmony with his or her nature and the objective order of goodness willed by God. If conscience does have a “creative” role, it is in shaping our moral convictions. Conscience enables us to internalize and personalize the moral truth. Through conscience we appreciate the specific bearing the moral norms have on our lives for our own personal growth and emotional stability. Thanks to conscience we can determine how to best follow those affirmative norms and give them prominence in our lives. Conscience molds the moral norm into the unique and distinctive form which it takes within the experience and life of each person. In summary, conscience, for Wojtyla the philosopher, is the exercise of practical intelligence as it makes judgments about the moral value of our actions. 
Through conscience unencumbered by the darkness of sin the human person constantly strives to grasp the moral truth. Conscience also enables us to realize the normative power of these truths (such as “life and health is an intrinsic human good”), which are transformed into duties that take the form of specific moral norms. Conscience surrenders the will to the moral truth apprehended by our practical intelligence, and so it is always subordinate to that truth.

Conscience in the encyclicals

Pope John Paul II spoke frequently about conscience in his homilies and talks. He exhorted his listeners to purify and form their consciences according to the teaching of Jesus Christ. He also wrote extensively about conscience in two of his fourteen encyclicals: Dominum et Vivificantem and Veritatis Splendor. In these papal writings he departs from the strictly philosophical approach used in his early writings, but he continues to articulate the same issues about conscience as the working of practical intelligence whereby the person discovers and submits to the moral truth. In keeping with the teaching of Gaudium et Spes, he also insists that this truth is objective and based on the natural law known to some extent by every human person. These writings also bear some affinity to Cardinal Ratzinger’s treatment of conscience, since they emphasize the transcendence of conscience, which brings us into contact with the divine. For John Paul II, the moral life represents a powerful interfusion of the human and the divine, and this notion is pervasive in the Pope’s discussions on conscience. Following Aquinas, he maintains that the natural law is a participation in the divine law. The unique authority of the natural law, inscribed in our nature, cannot be questioned because its source is the Creator himself. Similarly, conscience is a way for God to speak to all men and women in the depth of their hearts through his law.
The Pope’s elaborate account of conscience in Veritatis Splendor (§ 54-64) begins with an assessment of the erroneous teachings of some theologians who have reinterpreted the function of conscience by emphasizing its “creative character.” In their view, conscience does not make judgments but “decisions.” The norms of morality provide only a “general perspective,” which should influence but not necessarily determine a person’s decision. Hence, according to these accounts, the person has considerable leeway to interpret all of the moral norms within the context of his or her concrete situation. According to this perspective, personal moral decisions must be made “autonomously” so that “man can attain moral maturity.” This specious conception of conscience, however, challenges its very identity and certainly conflicts with well-established Church doctrine on this matter. On the contrary, the Pope once again insists that the proper function of conscience is twofold: making us aware of moral truth and making judgments which apply that truth to resolve specific moral questions. But conscience is more than the exercise of our practical intelligence in applying universal rules to specific actions. The Pope turns to a passage from St. Paul’s letter to the Romans (2:1-16) to deepen our understanding of conscience so that we can appreciate its transcendent characteristics. In the passage cited by the Pope, Paul explains that what the moral “law requires is written on their hearts, while their conscience bears witness and their conflicting thoughts accuse or perhaps excuse them” (Rom. 2:14-15). Thus, conscience is also a “witness” for man, a witness of God’s caring love that directs a person’s activities toward his or her own flourishing (and ultimately toward union with God). The dialogue of man with himself is indirectly a dialogue with God, since he is the author of the moral law. 
Even the non-believer who hears the moral law echo within hears God’s Word and implicitly engages in a dialogue with God as he ponders what to do. Conscience, therefore, is a judgment by man and about man. First, conscience makes evident what a person must do or not do, and so it is prospective. A person can choose to follow this practical judgment made by conscience or ignore that judgment and act otherwise. Second, when conscience assesses an act already performed by the person, it has a retrospective role to play. The person may have a “guilty conscience” because he knows that he made the wrong moral choice and should have done otherwise. In its retrospective capacity, conscience is “a moral judgment about man and his actions, a judgment either of acquittal or of condemnation” depending upon whether one’s action was “in conformity with the law of God written on the heart.” This verdict of conscience remains within the person as a “pledge of hope and mercy.” The transcendence of conscience is a further sign of its unimpeachable dignity, since conscience conveys to us a message from God who reveals himself through his moral law inscribed in our hearts. Moral reason’s awareness of this law is a sign of God’s immanence. God is with all persons, whether they realize it or not, through the work of conscience which brings them face to face with the divine Logos in the form of the moral law. According to the Pope, conscience is “the most secret core and sanctuary of a man, where he is alone with God, whose voice echoes in his depths.” The faithful need to be taught about how to form their consciences, but they also need to be continually reminded about the proper role of conscience. Conscience should never be confused with a superego, misconstrued as the source of repressed guilt, or conflated with some mysterious “inner voice” that calls us to greater authenticity.
Nor should conscience be invoked to validate society’s retreat from universal principles in favor of subjective or variable standards. Conscience expresses itself in informed judgments, not independent and arbitrary decisions. According to the Pope, “conscience is not an independent and exclusive capacity to decide what is good and what is evil.” Conscience must act in conformity with the objective moral norms given to us by God, or it is in error. Conscience is always subordinate to moral truth. It is crucial, therefore, to dispel the myth that conscience is a lawgiver or represents the final subjective determination of what is good or evil. It is equally vital to repudiate any notion that Vatican II dramatically altered the Church’s teaching on conscience. Admittedly, the council’s exposition is thin, but Pope John Paul II has enriched our understanding of the doctrine of conscience, and his authoritative teachings cannot be casually dismissed. What’s distinctive about Wojtyla’s treatment of conscience is its emphasis on the moral truth that directs the well-formed conscience. Conscience cannot be properly understood apart from the moral laws or duties derived from our apprehension of values. In the encyclicals, which give prominence to the spiritual and transcendent dimension of conscience, we learn about the divine origin of this law. When conscience functions correctly it reveals the moral law, which acts like a “herald and messenger” from God himself. We must bear in mind that when we hear the voice of conscience within us we hear the voice of God who is calling us to be true to ourselves as human beings made in his image. Conscience, therefore, is far more than a rational judgment process. Through conscience man opens himself to the demands of moral truth and the voice of God who speaks to us through that truth. 
Conscience provides each person with the capability to transcend his own ego so that he can grasp those objective goods that perfect all persons and lead ultimately to the fullness of life. Above all, pastors and teachers must re-emphasize that the moral life is no place for ungrounded freedom and experimentation. One of the modern heresies boldly confronted by John Paul II in many of his encyclicals is humanity’s inflated notion of freedom coupled with a Pelagian confidence in the powers of the creative self. On the contrary, the Pope emphasizes man’s receptive freedom that first receives the Word and acts accordingly. We do not spontaneously create the moral law, and thereby assume responsibility for our own moral or social evolution. Rather, we receive this law as a gift from God. Conscience should help each person to mirror his or her moral life on the fiat of Mary, which is anterior to all human activity. According to the Pope, Mary “became the model of all those who hear the Word of God and keep it.” Moral creativity should image the “perfect” creativity of Mary’s humble Magnificat, which calls us to first acknowledge the great things God has done for us by giving us the Word. Through the mediation of conscience we accept the moral doctrine that is an integral part of this Word as a gift of truth. All human persons are called to make their best effort to submit their will to that truth. Accordingly, conscience must always be receptive and submissive to universal moral values before it is ever “creative.” The creativity of conscience hinted at in The Acting Person can take several valid forms, as conscience determines the optimal means of following affirmative moral norms. Conscience can also move the person to have strong moral convictions and to be proactive. For example, someone might be inspired by his conscience to devote his life so that a particular norm such as “respect for all innocent human life” is better protected by law and esteemed by the state. 
Finally, thanks to a mature conscience any person can become acquainted with the universal moral law whose author is God himself. Conscience is the mysterious place where even the non-believer is cognizant of a compelling moral truth, even as he is tempted to take flight from that truth and seek refuge in the false security of satisfying his subjective whims. Although this person is unaware that he hears God’s voice above the din of secular culture, perhaps the way is now open for his evangelization. With God’s grace, conscience can gently lead someone on the first steps of a spiritual journey that ultimately points homeward to the fullness of truth abiding in the Incarnate Word and his Church.

- Mary Elsbernd, “The Reinterpretation of Gaudium et Spes in Veritatis Splendor,” Horizons 19 (2) (2002): pp. 233-234.
- Ibid., p. 234, quoting Gaudium et Spes, par. 16.
- Gaudium et Spes, Acta Apostolicae Sedis 58 (1966), 1025-1115, par. 16 (my translation).
- While the Abbott translation used by Elsbernd says that “Christians are joined with the rest of men…in the search for genuine solutions” to moral problems, the Latin text reads “in veritate solvenda,” more accurately rendered as “true solutions” (my emphasis). See Abbott’s translation of Gaudium et Spes in Walter Abbott, S.J. (ed.), Documents of Vatican II, New York: Guild Press, 1966.
- See the Abbott translation of Gaudium et Spes, par. 16.
- Gaudium et Spes, 16 (“Quo magis ergo conscientia recta praevalet, eo magis personae et coetus a caeco arbitrio recedunt et normis objectivis moralitatis conformari satagunt;” my emphasis and translation).
- Dignitatis Humanae, 3, Acta Apostolicae Sedis 58 (1966), 929-946.
- Dignitatis Humanae, 3. In a footnote to the Latin text (Dignitatis Humanae, n. 3) the Council Fathers cite St. Thomas Aquinas’ classic treatment of the natural law, which explains how everyone knows the “unchangeable truth…the common principles of the natural law.” See Summa Theologiae, I-II, q. 93, a. 2.
- Pope John Paul II, Veritatis Splendor, Boston: Pauline Books, 1993, 62.
ABSTRACTS - STTR

This project is producing inexpensive, direct reading biosensors, capable of real-time measurement of bacterial contamination in sensitive areas, such as food and pharmaceutical facilities and water treatment plants. During Phase I, Protein Solutions demonstrated the feasibility of detecting ATP in quantities as small as 10⁻⁹ g (2×10⁻¹² moles) both visually and on photographic film. The technology employs the firefly luciferase catalyzed light producing reaction between luciferin and ATP. They are able to produce a spatial light pattern which indicates the concentration of ATP present in a sample. In this project they focus on increasing the sensitivity of the direct reading ATP sensor to levels more suited to bacterial detection (10⁻¹² to 10⁻¹³ g). This increase in sensitivity is being achieved by three primary means: (1) increasing the intensity and/or duration of the luminescence for a given quantity of ATP; (2) increasing the amount of light that reaches the detector (film or otherwise); and (3) optimizing the non-instrumented detector for the system. Each of these objectives should contribute at least one order of magnitude increase in absolute ATP sensitivity. The potential commercial applications as described by the awardee: Rapid, simple, inexpensive, and reliable measurement of bacterial contamination will facilitate industrial compliance with safe food and dairy practices. The sensors can also be applied to a range of medical and pharmaceutical environments and products. Eventually, marketing efforts will lead to consumer use in the home for the sanitary monitoring of kitchen and bathroom surfaces.

The chemical analysis of mixtures is commonly required in chemistry, materials science, biotechnology, and environmental science. If the application demands detection, identification, and quantification of more than a very few substances in a mixture, then chemical separation is usually required.
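Returning to the ATP biosensor abstract above: its mass and mole figures can be sanity-checked directly. The molar mass of ATP (~507 g/mol) and the per-cell ATP content (~1 fg per typical bacterial cell) used below are common literature values, not numbers from the abstract.

```python
# Sanity check of the detection figures quoted in the ATP biosensor
# abstract. Assumptions (not from the abstract): molar mass of ATP
# ~507.2 g/mol; a typical bacterial cell contains ~1 fg (1e-15 g) of ATP.
ATP_MOLAR_MASS = 507.2  # g/mol

def grams_to_moles(grams):
    return grams / ATP_MOLAR_MASS

def cells_equivalent(grams, atp_per_cell_g=1e-15):
    """Very rough number of bacterial cells containing this much ATP."""
    return grams / atp_per_cell_g

phase1_sensitivity = 1e-9  # g, demonstrated in Phase I
phase2_target = 1e-12      # g, upper end of the Phase II target range

print(f"{grams_to_moles(phase1_sensitivity):.1e} mol")  # ~2e-12 mol, as quoted
print(f"{cells_equivalent(phase2_target):.0f} cells")   # order of 1000 cells
```

On these assumptions the Phase II target corresponds to roughly a thousand bacterial cells, which suggests why the abstract calls that range "more suited to bacterial detection."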
Most current methods of separation are either very slow or are limited to only a small portion of the mixture. The objective of this research is to develop much faster instrumentation for the chemical analysis of moderately complex mixtures. The method employs two independent chemical separations applied in series to a mixture in such a way that the whole sample passes through both separations, generating a separation in two dimensions rather than one. The second of the two separations operates so quickly that it can be applied repeatedly to each small portion of the mixture emerging from the first separation. This method, comprehensive two-dimensional gas chromatography, has been demonstrated in the research laboratory. The potential commercial applications as described by the awardee: This research project will develop a rugged instrument for commercial application. The instrument will detect, identify, and quantitate volatile organic substances in mixtures containing about 100 components within five minutes or less.

This project is developing a capacitively coupled microwave plasma (CCMP) as a microsample excitation plasma for multi-element determinations with the detection limits, accuracy, precision, and sample size requirements of modern graphite furnace atomic absorption spectrometry, using atomic emission spectrometry. Currently, there are no widely applicable analytical methods for multi-element determinations on microsamples. In the unique CCMP approach the helium plasma supported on an electrode in the microwave field envelops a tungsten cup containing the liquid (2-10 microliters) or solid (ca. 1 mg) sample. The transient emission is detected with an echelle spectrometer using a Charge Injection Device (CID) detector. Detection limits are in the low pg range with 10 microliter liquid samples (0.5-5 ppb). Studies using the Thermo Jarrell Ash IRIS echelle spectrometer indicated that the CCMP is well suited for multi-element detection.
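The detection-limit figures just quoted for the CCMP follow from a simple unit identity: for dilute aqueous samples, 1 ppb by mass is approximately 1 ng/mL, i.e. 1 pg/µL. A minimal check:

```python
# Unit check for the CCMP detection limits quoted above: in dilute
# aqueous samples (density ~1 g/mL), 1 ppb by mass ~= 1 ng/mL = 1 pg/uL.
def ppb_in_water(mass_pg, volume_ul):
    """Concentration in ppb for mass_pg of analyte in volume_ul of water."""
    return mass_pg / volume_ul  # pg/uL is numerically equal to ppb

# Low-pg amounts in a 10 uL sample span the quoted 0.5-5 ppb range:
print(ppb_in_water(5, 10))   # 0.5 ppb
print(ppb_in_water(50, 10))  # 5.0 ppb
```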
Phase II will include the design of an optimum optical interface between the plasma and the echelle spectrometer, optimization of the CID data acquisition for transient signals, refinement of the electrode design, redesign of the power supply to allow more reproducible computer control of the microwave field, and a comprehensive study of potential interferences in a variety of matrices, using both liquid and solid samples. The potential commercial applications as described by the awardee: The unique combination of multi-element capability, high sensitivity, low cost, and ease-of-use results in substantial potential markets in clinical research, medical diagnostics, and routine laboratory analysis.

The design, fabrication, and testing of an innovative chemiluminescence (CL) based prototype analyzer capable of continuous chlorine monitoring in water are under way. Such an instrument permits precise control of chlorination processes, reducing the health risks caused by the formation and release of chlorinated by-products associated with over-chlorination, particularly in drinking water. Current analytical instruments are poorly suited to this task due to interferences, unstable reagents, time-dependent responses, operator-dependent results, poor reproducibility, and expense. The CL analyzer will meet the need for a sensitive, accurate, highly selective, and inexpensive instrument that is easy to use with a minimum of operator intervention. This innovative new technology combines the intrinsic selectivity and sensitivity of CL with the reproducibility and ease of operation afforded by the use of solid phase modules for control of the pH, luminol concentration, and as a means to calibrate the analyzer. Solid phase beds eliminate the need for reagent preparation, and can be quickly and easily exchanged after long periods of operation (>30 days).
Monochloramine, the most common interference in colorimetric analysis for chlorine, produces no CL response in the chlorine analyzer. Operation conditions, component hardware, solid phase bed design, and CL detection efficiency will be refined based on performance and commercial potential. Methods will be developed for quantitation of total chlorine and reagentless calibration. A prototype will be built and tested on real water samples, forming the basis for commercialization of the technology. The potential commercial applications as described by the awardee: Demonstration of the Reagentless CL Chlorine analyzer prototype will directly lead to the commercialization of this readily marketable technology. There is a need for this type of analyzer for process control in drinking water distribution systems and municipal wastewater systems throughout the country to mitigate the formation of chlorinated organic by-products due to over-chlorination.

This project addresses the technical merit and feasibility demonstration of novel integrated optical waveguide devices using the emerging technology of electro-optic polymeric materials. Such devices would offer significant benefits for optical communication, signal processing, computing and sensing applications. The overall objective of Phase I is to determine the technical merit and feasibility of incorporating an electro-optic polymer on a silicon substrate to form integrated optic waveguide devices. The program investigates both electro-optic polymers with established properties and electro-optic polymers at the forefront of development which have potential for improved performance. The polymers developed in the program in polymer research laboratories at the University of Cincinnati will be incorporated in the later stages of the device fabrication process.
To demonstrate feasibility in this Phase I effort, new electro-optic organic polymers and novel opto-electronic device technology are being combined to demonstrate a commercially viable electric field sensor. The potential commercial applications as described by the awardee: Applications include external modulation of lasers for communication and CATV, modulator arrays for data networks, optical network units in Fiber-to-the-Home, hybrid integration of silicon ICs and photonic circuits, A/D and D/A converters, and voltage and electric field sensors.

The development of a polarization insensitive liquid-crystal Fabry-Perot (LC-FP) optical filter for wavelength-division-multiplexing (WDM) communication systems is investigated. It is well known that the LC-FP is a high-performance filter due to its large tuning range, high finesse, and low-voltage operation. However, because the modulation is based on optical anisotropy, only the extraordinary wave can be modulated in the FP resonator. This results in a polarization sensitive filter that greatly reduces its "field application" in WDM networks. Although polarization-diversity has been proposed and demonstrated to overcome this problem, it required polarization splitting/combining at the input/output ports and resulted in what effectively constitutes two FP cavities. This increases packaging and production difficulties, making high-volume commercial device manufacturing impractical. This project, based on a new invention by the principal investigator, uses polarization optics to manipulate the arbitrarily polarized light in the FP cavity. By positioning the liquid crystal phase modulator in between crossed quarter-waveplates within the FP resonator the filter becomes polarization independent. The device structure is simple and mass producible. Successful completion of this STTR program can resolve the polarization-sensitive drawback of the LC-FP filter and widen its application in WDM networks.
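For orientation on the LC-FP abstract above, the standard Fabry-Perot relations connect cavity length, free spectral range, and passband width. The cavity parameters below are illustrative assumptions, not values from the abstract.

```python
# Standard Fabry-Perot filter relations. All cavity values below are
# illustrative assumptions, not figures from the STTR abstract.
C = 299_792_458.0  # speed of light, m/s

def free_spectral_range_hz(n, cavity_length_m):
    """FSR = c / (2 n L) for a cavity of refractive index n and length L."""
    return C / (2.0 * n * cavity_length_m)

def passband_fwhm_hz(n, cavity_length_m, finesse):
    """Filter bandwidth: FWHM = FSR / finesse."""
    return free_spectral_range_hz(n, cavity_length_m) / finesse

# Hypothetical liquid-crystal cavity: n = 1.5, L = 10 um, finesse = 100
fsr = free_spectral_range_hz(1.5, 10e-6)
print(f"FSR  ~ {fsr / 1e12:.1f} THz")                                # ~10 THz
print(f"FWHM ~ {passband_fwhm_hz(1.5, 10e-6, 100) / 1e9:.0f} GHz")   # ~100 GHz
```

For a fixed free spectral range, a higher finesse narrows the passband, which is why the abstract singles out "high finesse" as a virtue for WDM channel selection.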
The potential commercial applications as described by the awardee: The primary commercial applications for this filter are cable TV broadcasting, fiber-in-the-loop WDM networks, long distance telecommunication, and, in the future, all-optical networks. It can also be applied to spectroscopic analysis and fiber-based environmental sensor applications.

PD-LD, Inc., in cooperation with the ATC/POEM (Advanced Technology Center/Princeton Opto-Electronic Materials) of Princeton University, has recently developed a totally new method for the growth of thin films of complex organic compounds, called Organic Vapor Phase Deposition (OVPD). This new method was successfully used to deposit thin films of chemically pure DAST, a material with optical nonlinear properties among the best reported. On the basis of these results, PD-LD, Inc., with its President, Dr. Vladimir S. Ban, as the PI and ATC/POEM with its Director, Professor Stephen R. Forrest, as the Chief Adviser, are developing a prototype fiber optic modulator based on thin films of DAST deposited by OVPD on a silicon optical bench configured for efficient coupling to optical fibers. This addresses for the first time the integration of waveguides of highly nonlinear organic materials with silicon optical benches, which might be populated with standard fiber optic components, such as laser diodes, detectors, optical fibers, etc. Thus, this project represents an important step toward fully integrated opto-electronic integrated circuits (OEICs), where different materials will be combined for the optimal performance of various functions, such as light generation, light guiding, light modulation and light detection.
The potential commercial applications as described by the awardee: Markets for high performance fiber optic modulators are growing rapidly, and since DAST-based devices should have superior performance and lower prices than the competing products typically employing LiNbO3, PD-LD hopes to convert developments of this STTR into a very successful business.

This project demonstrates the feasibility of a continuous process for producing rugged, low moisture content, high bandwidth, gradient index plastic optical fiber. This type of fiber is expected to have great utility in high-speed local area networks. Existing Japanese production methods result in bandwidths which are three to ten times less than theoretically possible, and the bandwidth of the fiber is not stable and reproducible. The production rate is intrinsically limited by the batch nature of the process and/or multistep procedure. In the U.S., the High-Speed Plastic Network (HSPN) consortium was formed in 1994, and is supported by a $5M ARPA grant. A member of that team has licensed one of the Japanese batch technologies to fabricate the above type of fiber. The present project is to develop a novel, low cost, continuous production process for the fiber. The fiber will have a stable bandwidth of >2 GHz·km, be stable from −40°C to +150°C, have low moisture uptake, attenuation of less than 150 dB/kilometer, and be stable to radiation exposure up to 10³ rad. The potential commercial applications as described by the awardee: Local area networks have inadequate bandwidth for the anticipated needs in the coming years. Gradient index plastic optical fiber will be the most cost effective solution with adequate bandwidth. At present, there is no satisfactory production process for a rugged version of this fiber. The market is anticipated to grow to at least one billion meters per year.

Opto-electronic devices require materials with controlled environments surrounding individual active ions.
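The 150 dB/km attenuation target quoted for the plastic optical fiber above translates directly into a reach limit once a power budget is assumed. The launch power and receiver sensitivity below are hypothetical values, not figures from the abstract.

```python
# Reach estimate implied by the plastic-fiber attenuation figure above.
# With loss alpha in dB/km, the link power budget fixes the longest
# usable span. Launch power and receiver sensitivity are hypothetical.
def received_power_dbm(launch_dbm, alpha_db_per_km, length_km):
    return launch_dbm - alpha_db_per_km * length_km

def max_link_km(launch_dbm, rx_sensitivity_dbm, alpha_db_per_km):
    """Longest link that still meets the receiver sensitivity."""
    return (launch_dbm - rx_sensitivity_dbm) / alpha_db_per_km

# 0 dBm launch, -20 dBm receiver, 150 dB/km fiber:
print(f"{1000 * max_link_km(0, -20, 150):.0f} m")  # ~133 m
```

A reach on the order of a hundred meters is comfortably within local-area-network distances, consistent with the application the abstract targets.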
The inability to achieve precise control lowers efficiency, creates damage sites, and reduces the overall reliability of the system. Inhomogeneities which cause the environmental changes result from two sources: impurities and self contamination. In many crystals this second factor is more significant than impurities. This project attempts to measure this self-contamination on a submicron basis. Crystals will also be grown to demonstrate control of this defect at the minimum levels possible. Scientific Materials has the ability to generate process control to the near millisecond level, which relates to crystal growth on the atomic level. The techniques for evaluation of crystals are currently limited to the one micron scale. The work investigates the possibility of developing a submicron characterization system using NMR, EPR, or ODNMR imaging. The potential commercial applications as described by the awardee: Commercial application is in the area of solid-state laser optical memories, holography, telecommunications, semiconductors, and nonlinear materials.

Network capacity is expected to grow more than an order of magnitude by the year 2000. To support this explosive growth new networking technologies are required. All-optical wavelength conversion and switching are two functions that have been identified as key building blocks for future high capacity networks. An integrated all-optical wavelength converter for use in all-optical switches and other applications is being developed. The effort will leverage Optivision's extensive experience gained from building and deploying optical crossbar switches based on semiconductor optical amplifiers and USC's cutting edge research in all-optical wavelength converters. Recently, wavelength conversion has been demonstrated with semiconductor optical amplifiers based on cross gain compression, cross phase modulation, or four wave mixing.
The effort will begin by developing a set of switch requirements based on interaction with all-optical testbeds and high-end users, then evaluating the various wavelength conversion techniques against these requirements. To demonstrate the feasibility of the approach, Optivision will perform both detailed modeling of the expected performance and a proof-of-concept experiment. They will also investigate the complexity and tradeoffs of fabricating integrated all-optical wavelength converters. The Phase II effort involves the fabrication, laboratory evaluation, and deployment into a testbed of integrated all-optical wavelength converters. The potential commercial applications as described by the awardee: Integrated all-optical switching wavelength converters will be required in high capacity wavelength division multiplexed networks such as future cable television, telecommunication, and data transmission systems. The performance of virtually all modern opto-electronic devices depends critically upon the ability to grow multi-layered, thin-film structures whose composition and thickness are precisely controlled. In production-scale deposition processes, these cannot now be monitored and adjusted in real time. Ion Optics will overcome the problem with a dual-light-source, fiber-optic, thickness and composition monitor operating simultaneously as an interferometer (measuring growth rate) and as a reflectometer (measuring composition and total thickness). A major advantage of such an instrument is its ability to provide the data needed to make instantaneous composition changes during growth to compensate for the effects of diffusion. Diffusion is an important factor when very thin adjacent layers of dissimilar composition must be deposited at relatively high temperature, as is the case for multiple quantum well devices.
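The interferometric half of such a monitor infers growth rate by counting interference fringes in the reflected probe; at normal incidence each full fringe corresponds to λ/(2n) of deposited material. A small sketch of that conversion (names and example values are illustrative, not from the award):

```python
def thickness_per_fringe_nm(wavelength_nm: float, refractive_index: float) -> float:
    """Film thickness grown per full interference fringe at normal
    incidence: λ / (2·n), the round-trip optical path condition."""
    return wavelength_nm / (2.0 * refractive_index)

# e.g. a 633 nm HeNe probe on a film with n = 2.0 gives ~158 nm per fringe,
# so counting fringes per second gives the growth rate directly.
print(thickness_per_fringe_nm(633.0, 2.0))
```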
Phase I combines laser and white-light sources in a single diagnostic instrument compatible with high temperature processes, immune to electromagnetic noise (a by-product of rf wafer heating), and free of the requirement for precise optical alignment. The basic concept of the fiber-optic monitor has been demonstrated in rudimentary pulsed laser experiments on a silicon nitride reactor at Brown University; the principal investigator has successfully extracted real-time layer composition from white-light reflectance spectra. Phase I extends this earlier work to a practical dual-source configuration fast and accurate enough to track thin-film growth at rates typical of advanced devices. The potential commercial applications as described by the awardee: Feedback control is key as new opto-electronic devices place greater demands on growth processes; the annual market for reasonably priced, easy to use thickness and composition monitors will be several million dollars. A less capable version would compete with quartz deposition gauges, with a market of tens of millions per year. The research will develop thin film electroluminescent displays on plastic substrates. The zinc gallate (ZnGa2O4) phosphor host will be grown by metallorganic chemical vapor deposition (MOCVD) at ≤425°C on a high-temperature Kapton substrate. Luminescent centers will be introduced by ion implantation. Phase I will demonstrate that oxide phosphors can be deposited by MOCVD at temperatures compatible with Kapton, a plastic usable up to ≈450°C. Because Kapton and other polyimide materials are not optically clear, structures having light emission from the top (nonsubstrate) side will be developed. Substrate temperature will be controlled by water cooling during ion implantation of luminescence centers. Glass substrates will be included as experimental controls.
Spire will compare the quality of MOCVD-grown ZnGa2O4 films on plastic and glass substrates by x-ray diffraction and electron microscopy, and photoluminescence and cathodoluminescence tests will be performed on annealed ZnGa2O4 films. The University of Florida Department of Materials Science will then fabricate and test electroluminescent devices on both glass and plastic substrates. The potential commercial applications as described by the awardee: This research will result in the capability to fabricate flat-panel EL displays on plastic substrates for rugged, bright, lightweight, hand-held information terminals to provide better graphical communication to personnel in demanding environments. An imaging system based on ultra-wideband electric-field sensors is being investigated and developed. The device is based on a new opto-electronic design which is capable of time-domain far-infrared spectroscopy across a frequency range extending from near DC to several THz. Fundamentally, the electric-field sensor system is based on the linear electro-optic effect (Pockels effect) in electro-optic crystals where a pulsed microwave signal acts as a transient bias to induce a transient polarization in the sensor crystal. This polarization is then probed by a synchronously pulsed laser beam, and the spatial and temporal electric-field distribution is projected onto a CCD camera by the laser. Previous studies of these sensors have demonstrated a sub-wavelength spatial resolution, femtosecond temporal resolution, near DC-THz bandwidth, sub-mV/cm field sensitivity, up to 100 Hz scan rate, and a signal-to-noise ratio better than 10,000:1. The electro-optic detection has a flat (nonresonant) spectral responsivity (from near DC to several THz), and an extended dynamic range (> 1,000,000).
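The field sensitivity of such a sensor follows from the Pockels effect itself: the transient field induces an index change Δn = ½·n³·r·E, which the probe beam accumulates as a phase shift over the crystal length. A minimal sketch, with illustrative material numbers that are not taken from the abstract:

```python
import math

def pockels_phase_shift(wavelength_m, n, r_m_per_v, e_field_v_per_m, length_m):
    """Phase retardation from the linear electro-optic (Pockels) effect.
    Δn = ½·n³·r·E, so over a length L: Δφ = (2π/λ)·Δn·L = (π/λ)·n³·r·E·L."""
    return (math.pi / wavelength_m) * n**3 * r_m_per_v * e_field_v_per_m * length_m

# Assumed example: an 800 nm probe in a 1 mm crystal with n = 2.85 and
# r = 4 pm/V, sensing a 100 kV/m transient field.
shift = pockels_phase_shift(800e-9, 2.85, 4e-12, 1e5, 1e-3)
print(shift)   # a small but measurable fraction of a radian
```

The shift is linear in E, which is why the readout maps field amplitude directly onto optical phase.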
The simplicity of the detection geometry, capability for optical parallel processing, and excellent signal-to-noise ratio make this system suitable for real-time, 2-D coherent far infrared imaging applications. The potential commercial applications as described by the awardee: Commercial applications of this research are in the areas of FIR spectroscopy, electric field sensor, and medical imaging. This project describes two novel designs for diode-pumped, compact, efficient, blue laser sources. Such sources are needed in many applications including high density optical data storage, laser printing, and free space optical communication. Presently, such compact and efficient sources are not commercially available for these applications. The designs described in this project are based on upconversion lasing in rare-earth doped fluorozirconate (ZBLAN) glass fiber. The first is a Pr/Yb co-doped fiber which has previously been demonstrated as a laser operating at red, orange, green and blue wavelengths. The blue laser output power, however, was severely limited due to competing transitions from a common upper laser level. The solution to the problem is described and is being tested and developed for this project. The second upconversion blue fiber laser to be developed in this project is based on Tm doped ZBLAN fiber. This laser has also been demonstrated, but lacks a convenient, scalable, diode-based pump source for commercialization. In this project, a novel and efficient pump source will be developed and demonstrated. Prototype laser systems for each of the two designs mentioned will be constructed and tested for future development. The potential commercial applications as described by the awardee: Applications of this research are in the areas of optical data storage, printing applications, free space optical communications, optical display, spectroscopy, flow cytometry, molecular biology, high performance imaging, and semiconductor inspection. 
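Upconversion lasing of the kind described works by pooling the energy of two (or more) absorbed pump photons into one shorter-wavelength emitted photon; since photon energy is hc/λ, the best-case output wavelength follows from simple reciprocal addition (real Pr/Yb and Tm schemes lose some energy to intermediate levels, so this is an upper bound; the function name is illustrative):

```python
def upconverted_wavelength_nm(pump1_nm: float, pump2_nm: float) -> float:
    """Shortest wavelength obtainable when the energies of two absorbed
    pump photons add: 1/λ_out = 1/λ₁ + 1/λ₂ (from E = hc/λ)."""
    return 1.0 / (1.0 / pump1_nm + 1.0 / pump2_nm)

# Two 960 nm pump photons can, at best, yield a 480 nm blue photon:
print(upconverted_wavelength_nm(960.0, 960.0))
```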
A novel approach for the fabrication of highly efficient Faraday-active optical waveguide structures is planned. Faraday-active waveguides are presently of great interest as their development is the critical enabling technology associated with the demonstration of all-fiber optical circulators. Such devices, which may be characterized as multiport nonreciprocal polarization rotators, provide a means by which the telecommunications rate may be immediately doubled on the existing optical fiber carrier infrastructure. Successful implementation will lead to full-duplex operation over "long haul" fiber carriers. The suggested circulator approach is passive, and does not therefore require external clocking controls. Separation of the signals is based only upon propagation direction; no additional losses are imposed on transmitted signals, as in the case of conventional directional couplers. The investigators use advanced thin-film techniques in the development of optical fiber segments. These processes have been previously shown to promote the introduction of photonically-active dopant species at dopant levels which are orders-of-magnitude greater than may be produced by conventional means. The technology will replace conventional bulk optics technologies, as waveguide structures with dramatically improved figures of merit will be developed and characterized during the initial Phase I period. The Phase I program also includes analysis of integrated permanent magnet structures, and a design assessment for optical circulator prototype development. The potential commercial applications as described by the awardee: The development of high Verdet constant materials offers the potential for achieving optical elements which double the throughput of fiber communication links. In addition, numerous other commercial uses exist for these materials.
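The figure of merit mentioned is the Verdet constant V, which sets the Faraday rotation θ = V·B·L; a nonreciprocal element such as a circulator or isolator typically needs 45° of rotation, so a higher-Verdet waveguide shortens the device proportionally. A sketch with assumed, illustrative values:

```python
import math

def faraday_rotation_deg(verdet_rad_per_t_m, b_field_t, length_m):
    """Faraday rotation angle θ = V·B·L (radians), returned in degrees."""
    return math.degrees(verdet_rad_per_t_m * b_field_t * length_m)

def length_for_isolation_m(verdet_rad_per_t_m, b_field_t):
    """Length giving the 45° rotation a circulator/isolator element needs."""
    return math.radians(45.0) / (verdet_rad_per_t_m * b_field_t)

# With an assumed Verdet constant of 1 rad/(T·m) and a 0.5 T permanent
# magnet, over a metre of fiber is needed; a material with a Verdet constant
# three orders of magnitude higher shrinks this to millimetres.
print(length_for_isolation_m(1.0, 0.5))
print(length_for_isolation_m(1000.0, 0.5))
```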
This project investigates metal-organic chemical vapor deposition (MOCVD) to improve the performance of the blue-emitting layer used for full-color thin film electroluminescent (TFEL) displays. TFEL displays have demonstrated performance advantages compared with active matrix liquid crystal displays (AMLCDs) including wide viewing angle, wide operating temperature range, fast response time, and inherent ruggedness. Commercialization of TFEL displays has been impeded by the insufficient luminance and efficiency of the blue-emitting EL phosphor films. MOCVD has demonstrated the capability of growing crystalline binary and ternary sulfide phosphors at temperatures less than 600°C, eliminating the need for costly high-temperature substrates. Two approaches are being pursued for improving the blue EL material performance. The MOCVD process for the SrS:Ce phosphor is optimized for luminance and emission intensity in the blue spectral region. The addition of codopants is being investigated to compensate for SrS lattice defects. Secondly, MOCVD of cerium doped gallium sulfide is being investigated as a potential new blue EL phosphor. The best performing blue-emitting material will be selected for process scale-up to commercial-size TFEL display panels in Phase II. The potential commercial applications as described by the awardee: Potential commercial applications include full-color emissive flat panel displays for use in portable computers, industrial process control, instrument and medical electronics, and telecommunications. Growth of highly electro-optic nonlinear materials can provide the key for enabling the production of efficient optical components such as tunable filters and modulators for optical communications. In addition, these devices can be used as a building block for systems including optical sensors and interferometers.
Currently available nonlinear materials such as LiNbO3, BaTiO3, and PLZT do not have the necessary physical constants to make efficient devices in terms of power consumption and driving voltages required for operations. Strontium Barium Niobate provides materials with extremely large electro-optic coefficient (1380 pm/V for SBN:75 compared to 30 pm/V for LiNbO3) that can greatly improve the performance of existing nonlinear optical devices and components. CoreTek, in conjunction with the University of New Mexico, is developing the technology needed to produce efficient SBN thin-film based devices and components that would be directly useful in applications such as optical communications. The potential commercial applications as described by the awardee: The research results will be applied in areas such as components for optical communication systems, sensors and interferometers for applications in industrial and environmental sensing, and opto-electronic switching devices. Wavelength-selective filters in the integrated-optic embodiment are the most promising candidates for deployment in Wavelength-Division-Multiplexed networks. A waveguide grating filter appears to be very promising on account of its extreme wavelength sensitivity and compactness. However, the performance of practical devices has thus far been severely limited by the lack of high-reflectance waveguide gratings. The unique feature of the described effort in achieving high-reflectance gratings is the use of a Si overlayer to substantially perturb the mode index of an optical waveguide underneath it and, yet, without adding substantial mode loss. Si has been selected on account of its large refractive index, processibility, and low cost. Within this framework, Advanced Photonics Technology envisions the development of a waveguide grating filter device with enhanced performance that can meet the needs of most complex WDM systems.
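The electro-optic coefficients quoted above for SBN:75 and LiNbO3 translate directly into modulator drive voltage: for a transverse modulator the half-wave voltage scales as Vπ = λ·d/(n³·r·L), so a larger n³·r lowers the required voltage. A rough comparison (geometry and refractive indices are assumed purely for illustration):

```python
def half_wave_voltage(wavelength_m, gap_m, n, r_m_per_v, length_m):
    """Half-wave voltage of a transverse electro-optic modulator:
    Vπ = λ·d / (n³·r·L)."""
    return wavelength_m * gap_m / (n**3 * r_m_per_v * length_m)

# Same assumed geometry for both materials: λ = 1.55 µm, 10 µm electrode
# gap, 1 cm interaction length; n values are illustrative estimates.
v_linbo3 = half_wave_voltage(1.55e-6, 10e-6, 2.2, 30e-12, 1e-2)
v_sbn    = half_wave_voltage(1.55e-6, 10e-6, 2.3, 1380e-12, 1e-2)
print(v_linbo3, v_sbn)   # the much larger r of SBN cuts Vπ by ~n³·r ratio
```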
This filter will then be coupled to an external laser source to form a hybrid optical module with tunable wavelength for a number of practical applications. This new technology, developed in conjunction with the University of Florida, will then be transferred to the company for further refinement to a precommercial level. The potential commercial applications as described by the awardee: The primary commercial application of waveguide grating filters is in Wavelength-Division-Multiplexed networks. Another most important application is the development of compact and efficient waveguide lasers and amplifiers. Recent major advances in Praseodymium (Pr)-doped fluoride glass fibers have made them the most desirable medium for fiber amplifiers and lasers at 1.3 µm wavelength. The availability of prototype Pr-doped fiber amplifiers for 1.3 µm wavelength-division-multiplexing (WDM) has heightened the need for 1.3 µm lasers. Commercially available widely-tunable laser sources are typically based on grating-tuned external-cavity semiconductor lasers suitable only for laboratory environment. The requirement for 1.3 µm tunable lasers that are fiber-compatible, robust, and cost-effective makes an all-fiber based tunable laser structure the best candidate. The focus of this program is the research and development of novel fiber Fabry-Perot tunable-filters (FFP-TFs) based on fluoride fibers (specifically ZBLAN fibers), and the application of FFP-TFs in the generation of high-power, narrow-linewidth, and wavelength-tunable Pr-doped ZBLAN fiber lasers in the 1.3 µm wavelength region. These lasers will allow telecommunication systems to access an additional 100 nm of frequency spectrum using standard 1.3 µm single-mode fibers which have been widely installed throughout the world.
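A fiber Fabry-Perot tunable filter of the kind proposed is characterized by its free spectral range, FSR = c/(2·n·L), with each transmission peak narrowed to FSR divided by the cavity finesse. A sketch with illustrative cavity numbers (not taken from the abstract):

```python
def fabry_perot_fsr_ghz(n: float, cavity_length_m: float) -> float:
    """Free spectral range of a Fabry-Perot cavity: FSR = c / (2·n·L)."""
    c = 299_792_458.0  # speed of light, m/s
    return c / (2.0 * n * cavity_length_m) / 1e9

def passband_ghz(fsr_ghz: float, finesse: float) -> float:
    """FWHM of each transmission peak: FSR / finesse."""
    return fsr_ghz / finesse

# Illustrative: a 1 mm cavity in glass (n ≈ 1.5) gives an FSR near 100 GHz;
# a finesse of 100 then narrows each passband to about 1 GHz.
fsr = fabry_perot_fsr_ghz(1.5, 1e-3)
print(fsr, passband_ghz(fsr, 100.0))
```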
In addition, the resultant work will enable broad-based ZBLAN fiber device technology development that can be extended to other wavelength regions, e.g., to tunable upconverted blue-green lasers and mid-IR lasers using appropriate dopants in ZBLAN fibers. Additional applications using such new tunable lasers span from holographic data storage (tunable blue-green lasers) and free space optical communication, to spectroscopy and environmental sensing (with tunable mid-IR lasers). The potential commercial applications as described by the awardee: The research addressed will be applied to 1.3 µm wavelength-division-multiplexed optical communications, holographic data storage, and environmental sensing. The feasibility of a novel architecture integrating light sensing and image processing functions on the same chip is being investigated. Specific image patterns can be learned by the circuits within microseconds. This information is stored in the synaptic connections on-chip and can be recalled at later processing steps, even from incomplete or noisy data. The system will be a co-processor by design. The intent is to assure not only compatibility but also high performance within a conventional computer architecture framework. Commercially available microprocessors and digital signal processors can thus be used to add on symbolic data manipulation capability for comprehensive systems. The work involves investigation of a locally connected neural architecture pioneered at the research institution and experimentation with prototype chips with on-chip CMOS light sensing diodes. The potential commercial applications as described by the awardee: The project aims for technology insertion into conventional computers, which allows for applications in the near future in security systems, manufacturing automation, quality inspection, and smart sensors.
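The learn-store-recall behavior described (synaptic weights that reconstruct a stored pattern from incomplete or noisy input) can be illustrated in software with a classic Hopfield-style associative memory; this is only an analogy to the analog chip, with every name and value invented for the sketch:

```python
def train_hebbian(patterns):
    """Hebbian outer-product learning: w[i][j] = Σ_p p_i·p_j, zero diagonal.
    The weight matrix plays the role of the chip's on-board synapses."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, steps=10):
    """Iterated threshold updates recover a stored ±1 pattern from a probe."""
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = train_hebbian([stored])
noisy = [1, -1, 1, 1, -1, -1, -1, -1]   # one bit corrupted
print(recall(w, noisy))                  # recovers the stored pattern
```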
This article was published 26/7/2014. Winnipeg had, for most of its 45 years, been a city that was self-absorbed, boosting and pushing and striving to make itself into a great metropolis. By 1914, Winnipeg seemed to be well on the way to realizing all the ambitions of its founders. With the coming of war, Winnipeggers began directing their energies outward, organizing for victory. During the month of August they stepped up, volunteering for the army and for the groups that would provide the major support services for the war effort over the next four years. And they began to give money with the sort of generosity Winnipeggers still show today. By war’s end, the city had donated or loaned through the purchase of Victory Bonds many millions in today’s dollars to help the Allied cause. In 1914, Winnipeg was Canada’s third-largest city with 136,035 people, as reported in the 1911 census. The makeup of Winnipeg’s population differed in some ways from the other two large Canadian cities: 75 per cent of its people — about 103,000 — had been born either in Canada or in some other part of the British Empire, compared to 91 per cent in Toronto and 90 per cent in Montreal. Winnipeg had a strong Scottish heritage, more so than the eastern cities. And the Protestant denominations were primarily led by Presbyterians whereas in the east, Methodists were usually the most numerous. Almost all the nations of the world were represented in the 25 per cent of the population born outside the British Empire — the largest groups were about 10,000 people from the Austro-Hungarian Empire, about 8,600 from the Russian Empire, 6,000 from the United States, about 1,800 from Germany, 1,360 from Iceland and 1,400 from Sweden. This cosmopolitan population would have important implications for the way in which the war affected the city.
In 1914, Winnipeg was still suffering from the slowdown in economic growth that had ended a decade-long boom in 1913. All over the Prairies, people lost their jobs and large numbers of unemployed men headed for Winnipeg, the great labour clearinghouse for Western Canada. When they ran out of money, the city was obliged to feed and house them. On May 26, around 2,000 of these men gathered in Market Square behind city hall to listen to speakers and to protest their hopeless condition. Fighting broke out; the police used their nightsticks on the unemployed, and the men fought back. The local Social Democratic Party wrote to city council, accusing the police of starting the riot and of being too free with the use of their sticks. This event and the threat of more to come moved city council to take action. Councillors sent a letter to Prime Minister Robert Borden requesting immigration be curtailed until the situation improved and calling on the Dominion government to push forward public works as a means of creating jobs. Immigration Minister W.J. Roche responded the federal government was already discouraging immigration and the numbers of new arrivals had fallen by 50 per cent. The city also made relief payments and provided some employment on public works. Unemployment continued to be a problem in the city until later in the war when there was some improvement in the economy, but even then, those who had jobs were affected by inflation. The failure of wages to keep up with prices would be one of the causes of the General Strike. There had been signs of recovery in some areas of the economy in Winnipeg in early 1914 — by August, the value of building permits had already reached $12.1 million, and some new apartment blocks and houses were under construction. The outbreak of war on Aug. 
4 put an end to any hope of a resumption of growth as the province ceased all expenditures in the capital account, including work on the new legislative building, and private firms laid employees off or reduced their wages. British capital, the lifeblood of Western Canadian development, was needed for the war effort and became unavailable at reasonable rates. Local businessmen were forced to postpone or abandon plans such as those local businessman R.T. Riley was making for a mortgage company that would loan money to farmers in the West. He and his partners had been successful in selling stock in Canada, and they opened an office in London in July 1914 to sell debentures to investors there. They closed the office a few weeks later when, Riley wrote in his memoirs, "we realized that there was little opportunity of getting four per cent money in England, or anywhere else, for some time to come. We never sold any debentures, as we could not afford to pay a higher rate." Manitobans went to the polls on July 10 to vote in a provincial election. The campaign, reported the Canadian Annual Review for 1914, had not been "a satisfactory or pleasant one." The Conservative premier, Rodmond Roblin had been in power since 1900. The Annual Review reported he "was not a conciliatory opponent nor a courteous fighter," and the Liberals, led by Tobias Norris, "accepted the gauge with true western heartiness" in a campaign that was full of "charges of corruption and bitter personalities." Nellie McClung, who had participated in the famous Women’s Parliament at the Walker Theatre in January 1914, took a large role, speaking all over the province for the Liberal cause. Conservative speakers reminded voters of the many accomplishments of the Roblin years, speaking about balanced budgets, the huge expenditures on infrastructure and public buildings and the Manitoba Government Telephones system among other things. 
The Liberals had formed a coalition of reform movements that campaigned for prohibition, abolition of Manitoba’s bilingual school system and votes for women, all things to which Roblin Conservatives were opposed. By 9 p.m. on election day, when Roblin mounted the platform before a cheering crowd outside the Conservative Telegram newspaper on McDermot Avenue, it was clear Manitobans had given him his 4th majority government. In the summer of 1914, those middle-class families that could afford it left Winnipeg, boarding trains headed for "the lake." Like their contemporaries in other parts of Canada and the rest of the British Empire, they did not suspect they were enjoying what were to be the last few weeks of peace before what one writer has called the "greatest catastrophe the world had seen" broke upon their privileged world. Early in the summer, the international news on the front pages of the Winnipeg papers was not from Europe but Ireland, where disagreements over the Home Rule Bill passing through Parliament seemed to be pushing the country toward civil war. Then, as the Rev. Charles Gordon recorded in his memoirs, Postscript to Adventure, "on Thursday, July 30th, our boat returning with supplies brought back a newspaper with red headlines splashed across the page. Austria had declared war on Serbia." Suddenly, the peaceful Lake of the Woods community where he and his family spent their summers was talking of nothing but the war. There had been signs of recovery in some areas of the economy in Winnipeg in early 1914 — by August, the value of building permits had already reached $12.1 million Irene Evans, the wife of former Winnipeg mayor Sanford Evans, was also at Lake of the Woods, with their two children. Her husband was working in Ottawa at the time, and writing to him about the outbreak of war she said, "I dread the return to the city... The moon almost full — such heavenly peace — the world beyond in a nightmare." 
Nellie McClung also spent time at the family cottage, in her case at Matlock on Lake Winnipeg. She later wrote, in her book The Next of Kin that: "When the news of war came, we did not really believe it! War! That was over! There had been war of course, but that had been long ago, in the dark ages, before the days of free schools and peace conferences and missionary conventions and labor unions!" McClung described how war news gradually invaded the calm of life at the beach. The men coming out from the city "brought back stories of the great crowds that surged through the streets blocking traffic in front of the newspaper offices reading the bulletins, while the bands played patriotic airs." As the family drove away from the boarded-up cottage at the end of their vacation, she wrote that "instinctively we felt that we had come to the end of a very pleasant chapter in our life as a family; something had disturbed the peaceful quiet of our lives; not a word was spoken, but Jack put it all into words, for he turned to me and asked quickly, ‘Mother, when will I be 18?’ " At midnight, London time, Tuesday, Aug. 4, the ultimatum the British had given Germany, demanding that she withdraw from Belgian territory, expired. At that moment, a state of war existed between Germany and the British Empire, including the Dominion of Canada, and Winnipeggers joined millions of others as they crossed into the strange new wartime existence. People had been milling in front of the city’s newspaper offices for days, and since late afternoon, the crowds had been growing larger, anxiously awaiting information about the ultimatum. Many were busy calculating what time it would be in Winnipeg when it was midnight in London. Suddenly, bulletins were rushed outside and posted on boards on the wall of the Telegram newspaper office at Albert and McDermot. 
At the Manitoba Free Press on Carlton Street, a man armed with a megaphone climbed onto a wooden platform in front of the building and shouted to the crowd that war had been declared. The people, like their fellow Canadians all over the country, immediately broke into Rule, Britannia! God Save the King and even La Marseillaise and the Free Press reported: "strong voices took up the strain with a will and a volume of glorious sound roared forth and set the blood of the British crowd racing at top speed." Many in the noisy crowd that blocked traffic in front of the Free Press joined in a spontaneous parade, which surged down Portage and up Main to city hall following a young man who had jumped up on the platform and shouted for everyone to follow him. The newspaper described a crowd of 6,000 people, men and women, striding along five and six abreast in the street: "In the van walked half a dozen young men carrying a great Union Jack, which for want of a pole, was carried spread out over their shoulders." For some, the outbreak of war was greeted in a more thoughtful way. A Free Press reporter visiting the Royal Alexandra Hotel talked to a guest of Austrian birth, "now a loyal British subject." He said Canada had been good to him and he had married a Canadian woman. "Naturally I wish the Empire well," he said, but he could not help but feel a natural sympathy for the land where he was born, "not sympathy for the diplomats and those who brought on the war, but for the people." Outside on the streets, such honest sentiments had suddenly become sufficient cause for a beating. At least one man who admitted to being a German was set upon by a crowd and had to be carried home. The Free Press reported, "There were several fights as a result of the war spirit... Everything was English, Canadian and French last night. Not a German dared show his head and proclaim his nationality."
In the city’s North End, it was noticeably quiet: "The foreigners in the city, many of whom belong to nations now enemies of Great Britain, showed good common sense in keeping well out of sight. So far as is known, they refrained entirely from tactless demonstrations." The extreme patriotism and extreme suspicion of people of non-British extraction would continue throughout the four years of war. The first of many Great War military parades took to the Winnipeg streets almost as soon as war was declared. The members of the 90th Winnipeg Rifles militia regiment had been summoned to the drill hall at the corner of Broadway and Osborne. The Winnipeg Rifles was the oldest militia unit in Winnipeg, formed in 1885 at the time of the Métis resistance in Saskatchewan. A Métis fighter, referring to their black uniforms, had named them the Little Black Devils. They crossed to the university grounds on the north side of Broadway, formed ranks and marched up Kennedy Street to Portage Avenue and then on to city hall. Their band played the regimental march, Old Solomon Levi and Soldiers of the King, and the crowds cheered all along the way, many rushing into the street to march beside the militiamen. Downtown hotel bars were packed with men toasting the beginning of the war and one of the drinkers strutted along behind the Winnipeg Rifles, "… and strove valiantly to imitate the military bearing of the officer before him," said the Free Press report. Another amused the crowds by marching along the sidewalk on Main Street with a broomstick for a rifle. "Every once in a while he would stop and mark time. Then he would give himself the order ‘forward march’ and would start off again. He created many a laugh along the street." When the men of the 90th Regiment arrived back at the drill hall, Maj. W.A. Munro addressed them, saying the regiment’s office would be open in the morning for those wishing to sign up for duty overseas. 
Ten men immediately pushed forward and handed in their names, the first of many thousands of Winnipeggers who would volunteer to fight. It is difficult now to know why many of the young men joined up. In January 1915, a Canadian officer, quoted in J.L. Granatstein’s book, Broken Promises, offered some possible answers when he said men joined "because they thought they would like it, because they were out of work, because they were drunk, because they were militia men and had to save face... but being in they have quit themselves like men." It is clear men also volunteered because their brothers or cousins did so, or because the men they worked with or were going to school with were volunteering. Capt. S.H. Williams, who joined the Fort Garry Horse Regiment in August, later described in his book Stand to Your Horses the scene when members were asked who wanted to have the various injections necessary before going overseas. "Greatly to my surprise there were quite a few who sidestepped the inoculations and declared themselves for "home defence" service only. That, of course, was their business and it was not for the others of us to criticize. We fellows who had taken the inoculations felt nevertheless a bit of a self-righteous feeling." The patriotic demonstrations of the first week climaxed on Saturday, Aug. 8 when the city’s veterans of the South African War, men who had fought Riel in 1885 and others who had served in the British Army and Navy, gathered in Market Square and marched to the 90th Regiment’s drill hall on Broadway. Lt.-Gov. Douglas Cameron and Premier Roblin addressed the crowd. Hugh John Macdonald, a veteran of 1885, had marched in the parade. As a former premier and son of Sir John A. Macdonald, he often played a symbolic role at such times and he, too, spoke, saying "… it is time for the sons of Britain over the seas to show they are true sons of the race." 
He said all the veterans were ready to fight but the men who had served in South Africa would make the best recruits: "They have the youth and experience and none could be better coming from a fighting race as they do." By "race," Macdonald would have been referring to the "Great Chain of Race" idea popular at the time. The "Anglo-Saxon race" — the people of the British Isles and their relatives in the British Empire around the world — was supposed to have a fighting spirit superior to that of other "races." People were valued according to how closely they were related to the Anglo-Celtic population of the British Isles. White Americans, Scandinavians and, until the war, Germans, were considered to be almost the equals of the Anglo-Saxon. Others, such as southern Europeans, Africans, Asians and the aboriginal people of Canada were hardly worth considering. These biases informed Canadian recruiting in the first years of the war. Federal Militia Minister Sam Hughes set out to create an army of sober, upright Protestant volunteers. He left the decisions about which volunteers would be accepted up to the individual battalion commanders, who often excluded ethnic minorities. The Canadian Expeditionary Force eventually had over 50,000 members of the Orange Lodge in its ranks. With the exception of the 22nd Battalion, French volunteers usually found themselves serving in English-language units, and French Canadian officers were not given commands at the front. The long traditions of Quebec militia regiments were ignored and discounted by Hughes. Nevertheless, some non-British Canadians were successful in joining the army in the first months of the war and their numbers increased as the terrible attrition of trench warfare forced the army to open its doors to a broader cross section of volunteers. In August 1914, there were men in Winnipeg with obligations to various European armies. 
All the European nations except Britain had adopted conscription, and in most cases this meant that, after serving a mandatory number of years in the army, a man would spend another period of years as a reservist, expected to report for duty in time of war. There were estimated to be about 1,000 Austro-Hungarian reservists in Winnipeg, for the most part men of Ukrainian or Polish ethnicity. There was a smaller number obligated to serve in the German army. On July 30, the Austro-Hungarian consul in Winnipeg, George Reininghaus, announced all reservists must return home to join the army and they would be reimbursed for the cost of the trip or given passage money if they did not have it. If they did not go, they would be charged with desertion, a capital offence. Some reservists did leave although it is impossible to say how many. Two men — Stefan Bertnak and Tphemius Lupul — told the Telegram newspaper, as they boarded the train for the United States on July 30, they were going home early so they would have a chance to visit their families before they were called up. They explained they had to go or they would never be able to return home again. The consul asked the churches to help spread the word about the war. At Sunday mass on Aug. 2, Father Kowalski of Holy Ghost Polish Catholic Church read out a message from Consul Reininghaus about the call to arms issued by the Austro-Hungarian government. The Ukrainian Catholic Bishop in Winnipeg, Bishop Nikita Budka, issued a pastoral letter that was printed in the Canadian Ruthenian newspaper on Aug. 1 and read out in churches the next morning. In the letter, reprinted in the appendices of Frances Swyrypa’s Loyalties in Conflict: Ukrainians in Canada During the Great War, he called on all Austrian subjects who "… are under military obligation to return to Austria… to defend our native home, our dear brothers and sisters, our people" from the Russian invaders. 
He said Emperor Franz Joseph had "… ever striven to avert and postpone… " war but the murder of his nephew and heir, Franz Ferdinand, was an event to "… try the patience of the most peace-loving of men… " Almost immediately, Budka was embarrassed by the entry of Canada into the war against Germany and Austria. On Aug. 6, he issued a second pastoral letter that began: "In the course of a few days political relations have changed completely." He said that "… we Canadian Ukrainians have a great and holy obligation to join the colours of our new fatherland, and if necessary to sacrifice our property and blood for it… as loyal sons of Canada, faithful to the oath [we] have sworn to our Fatherland and our King, we should unite under the flag of the British state." Unfortunately the bishop’s first pastoral letter was the one people remembered and it would haunt him for the next four years in spite of the fact he consistently encouraged his flock to support Canada’s war effort in every way. Once Canada was officially at war with Austria and Germany, Lt.-Gov. Cameron ordered the German and Austrian consuls to leave the city and by Aug. 7 a proclamation had been issued saying any enemy reservists trying to return home would be arrested. Many young Ukrainians were eager to let their neighbours know they wanted to do their part in the war effort. On the evening of Sunday, Aug. 9, at the Industrial Bureau, there was a meeting of 3,000 "Ruthenians," as Ukrainians were often called at the time. The Telegram reported the assembly voted to reject Austria-Hungary and pledge allegiance to Canada, where they had found "true liberty." They passed a resolution "that we hereby express our loyalty to the British flag and declare our readiness to stand by the colours whenever called upon." This resolution was sent to the Governor General in Ottawa. One of the speakers, J.
Arsenycz, a young university student who would later be a Winnipeg judge, said Ruthenians had established themselves as useful citizens and were ready to serve their adopted country in any way asked. On the same day, about 400 Poles gathered at Holy Ghost School on Selkirk Avenue and passed a resolution "that the Polish men of Winnipeg give their aid as far as possible to England as soldiers and especially as fulfilling our duties as good citizens." The Home Front organizations that would support the troops began to organize. The Provincial Red Cross organization in Manitoba was launched at a meeting on Aug. 10. It was led by an executive committee headed by grocery wholesaler George Galt and included business leaders such as Augustus Nanton and R.T. Riley, Lady Aikins, Annie Bond and Mrs. R.T. Moody. Women played strong leadership roles in the Red Cross in Manitoba and across the country. Mrs. Bond was an experienced army nurse and one of the founders of the Winnipeg Children’s Hospital, and Mrs. Moody, among other things, had been superintendent of the Nursing School at Winnipeg General Hospital. By November 1914, 49 Red Cross branches had been formed in communities all over Manitoba in response to a circular letter sent out by the Winnipeg organization — $27,000 had been raised, and over half that amount had been disbursed to buy blankets for the troops and to make a donation to Red Cross Headquarters in London. Winnipeggers and Manitobans rose to the challenge of funding the new organization and their support never faltered throughout the war. Galt and Edward Drewery, owner of the Redwood Brewery, both donated $5,000. Hundreds of others gave what they could. The Chinese community in Winnipeg raised $453, the largest amount donated by any organization in the early weeks. From outside the city came donations of $400 and $500 from communities such as Melita and Gretna. A group of Ukrainian men working in the quarries at Stony Mountain sent in $50. 
Special collections in churches, theatres, banquets, concerts and lectures all brought in donations. In Winnipeg, booths were set up in department stores and office buildings. Nellie McClung, in her book The Next of Kin, wrote about fundraising in Western Canada: "... the giving was real, honest, hard, sacrificing giving. Elevator boys, maids, stenographers gave a percentage of their earnings, and gave it joyfully… one enthusiastic young citizen, who had been operated on for appendicitis, proudly exhibited his separated appendix, preserved in alcohol, at so much per look, and presented the proceeds to the Red Cross." The women of the Manitoba Red Cross would begin operating a major production facility on the top floor of the Keewaydin Building on Portage Avenue East in 1915. They purchased tens of thousands of dollars worth of raw materials and distributed them to local groups all over the province. Volunteers made bandages and all types of hospital supplies, following strict Red Cross standards, and shipped everything back to the Keewaydin Building, where it was crated and dispatched to hospitals in England and France. The Manitoba Patriotic Fund, like the Winnipeg Red Cross, was born on Aug. 10, when the Board of Trade passed a motion to organize a committee that would begin the work. Patriotic Funds were private charities that had been, since the Napoleonic Wars, a way of supporting soldiers and their families during wartime. The Manitoba Patriotic Fund was to be independent of the national Canadian Patriotic Fund organized at the same time. The Manitoba organizers had decided they needed this autonomy so they could not only look after the relatives and children of soldiers but also support men thrown out of work by the war. Beginning in August, fund board members banker Augustus Nanton, Judge Robson, printer John Bulman and grocery wholesaler George Galt met personally every day with men thinking of going overseas.
The assurances of the committee that their families would be taken care of and the knowledge that large amounts were being raised made it possible for many men to enlist with some peace of mind. By April 1, 1915, the Manitoba Patriotic Fund had raised $909,000. Of this, $485,000 was paid out to 1,839 soldiers’ families. The fund had also given unemployment relief money in 2,329 cases. During the first year of the war, the fund paid out $70,000 to support unemployed civilians. One of its initiatives was a wood camp, set up and supervised by hardware merchant James Ashdown and Mayor Richard Waugh. It provided paid work sawing firewood for 357 unemployed men at a cost of $15,000. In the end, there was never enough money for all the needs, and many wives and families knew real hardship, especially those who had no other means of support. On the afternoon of Saturday, Aug. 15, the first troops left the city bound for the front: about 200 men from Winnipeg and 100 from Saskatchewan climbed aboard the train at the CPR station to take them east. The platform was packed with friends, relatives and a few Winnipeggers determined to get to the lake for a peaceful weekend. The soldiers were recruits for Canada’s newest regiment, Princess Patricia’s Canadian Light Infantry. The unit’s honorary colonel was Princess Patricia, the daughter of the Governor General, the Duke of Connaught. The Patricias had been raised and outfitted with money given by Hamilton Gault, a Montreal millionaire and veteran of South Africa. Gault went to France with the regiment. Wounded four times, he survived to become commanding officer in 1918. Newspaper advertisements for the Princess Patricias had stated that preference would be given to former regulars in the Canadian or Imperial armies and to men who had fought in South Africa.
The men among these recruits with previous military experience were the closest Canada had to the millions of reservists who were at the time being mobilized in European countries. Winnipeg women organized the first of hundreds of large fundraising events on Aug. 18, this time in support of the St. John Ambulance. A parade of 160 cars carrying the members of various women’s organizations moved through the streets, accompanied by women carrying collection boxes soliciting money. One woman, Grace Stapleton, stood on a float dressed as Britannia. There were many nurses in the parade, including 70 volunteers being trained by Dr. Ellen Douglass for service overseas. Army nurses were the first Canadians to go to the front in Belgium and France, and the work they did and the sights they saw placed them among the most heroic of the Canadians who went to war. Another women’s group that made a massive contribution was the Imperial Order of the Daughters of the Empire (IODE). Founded at the time of the Boer War by Montrealer Margaret Polson Murray, the order’s purpose was to support Canadian volunteers fighting in South Africa and their families. Minnie Julia Beatrice Campbell — the wife of Colin Campbell, who was the attorney general in the Roblin government — was one of the most influential members of the IODE in Western Canada. She was regent of the Fort Garry chapter and Manitoba provincial president. She had been involved with the order for many years when the war began and had taken a leading role in fundraising efforts for projects such as the Tuberculosis Sanatorium at Ninette. In August, she set out to mobilize women for war work. Even her husband’s death in October did not slow her down. On the contrary, she made the war her cause and let her fellow IODE members know that she "… meant to take up her work immediately. There is so much that we women can do now for King and empire and the world in this time of war."
She led fundraising efforts for field ambulances, hospital supplies, blankets for the troops and "comforts," items such as magazines, newspapers, shaving kits and cigarettes intended to make the lives of the men in the trenches more bearable. On Oct. 8, Campbell published a column in the Free Press passing on information for volunteer knitters and sewers. Addressed to the "women of Manitoba" and beginning "Dear Compatriots," the column outlined the specifications for socks: "Use No. 13 needles and four-ply wool. Some socks sent in are knitted on too coarse needles for comfort, warmth and durability. We want an authorized standard of work." She took the opportunity to remind her readers of the good example of Queen Mary, who knitted constantly and expected any ladies who visited her to join in. She reported pillows were no longer needed, as the first contingent of Canadian troops had enough, but nightshirts, bed jackets and dressing gowns were now required in great numbers for the inevitable wounded. Campbell encouraged Winnipeg women to be strong-minded, wear out their old clothes, trim their old hats, and "cultivate talents growing dormant" such as knitting. They were to live their lives, in all ways, to support the war effort. Winnipeg women did indeed live their lives to support the war effort, as did the men who stayed at home. They contributed through fundraising, making hospital supplies, gas masks and socks, meeting the trains to give returning men a welcome home and all the other activities and sacrifices the war demanded. Beginning in August 1914, the people of Winnipeg organized themselves and set out to do all they could to win the war. Jim Blanchard is a Winnipeg writer and historian. His books include Winnipeg 1912 and Winnipeg’s Great War, both published by University of Manitoba Press.
In most studies of the Qiang, especially those written in China, there is an assumption that the people classified by the present Chinese government as the Qiang living in northern Sichuan can be equated with the Qiang mentioned in Chinese texts dating back to the oracle bone inscriptions written 3,000 years ago. A more careful view would be that the ancient “Qiang” were the ancestors of all or almost all of the modern Tibeto-Burman speakers, and the modern “Qiang” (who call themselves /ʐme/ in their own language, written RRmea in the Qiang orthography) are but one small branch of the ancient “Qiang”. They in fact did not think of themselves as "Qiang" (a Chinese exonym) until the early 20th century. It is clear that the culture of the stone watchtowers, which can be identified with the modern Qiang people, has been in northern Sichuan since at least the beginning of the present era. Living in this area, the Qiang people are situated between the Han Chinese to the east and south and the Tibetans to the west and north. In the past fighting between these two larger groups often took place in the Qiang area, and the Qiang would come under the domination of one group or the other. At times there was also fighting between different Qiang villages. The construction of the watchtowers and the traditional design of their houses give testimony to the constant threat of attack. The majority of Qiang speakers, roughly eighty thousand people, are members of the Qiang ethnicity, and the rest, approximately fifty thousand people, are a subgroup of the Tibetan ethnicity. These ethnic designations are what they call themselves in Chinese. In Qiang they all call themselves /ʐme/ or a dialect variant of this word. Not all members of the Qiang ethnicity speak Qiang, and as just mentioned, not all of those who speak Qiang are considered members of the Qiang ethnicity. The traditional Qiang house is a permanent one built of piled stones and has three stories.
Generally one nuclear family will live in one house. The lowest floor houses the family’s animals, and straw is used as a ground covering. When the straw becomes somewhat rotted and full of manure and urine, it is used for fertilizer. A steep wooden ladder leads to the second floor from the back of the first floor. On the second floor are the fireplace and sleeping quarters. Beds are wooden platforms with mats made of straw as mattresses. The third floor has more rooms for sleeping and/or is used for storage. A ladder also leads from there to the roof, which is used for drying fungi, corn or other items, and also for some religious practices, as a white stone (flint) is placed on the roof and invested with a spirit. The fireplace, which is the central point of the main room on the second floor, originally had three stones set in a circle for resting pots on, but now most homes have large circular three- or four-legged iron potholders. In some areas, particularly to the north, enclosed stoves are replacing the old open fires. On the side of the fireplace across from the ladder leading to the second floor there is an altar to the house gods. This is also the side of the fireplace where the elders and honored guests sit. Nowadays one often finds pictures of Mao Zedong and/or Deng Xiaoping in the altar, as the Qiang are thankful for the improved life they have had since the founding of the People’s Republic and particularly since the reforms instituted by Deng in the late 1970s and after. Traditionally the Qiang relied on spring water, and had to go out to the spring to get it. In recent years pipes have been run into many of the houses, so there is a more convenient supply of water, though it is not like the concept of “running water” in the West. There are no bathrooms inside the house, though in some villages (e.g. Weicheng) a small enclosed balcony that has a hole in the floor has been added to the house to function as a second-story outhouse.
Many villages now have electricity, at least a few hours every night, and so a TV (relying on a large but inexpensive satellite dish) and in some cases a VCD player can be found in the house. All TV and VCD programs are in Chinese, and so the spread of electricity has facilitated the spread of bilingualism. In the past each village had one or more watchtowers, six- or seven-story, six- or eight-sided structures made of piled stones. The outside walls were smooth and the inside had ladders going up to the upper levels. These allowed early warning in the case of attack, and were a fallback position for fighting. In some villages underground passages were also dug between structures for use when they were attacked. In most villages the towers have been taken down and the stones used to build new houses. The main staple foods are corn, potatoes, wheat, and highland barley, supplemented with buckwheat, naked oats, and rice. Wheat, barley and buckwheat are made into noodles. Noodles are handmade. Among the favorite delicacies of the Qiang are buckwheat noodles cooked with pickled vegetables. Because potatoes are abundant in the area, the Qiang have developed many ways of cooking potatoes. The easiest way to cook them is by boiling or baking (that is, placing the potatoes into the ashes around the fire). The more elaborate and more distinctive ways of preparing them involve pounding boiled potatoes in a stone mortar and then shaping the mashed potatoes and frying them to become potato fritters, or boiling them with pickled vegetables. The latter is eaten like noodle soup, the same way as noodles made of buckwheat flour are eaten. Since corn is also quite abundant in the area, the Qiang have also developed different ways of eating corn. Corn flour is cooked with vegetables to become a delicious corn porridge. Corn flour mixed with water without yeast and then left in the fire to bake is the Qiang style of corn bread. This bread is often eaten with honey.
Honey is a delicacy in the Qiang area. It is not easy to come by as they have to raise the bees in order to collect honey. Another important item is salt. Because the Qiang live in the highlands, salt was traditionally difficult to come by, so when you are invited to eat in a Qiang family, the host will always try to offer you more salt or will see to it that the dishes get enough salt. The Qiang also grow walnuts, red and green chili peppers, bunge prickly ash peel (pericarpium zanthoxyli), several varieties of hyacinth bean, apples, pears, scallions, turnips, cabbage, and some rape. Crops are rotated to preserve the quality of the fields, some of which are on the mountain sides and some of which may be on the side of the stream found at the bottom of many of the gorges between the mountains. Qiang fields are of the dry type and generally do not have any sort of irrigation system. Aside from what they grow, they are also able to collect many varieties of wild vegetables, fruit, and fungi, as well as pine nuts. They now eat rice, but as they do not grow rice themselves, they exchange other crops for rice. Many types of pickled vegetables are made as a way of preserving the vegetables, and these are often cooked with buckwheat noodles or potato noodles in a type of soup. Vegetables are also salted or dried in order to preserve them. While grain is the main subsistence food, the Qiang eat meat when they can, especially cured pork. In the past they generally ate meat only on special occasions and when entertaining guests. Now their economic circumstances allow them to eat meat more frequently. They raise pigs, two kinds of sheep, cows, horses, and dogs, though they do not eat the horses or dogs. Generally there is only one time per year when the animals are slaughtered (in mid-winter), and then the meat is preserved and hung from the rafters in the house. The amount of meat hanging in one’s house is a sign of one’s wealth. 
As there are no large fish in the streams and rivers, the Qiang generally do not eat fish. In the past they would hunt wild oxen, wild boars, several types of mountain goat, bears, wolves (for the skin), marmots, badgers, sparrows, rabbits, and musk deer (and sell the musk). They used small cross-bows, bows and arrows, pit traps, wire traps, and more recently flint-lock rifles to hunt. Now there are not many animals left in the mountains, and many that are there are endangered species, and so can no longer be hunted. The low-alcohol liquor made out of highland barley (similar to Tibetan “chang”) or occasionally corn or other grains, called /ɕi/ in Qiang, is one of the favorite beverages of the Qiang. It plays a very important role in the daily activities of the Qiang. It is an indispensable drink for use on all occasions. It is generally drunk from large casks placed on the ground using long bamboo straws. For this reason it is called zājiǔ ‘sucked liquor’ in Chinese. Opening a cask of /ɕi/ is an important part of hosting an honored guest. At present only a few of the older Qiang men still wear the traditional Qiang clothing except on particular ceremonial occasions. One item of traditional clothing still popularly worn by men and women is the handmade embroidered shoes. These are made of cloth, shaped like a boat, with the shoe face intricately embroidered. The sole is made of thickly woven hemp. It is very durable and quite practical for climbing in the mountains. In the summer men often wear a sandal version of these shoes with a large pompom on the toe. These shoes are an obligatory item of a Qiang woman’s dowry when she gets married. In many villages, embroidered shoe soles or shoe pads are still a popular engagement gift of a woman to her lover. Recently some women have taken to selling them as tourist souvenirs as well. Another item still popular among Qiang men and women is the goat-skin vest.
The vest is reversible; in the winter it is normally worn with the fur inside for warmth, and when worn with the fur out, it serves as a raincoat. It also acts as padding when carrying things on the back. Qiang men often carry a lighter (traditionally it would be flint and steel) and knives on a belt around their waist. The belt has a triangular pouch in front. There are two types of these triangular pouches: one is made of cloth and intricately embroidered, another is made of leather (the skin of a musk deer). Men sometimes will also wear a piece of apron-like cloth (also embroidered with a floral pattern) over their buttocks, to be used as seat pad. The majority of Qiang women in the villages still wear traditional clothing. Qiang women’s clothing is very colorful, and also varies from village to village. The differences are mainly manifested in the color and styles of their robes and headdresses. Headdresses are worn from about the age of twelve. Women in the Sanlong area wear a square headdress embroidered with various floral patterns in wintertime. In the spring, they wear a headband embroidered with colorful floral patterns, and wear a long robe (traditionally made of hemp fiber) with fancily embroidered borders, and tie a black sheep-leather belt around the waist. Women of the Heihu area wear a white headdress, and are fond of wearing blue or light green robes (the borders are also embroidered with floral patterns). Women from the Weimen area wear a black headdress and a long robe. The border of the robe is embroidered with colorful floral patterns. They also often wear an embroidered apron (full front or from the waist down) and an embroidered cloth belt. The headdress worn by women of Mao county and the Muka area of Li county is a block-like rectangle of folded cloth, with embroidered patterns on the part that faces backwards when worn. Women in Puxi village of Li county wear plain black headdresses, oblong in shape with the two sides wider than the front. 
In the Chibusu district of Mao county women wear brick-shaped headdresses wrapped in braided hair. They braid their hair, and at the tip of their braid sometimes add a piece of blue fake hair braid in order to make the braid longer (if necessary), and then coil the braid around the headdress to hold it in place. Clothing of those living near the Tibetan areas bears the influence of Tibetan styles of dress. Other than the headdresses and the robes, Qiang women are also fond of wearing big earrings, ornamental hairpins, bracelets, and other silver jewelry. Jewelry pieces of those who are wealthier are inlaid with precious stones like jade, agate, and coral. They often hang a needle and thread box and sometimes a mouth harp from their belt. Babies wear special embroidered hats with silver ornaments and bronze and silver bells, and a small fragrance bag. Although in the Qiang language traditionally there are no surnames, for several hundred years the Qiang have been using Han Chinese surnames. The clans or surname groups form the lowest level of organization within the village above the nuclear family. In one village there may be only a few different surnames. The village will have a village leader, and this is now an official political post with a small salary. Many of the traditional “natural” villages have now been organized into “administrative” villages made up of several “natural” villages. Before 1949 (as early as the Yuan dynasty, 13th-14th century), above the village level there was a local leader (called tǔsī in Chinese) who was enfeoffed by the central government to control the Qiang and collect taxes. This leader could also write his own laws and demand his own taxes and servitude from the Qiang people. The Qiang had to work for this local leader for free, and also give a part of their food to him. His position was hereditary, and many of these leaders were terrible tyrants and exploiters of the people.
Some of the Qiang traditional stories are of overthrowing such tyrants. Kinship relations are quite complex, and while generally patrilineal, the women have a rather high status, supposedly a remnant of a matriarchal past. Only men can inherit the wealth of the parents, but women are given a large dowry. Marriages are monogamous, and can be with someone of the same surname, but not within the same family for at least three generations. The general practice is to marry someone of the same village, but it can also be with someone outside the village. Increasingly Qiang women are marrying out of the villages to Chinese or Qiang living in the plains to have an easier life, and many of the young men who go out to study or work marry Han Chinese women. In the past marriages were decided by the parents of the bride and groom, although now the young people generally have free choice. The traditional form of marriage in the village is characterized by a series of rituals focused around drinking and eating. It consists of three main stages: engagement, preparation for the wedding, and the wedding ceremony. The rituals start when the parents of a boy have a girl in mind for their son. The parents will start the “courtship” by asking a relative or someone who knows the girl’s family to find out whether she is available or not. If the girl is available, they will move on to the next step, that is, to ask a matchmaker to carry a package of gifts (containing sugar, wine, noodles, and cured meat) to the girl’s family. This is only to convey their intention to propose a marriage. If the girl’s parents accept the gift, the boy’s parents will proceed to the next step, asking the matchmaker to bring some more gifts to the girl’s parents and “officially” propose. If the girl’s parents agree, then a date will be set to bring the “engagement wine” to the girl’s family. On that day, the girl’s parents and all the siblings will join in to drink and sing the “engagement song”.
Once this is done, the couple is considered to be engaged, and there should be no backing out. After being engaged, the girl should avoid having any contact with members of the groom’s family. Before the wedding, a member from the groom’s family will be accompanied by the matchmaker to the bride’s family, carrying with them some wine which they will offer to the bride’s family members and relatives of the same surname, to have a drink and decide on the date of the wedding. Once the wedding date has been set, the groom, accompanied by the matchmaker and carrying some more wine, personally goes to the bride’s family to have a drink with the bride’s uncles, aunts and other family members. The wedding ceremony itself takes three days, and is traditionally hosted by the oldest brothers of the mothers of the bride and groom. On the first day, the groom’s family sends an entire entourage to the bride’s place to fetch the bride. The entourage usually consists of relatives of the groom and some boys and girls from the village whose parents are both still living, with two people playing the trumpet. They carry with them a sedan chair, horses (in some cases), clothing and jewelry for the bride. The entourage has to arrive in the bride’s village before sunset. They stay there overnight. The next day, the bride has to leave with the group to go to the groom’s family. Before stepping out of her family door, she has to cry to show how sad she is leaving her parents and family members. One of her brothers will carry her on his back to the sedan chair. Once the bride steps out of her parents’ house she should not turn her head to look back. She is accompanied by her aunts (wife of her uncle from her mother’s side, and wife of her uncle from her father’s side), sisters and other relatives. Before the bride enters the groom’s house she has to step over a small fire or a red cloth (this part of the ceremony varies among areas).
The bride enters the house and the actual wedding ceremony starts. The couple will be led to the front of the family altar, and, just like the wedding practice of the Chinese, the couple will first make vows to heaven and earth, the family ancestors, the groom’s parents, the other relatives, and finally vows to each other. There is a speech by the hosting uncles, and the opening of a cask of highland barley wine. There will then be dancing and drinking. As the cask is drunk, hot water is added to the top with a water scoop, and each drinker is expected to drink one scoop’s equivalent of liquor. If the drinker fails to drink the required amount, he or she may be tossed up into the air by the others in the party. Before the couple enter the room where they are to live, two small children (whose parents are both still living) will be sent in to run around and play on the couple’s bed, as a way of blessing the couple to soon have children. On the third day the bride returns to her parents’ home. When she leaves her newlywed husband’s village, relatives of the husband wait at their doorways or at the main entrance to the village to offer her wine. The bride’s family will also prepare wine and food to welcome the newlywed couple. The groom has to visit and pay respects to all of the bride’s relatives. The bride then stays at her parents’ house for a year or so, until the birth of the first child or at least until around the time of the Qiang New Year. The groom will visit her there and may live in the woman’s house. She returns to her husband’s family to celebrate the birth or the New Year, and stays there permanently. In recent years there has been movement away from traditional-style marriage ceremonies towards more Han Chinese style or Chinese-Western-Qiang mixed-style marriage ceremonies. The Qiang native religion is a type of pantheism, with gods or spirits of many types. 
To this day when a cask of /ɕi/ (barley wine) is opened, a ritual is performed to honor the door god, the fireplace god, and the house god. Flint stone (called “white stone” in Qiang and Chinese) is highly valued, and when a house is built a piece of flint is placed on the roof of the house and a ceremony is held to invest the stone with a spirit.7 The fireplace at the center of the house is considered to be the place where the fireplace spirit lives. Before each meal, the Qiang will place some food near the iron potholder for the fireplace spirit. The iron potholder is treated by the Qiang people with great respect, and cannot be moved at random. One cannot rest one's feet on it, or hang food there to grill. Most important is that one cannot spit in front of the potholder. When the Qiang drink barley wine or tea, or eat meals, an elderly person who is present has to perform the ritual of honoring the god of the fireplace, that is by dipping his finger or the drinking straw into the barley wine and splashing the wine into the fireplace. Every household has an altar in the corner of the main floor of the house facing the door. It is usually ornately carved, and its size reflects the financial status of the family. The altar and the area around the altar are considered to be sacred. One cannot hang clothes, nor spit, burp, expel flatulence, or say inauspicious words around the altar area. Pointing one’s foot toward the altar is strictly prohibited. Other than believing in the spirits of the house and of the fireplace, the Qiang also believe in the spirits of all natural phenomena, such as heaven, earth, sun, moon, stars, rivers, hills and mountains. 
Two of the biggest festivals in the Qiang area are related to their worship of these spirits: the Qiang New Year, which falls on the 24th day of the sixth month of the lunar calendar (now the festival date is fixed on October 1st), and the Mountain Sacrifice Festival, held between the second and sixth months of the lunar calendar. The former is focused on sacrifices to the god of Heaven, while the latter is to give sacrifice to the god of the mountain. Religious ceremonies and healing rituals are performed by shamans known as /ɕpi/ in Qiang and Duān Gōng in Chinese. To become such a shaman takes many years of training with a teacher. The Duān Gōng also performs the initiation ceremony that young men go through when they are about eighteen years old. This ceremony, called “sitting on top of the mountain” in Qiang, involves the whole family going to the mountain top to sacrifice a sheep or cow and to plant three cypress trees. These shamans also pass on the traditional stories of the Qiang. The stories include the creation story, the history of the Qiang (particularly famous battles and heroes), and other cultural knowledge. As there was no written language until recently, storytelling was the only way that this knowledge was passed on. Very few such shamans are left, and little storytelling is done now that many villages have access to TVs and VCD players. Because the Qiang villages are generally high up on the mountains, and there often is no road to the village, only a steep narrow path (this is the case, for example, in Ronghong village, where the nearest road is hours away), travel has traditionally been by foot, though horses are sometimes used as pack animals where the path or road allows it. In the summer the horses are taken to remote pastures to prevent them from eating the crops near the villages. 
In some cases there is a road to the village large enough for vehicles to pass, but the condition of the road is usually quite bad, and as it runs along the very edge of the mountain, it can be quite dangerous. On every field trip we saw at least one car or truck that had just fallen off the side of a mountain. Because the condition of the road varies with the weather and there are sometimes landslides, before attempting to drive to (or near) a village, one has to try to find out if the road is actually passable. The streams and rivers are too shallow to navigate, and so the Qiang do not make boats. In general it was the work of the men to hunt, weave baskets (large back baskets and small baskets), shepherd the cows, gather wild plants, and do some of the harder labor such as plowing the fields, getting wood, and building houses, and it was the work of the women to weave cloth, embroider, hoe the fields, spread seeds, cook most of the food, and do most of the housework. In the winter men often went down into the flatlands to dig wells for pay (this often involved a twelve-day walk down to the Chengdu area!). Any trading was also only done by men. In the past the Qiang traded opium, animal skins and medicinal plants in order to get gold, silver, coral, and ivory. These items were often made into jewelry for the women. Nowadays both men and women cook and gather wild plants, and it is common for men to leave the village for long periods of time to go out to work in the flatlands or to sell medicinal herbs, wood, vegetables, animal skins or other items in exchange for money or rice. Although some ancient ceramics have been unearthed in the Qiang areas, in the recent past ceramics were not made by the Qiang. Most Qiang-made utensils were of wood, stone or iron. There were specialists in metalworking. Nowadays most such items are bought from outside the Qiang area.
Shi’ites Muslims belonging to the branch of Islam believing that God vests leadership of the community in a descendant of Muhammad’s son-in-law Ali. Shi’ism is the state religion of Iran.
Sunnis Muslims belonging to the branch of Islam believing that the community should select its own leadership. The majority religion in most Islamic countries.
Mecca City in western Arabia; birthplace of the Prophet Muhammad, and ritual center of the Islamic religion.
Muhammad (570–632 C.E.) Arab prophet; founder of the religion of Islam.
Muslim An adherent of the Islamic religion; a person who “submits” (in Arabic, Islam means “submission”) to the will of God.
Islam Religion expounded by the Prophet Muhammad (570–632 C.E.) on the basis of his reception of divine revelations, which were collected after his death into the Quran. In the tradition of Judaism and Christianity, and sharing much of their lore, Islam calls on all people to recognize one creator god—Allah—who rewards or punishes believers after death according to how they led their lives.
Medina City in western Arabia to which the Prophet Muhammad and his followers emigrated in 622 to escape persecution in Mecca.
umma The community of all Muslims. A major innovation against the background of seventh-century Arabia, where traditionally kinship rather than faith had determined membership in a community.
caliphate Office established in succession to the Prophet Muhammad, to rule the Islamic empire; also the name of that empire.
Quran Book composed of divine revelations made to the Prophet Muhammad between ca. 610 and his death in 632; the sacred text of the religion of Islam.
Umayyad Caliphate First hereditary dynasty of Muslim caliphs (661 to 750).
From their capital at Damascus, the Umayyads ruled an empire that extended from Spain to India.
Abbasid Caliphate Descendants of the Prophet Muhammad’s uncle, al-Abbas, the Abbasids overthrew the Umayyad Caliphate and ruled an Islamic empire from their capital in Baghdad (founded 762) from 750 to 1258.
Mamluks Under the Islamic system of military slavery, Turkic military slaves who formed an important part of the armed forces of the Abbasid Caliphate of the ninth and tenth centuries. Mamluks eventually founded their own state, ruling Egypt and Syria (1250–1517).
Ghana First known kingdom in sub-Saharan West Africa between the sixth and thirteenth centuries C.E. Also the modern West African country once known as the Gold Coast.
ulama Muslim religious scholars. From the ninth century onward, the primary interpreters of Islamic law and the social core of Muslim urban societies.
hadith A tradition relating the words or deeds of the Prophet Muhammad; next to the Quran, the most important basis for Islamic law.
Charlemagne (742–814) King of the Franks (r. 768–814); emperor (r. 800–814). Through a series of military conquests he established the Carolingian Empire, which encompassed all of Gaul and parts of Germany and Italy. Though illiterate himself, he sponsored a brief intellectual revival.
medieval Literally “middle age,” a term that historians of Europe use for the period ca. 500 to ca. 1500, signifying its intermediate point between Greco-Roman antiquity and the Renaissance.
Byzantine Empire Historians’ name for the eastern portion of the Roman Empire from the fourth century onward, taken from “Byzantion,” an early name for Constantinople, the Byzantine capital city. The Empire fell to the Ottomans in 1453.
Kievan Russia State established at Kiev in Ukraine ca. 879 by Scandinavian adventurers asserting authority over a mostly Slavic farming population.
schism A formal split within a religious community. See Great Western Schism.
manor In medieval Europe, a large, self-sufficient landholding consisting of the lord’s residence (manor house), outbuildings, peasant village, and surrounding land.
serf In medieval Europe, an agricultural laborer legally bound to a lord’s property and obligated to perform set services for the lord. In Russia some serfs worked as artisans and in factories; serfdom was not abolished there until 1861.
fief In medieval Europe, land granted in return for a sworn oath to provide specified military service.
vassal In medieval Europe, a sworn supporter of a king or lord committed to rendering specified military service to that king or lord.
papacy The central administration of the Roman Catholic Church, of which the pope is the head.
Holy Roman Empire Loose federation of mostly German states and principalities, headed by an emperor elected by the princes. It lasted from 962 to 1806.
investiture controversy Dispute between the popes and the Holy Roman Emperors over who held ultimate authority over bishops in imperial lands.
monasticism Living in a religious community apart from secular society and adhering to a rule stipulating chastity, obedience, and poverty. It was a prominent element of medieval Christianity and Buddhism. Monasteries were the primary centers of learning and literacy in medieval Europe.
horse collar Harnessing method that increased the efficiency of horses by shifting the point of traction from the animal’s neck to the shoulders; its adoption favored the spread of horse-drawn plows and vehicles.
Crusades (1096–1291) Armed pilgrimages to the Holy Land by Christians determined to recover Jerusalem from Muslim rule. The Crusades brought an end to western Europe’s centuries of intellectual and cultural isolation.
pilgrimage Journey to a sacred shrine by Christians seeking to show their piety, fulfill vows, or gain absolution for sins.
Other religions also have pilgrimage traditions, such as the Muslim pilgrimage to Mecca and the pilgrimages made by early Chinese Buddhists to India in search of sacred Buddhist writings.
Li Shimin (599–649) One of the founders of the Tang Empire and its second emperor (r. 626–649). He led the expansion of the empire into Central Asia.
Tang Empire Empire unifying China and part of Central Asia, founded 618 and ended 907. The Tang emperors presided over a magnificent court at their capital, Chang’an.
Grand Canal The 1,100-mile (1,700-kilometer) waterway linking the Yellow and the Yangzi Rivers. It was begun in the Han period and completed during the Sui Empire.
tributary system A system in which, from the time of the Han Empire, countries in East and Southeast Asia not under the direct control of empires based in China nevertheless enrolled as tributary states, acknowledging the superiority of the emperors in China in exchange for trading rights or strategic alliances.
bubonic plague A bacterial disease of fleas that can be transmitted by flea bites to rodents and humans; humans in late stages of the illness can spread the bacteria by coughing. Because of its very high mortality rate and the difficulty of preventing its spread, major outbreaks have created crises in many parts of the world.
Uighurs A group of Turkic-speakers who controlled their own centralized empire from 744 to 840 in Mongolia and Central Asia.
Tibet Country centered on the high, mountain-bounded plateau north of India. Tibetan political power occasionally extended farther to the north and west between the seventh and thirteenth centuries.
Song Empire Empire in central and southern China (960–1126) while the Liao people controlled the north. Empire in southern China (1127–1279; the “Southern Song”) while the Jin people controlled the north. Distinguished for its advances in technology, medicine, astronomy, and mathematics.
junk A very large flat-bottomed sailing ship produced in the Tang, Ming, and Song Empires, specially designed for long-distance commercial travel.
gunpowder A mixture of saltpeter, sulfur, and charcoal, in various proportions. The formula, brought to China in the 400s or 500s, was first used to make fumigators to keep away insect pests and evil spirits. In later centuries it was used to make explosives and grenades and to propel cannonballs, shot, and bullets.
neo-Confucianism Term used to describe new approaches to understanding classic Confucian texts that became the basic ruling philosophy of China from the Song period to the twentieth century.
Zen The Japanese word for a branch of Mahayana Buddhism based on highly disciplined meditation. It is known in Sanskrit as dhyana, in Chinese as chan, and in Korean as son.
movable type Type in which each individual character is cast on a separate piece of metal. It replaced woodblock printing, allowing for the arrangement of individual letters and other characters on a page, rather than requiring the carving of entire pages at a time. It may have been invented in Korea in the thirteenth century.
Koryo Korean kingdom founded in 918 and destroyed by a Mongol invasion in 1259.
Fujiwara Aristocratic family that dominated the Japanese imperial court between the ninth and twelfth centuries.
Kamakura shogunate The first of Japan’s decentralized military governments (1185–1333).
Champa rice Quick-maturing rice that can allow two harvests in one growing season. Originally introduced into Champa from India, it was later sent to China as a tribute gift by the Champa state.
Teotihuacan A powerful city-state in central Mexico (100 B.C.E.–750 C.E.). Its population was about 150,000 at its peak in 600.
chinampas Raised fields constructed along lake shores in Mesoamerica to increase agricultural yields.
Maya Mesoamerican civilization concentrated in Mexico’s Yucatán Peninsula and in Guatemala and Honduras but never unified into a single empire. Major contributions were in mathematics, astronomy, and development of the calendar.
Toltecs Powerful postclassic empire in central Mexico (900–1168 C.E.). It influenced much of Mesoamerica. Aztecs claimed ties to this earlier civilization.
Aztecs Also known as Mexica, the Aztecs created a powerful empire in central Mexico (1325–1521 C.E.). They forced defeated peoples to provide goods and labor as a tax.
Tenochtitlan Capital of the Aztec Empire, located on an island in Lake Texcoco. Its population was about 150,000 on the eve of Spanish conquest. Mexico City was constructed on its ruins.
tribute system A system in which defeated peoples were forced to pay a tax in the form of goods and labor. This forced transfer of food, cloth, and other goods subsidized the development of large cities. An important component of the Aztec and Inca economies.
Anasazi Important culture of what is now the Southwest United States (1000–1300 C.E.). Centered on Chaco Canyon in New Mexico and Mesa Verde in Colorado, the Anasazi culture built multistory residences and worshipped in subterranean buildings called kivas.
chiefdom Form of political organization with rule by a hereditary leader who held power over a collection of villages and towns. Less powerful than kingdoms and empires, chiefdoms were based on gift giving and commercial links.
khipu System of knotted colored cords used by preliterate Andean peoples to transmit information.
ayllu Andean lineage group or kin-based community.
mit’a Andean labor system based on shared obligations to help kinsmen and work on behalf of the ruler and religious organizations.
Moche Civilization of north coast of Peru (200–700 C.E.). An important Andean civilization that built extensive irrigation networks as well as impressive urban centers dominated by brick temples.
Tiwanaku Name of capital city and empire centered on the region near Lake Titicaca in modern Bolivia (375–1000 C.E.).
Wari Andean civilization culturally linked to Tiwanaku, perhaps beginning as a colony of Tiwanaku.
Inca Largest and most powerful Andean empire. Controlled the Pacific coast of South America from Ecuador to Chile from its capital of Cuzco.
Mongols A people of this name is mentioned as early as the records of the Tang Empire, living as nomads in northern Eurasia. After 1206 they established an enormous empire under Genghis Khan, linking western and eastern Eurasia.
Genghis Khan (ca. 1167–1227) The title of Temujin when he ruled the Mongols (1206–1227). It means the “oceanic” or “universal” leader. Genghis Khan was the founder of the Mongol Empire.
nomadism A way of life, forced by a scarcity of resources, in which groups of people continually migrate to find pastures and water.
Yuan Empire (1271–1368) Empire created in China and Siberia by Khubilai Khan.
Il-khan A “secondary” or “peripheral” khan based in Persia. The Il-khans’ khanate was founded by Hülegü, a grandson of Genghis Khan, and was based at Tabriz in modern Azerbaijan. It controlled much of Iran and Iraq.
Golden Horde Mongol khanate founded by Genghis Khan’s grandson Batu. It was based in southern Russia and quickly adopted both the Turkic language and Islam. Also known as the Kipchak Horde.
Timur (1336–1405) Member of a prominent family of the Mongols’ Jagadai Khanate, Timur through conquest gained control over much of Central Asia and Iran.
He consolidated the status of Sunni Islam as orthodox, and his descendants, the Timurids, maintained his empire for nearly a century and founded the Mughal Empire in India.
Rashid al-Din (d. 1318) Adviser to the Il-khan ruler Ghazan, who converted to Islam on Rashid’s advice.
Nasir al-Din Tusi (1201–1274) Persian mathematician and cosmologist whose academy near Tabriz provided the model for the movement of the planets that helped to inspire the Copernican model of the solar system.
Nevskii, Alexander (1220–1263) Prince of Novgorod (r. 1236–1263). He submitted to the invading Mongols in 1240 and received recognition as the leader of the Russian princes under the Golden Horde.
tsar (czar) From Latin caesar, this Russian title for a monarch was first used in reference to a Russian ruler by Ivan III (r. 1462–1505).
Ottoman Empire Islamic state founded by Osman in northwestern Anatolia ca. 1300. After the fall of the Byzantine Empire, the Ottoman Empire was based at Istanbul (formerly Constantinople) from 1453 to 1922. It encompassed lands in the Middle East, North Africa, the Caucasus, and eastern Europe.
Khubilai Khan (1215–1294) Last of the Mongol Great Khans (r. 1260–1294) and founder of the Yuan Empire.
lama In Tibetan Buddhism, a teacher.
Beijing China’s northern capital, first used as an imperial capital in 906 and now the capital of the People’s Republic of China.
Ming Empire (1368–1644) Empire based in China that Zhu Yuanzhang established after the overthrow of the Yuan Empire. The Ming emperor Yongle sponsored the building of the Forbidden City and the voyages of Zheng He. The later years of the Ming saw a slowdown in technological development and economic decline.
Yongle Reign period of Zhu Di (1360–1424), the third emperor of the Ming Empire (r. 1403–1424). He sponsored the building of the Forbidden City, a huge encyclopedia project, the expeditions of Zheng He, and the reopening of China’s borders to trade and travel.
Zheng He (1371–1433) An imperial eunuch and Muslim, entrusted by the Ming emperor Yongle with a series of state voyages that took his gigantic ships through the Indian Ocean, from Southeast Asia to Africa.
Yi (1392–1910) The Yi dynasty ruled Korea from the fall of the Koryo kingdom to the colonization of Korea by Japan.
kamikaze The “divine wind,” which the Japanese credited with blowing Mongol invaders away from their shores in 1281.
Ashikaga Shogunate (1336–1573) The second of Japan’s military governments headed by a shogun (a military ruler). Sometimes called the Muromachi Shogunate.
tropics Equatorial region between the Tropic of Cancer and the Tropic of Capricorn. It is characterized by generally warm or hot temperatures year-round, though much variation exists due to altitude and other factors. Temperate zones north and south of the tropics generally have a winter season.
monsoon Seasonal winds in the Indian Ocean caused by the differences in temperature between the rapidly heating and cooling landmasses of Africa and Asia and the slowly changing ocean waters. These strong and predictable winds have long been ridden across the open sea by sailors, and the large amounts of rainfall that they deposit on parts of India, Southeast Asia, and China allow for the cultivation of several crops a year.
Ibn Battuta (1304–1369) Moroccan Muslim scholar, the most widely traveled individual of his time. He wrote a detailed account of his visits to Islamic lands from China to Spain and the western Sudan.
Delhi Sultanate (1206–1526) Centralized Indian empire of varying extent, created by Muslim invaders.
Mali Empire created by indigenous Muslims in western Sudan of West Africa from the thirteenth to fifteenth century. It was famous for its role in the trans-Saharan gold trade.
Mansa Kankan Musa Ruler of Mali (r. 1312–1337). His pilgrimage through Egypt to Mecca in 1324–1325 established the empire’s reputation for wealth in the Mediterranean world.
Gujarat Region of western India famous for trade and manufacturing; the inhabitants are called Gujarati.
dhow Ship of small to moderate size used in the western Indian Ocean, traditionally with a triangular sail and a sewn timber hull.
Swahili Coast East African shores of the Indian Ocean between the Horn of Africa and the Zambezi River; from the Arabic sawahil, meaning “shores.”
Great Zimbabwe City, now in ruins (in the modern African country of Zimbabwe), whose many stone structures were built between about 1250 and 1450, when it was a trading center and the capital of a large state.
Aden Port city in the modern south Arabian country of Yemen. It has been a major trading center in the Indian Ocean since ancient times.
Malacca Port city in the modern Southeast Asian country of Malaysia, founded about 1400 as a trading center on the Strait of Malacca. Also spelled Melaka.
Urdu A Persian-influenced literary form of Hindi written in Arabic characters and used as a literary language since the 1300s.
Timbuktu City on the Niger River in the modern country of Mali. It was founded by the Tuareg as a seasonal camp sometime after 1000. As part of the Mali empire, Timbuktu became a major terminus of the trans-Saharan trade and a center of Islamic learning.
Latin West Historians’ name for the territories of Europe that adhered to the Latin rite of Christianity and used the Latin language for intellectual exchange in the period ca. 1000–1500.
three-field system A rotational system for agriculture in which one field grows grain, one grows legumes, and one lies fallow. It gradually replaced the two-field system in medieval Europe.
Black Death An outbreak of bubonic plague that spread across Asia, North Africa, and Europe in the mid-fourteenth century, carrying off vast numbers of persons.
water wheel A mechanism that harnesses the energy in flowing water to grind grain or to power machinery.
It was used in many parts of the world but was especially common in Europe from 1200 to 1900.
Hanseatic League An economic and defensive alliance of the free towns in northern Germany, founded about 1241 and most powerful in the fourteenth century.
guild In medieval Europe, an association of men (rarely women), such as merchants, artisans, or professors, who worked in a particular trade and banded together to promote their economic and political interests. Guilds were also important in other societies, such as the Ottoman and Safavid empires.
Gothic cathedrals Large churches originating in twelfth-century France; built in an architectural style featuring pointed arches, tall vaults and spires, flying buttresses, and large stained-glass windows.
Renaissance (European) A period of intense artistic and intellectual activity, said to be a “rebirth” of Greco-Roman culture. Usually divided into an Italian Renaissance, from roughly the mid-fourteenth to mid-fifteenth century, and a Northern (trans-Alpine) Renaissance, from roughly the early fifteenth to early seventeenth century.
universities Degree-granting institutions of higher learning. Those that appeared in Latin West from about 1200 onward became the model of all modern universities.
scholasticism A philosophical and theological system, associated with Thomas Aquinas, devised to reconcile Aristotelian philosophy and Roman Catholic theology in the thirteenth century.
humanists (Renaissance) European scholars, writers, and teachers associated with the study of the humanities (grammar, rhetoric, poetry, history, languages, and moral philosophy), influential in the fifteenth century and later.
printing press A mechanical device for transferring text or graphics from a woodblock or type to paper using ink. Presses using movable type first appeared in Europe in about 1450.
Great Western Schism A division in the Latin (Western) Christian Church between 1378 and 1417, when rival claimants to the papacy existed in Rome and Avignon.
Hundred Years War (1337–1453) Series of campaigns over control of the throne of France, involving English and French royal families and French noble families.
new monarchies Historians’ term for the monarchies in France, England, and Spain from 1450 to 1600. The centralization of royal power was increasing within more or less fixed territorial limits.
reconquest (of Iberia) Beginning in the eleventh century, military campaigns by various Iberian Christian states to recapture territory taken by Muslims. In 1492 the last Muslim ruler was defeated, and Spain and Portugal emerged as united kingdoms.
Arawak Amerindian peoples who inhabited the Greater Antilles of the Caribbean at the time of Columbus.
Henry the Navigator (1394–1460) Portuguese prince who promoted the study of navigation and directed voyages of exploration down the western coast of Africa.
caravel A small, highly maneuverable three-masted ship used by the Portuguese and Spanish in the exploration of the Atlantic.
Gold Coast (Africa) Region of the Atlantic coast of West Africa occupied by modern Ghana; named for its gold exports to Europe from the 1470s onward.
Dias, Bartolomeu (1450?–1500) Portuguese explorer who in 1488 led the first expedition to sail around the southern tip of Africa from the Atlantic and sight the Indian Ocean.
Gama, Vasco da (1460?–1524) Portuguese explorer. In 1497–1498 he led the first naval expedition from Europe to sail to India, opening an important commercial sea route.
Columbus, Christopher (1451–1506) Genoese mariner who in the service of Spain led expeditions across the Atlantic, reestablishing contact between the peoples of the Americas and the Old World and opening the way to Spanish conquest and colonization.
Magellan, Ferdinand (1480?–1521) Portuguese navigator who led the Spanish expedition of 1519–1522 that was the first to sail around the world.
conquistadors Early-sixteenth-century Spanish adventurers who conquered Mexico, Central America, and Peru.
Cortés, Hernán (1485–1547) Spanish explorer and conquistador who led the conquest of Aztec Mexico in 1519–1521 for Spain.
Moctezuma II (1466?–1520) Last Aztec emperor, overthrown by the Spanish conquistador Hernán Cortés.
Pizarro, Francisco (1475?–1541) Spanish explorer who led the conquest of the Inca Empire of Peru in 1531–1533.
Atahualpa (1502?–1533) Last ruling Inca emperor of Peru. He was executed by the Spanish.
indulgence The forgiveness of the punishment due for past sins, granted by the Catholic Church authorities as a reward for a pious act. Martin Luther’s protest against the sale of indulgences is often seen as touching off the Protestant Reformation.
Protestant Reformation Religious reform movement within the Latin Christian Church beginning in 1519. It resulted in the “protesters” forming several new Christian denominations, including the Lutheran and Reformed Churches and the Church of England.
Catholic Reformation Religious reform movement within the Latin Christian Church, begun in response to the Protestant Reformation.
It clarified Catholic theology and reformed clerical training and discipline. witch-hunt The pursuit of people suspected of witchcraft, especially in northern Europe in the late sixteenth and seventeenth centuries. Scientific Revolution The intellectual movement in Europe, initially associated with planetary motion and other aspects of physics, that by the seventeenth century had laid the groundwork for modern science. Enlightenment A philosophical movement in eighteenth century Europe that fostered the belief that one could reform society by discovering rational laws that governed social behavior and were just as scientific as the laws of physics. bourgeoisie In early modern Europe, the class of well-off town dwellers whose wealth came from manufacturing, finance, commerce, and allied professions. joint-stock company A business, often backed by a government charter, that sold shares to individuals to raise money for its trading enterprises and to spread the risks (and profits) among many investors. stock exchange A place where shares in a company or business enterprise are bought and sold. gentry In China, the class of prosperous families, next in wealth below the rural aristocrats, from which the emperors drew their administrative personnel. Respected for their education and expertise, these officials became a privileged group and made the government more efficient and responsive than in the past. The term gentry also denotes the class of landholding families in England below the aristocracy. Little Ice Age A century-long period of cool climate that began in the 1590s. Its ill effects on agriculture in northern Europe were notable. deforestation The removal of trees faster than forests can replace themselves. Holy Roman Empire Loose federation of mostly German states and principalities, headed by an emperor elected by the princes. It lasted from 962 to 1806. 
Habsburg A powerful European family that provided many Holy Roman Emperors, founded the Austrian (later Austro- Hungarian) Empire, and ruled sixteenth- and seventeenth century Spain. English Civil War (1642-1649) A conflict over royal versus. Parliamentary rights, caused by King Charles I’s arrest of his parliamentary critics and ending with his execution. Its outcome checked the growth of royal absolutism and, with the Glorious Revolution of 1688 and the English Bill of Rights of 1689, ensured that England would be a constitutional monarchy. Versailles The huge palace built for French King Louis XIV south of Paris in the town of the same name. The palace symbolized the preeminence of French power and architecture in Europe and the triumph of royal authority over the French nobility. balance of power The policy in international relations by which, beginning in the eighteenth century, the major European states acted together to prevent any one of them from becoming too powerful. Columbian Exchange The exchange of plants, animals, diseases, and technologies between the Americas and the rest of the world following Columbus’s voyages. Council of the Indies The institution responsible for supervising Spain’s colonies in the Americas from 1524 to the early eighteenth century, when it lost all but judicial responsibilities. Las Casas, Bartolom � de (1474–1566) First bishop of Chiapas, in southern Mexico. He devoted most of his life to protecting Amerindian peoples from exploitation. His major achievement was the New Laws of 1542, which limited the ability of Spanish settlers to compel Amerindians to labor for them. Potos � Located in Bolivia, one of the richest silver mining centers and most populous cities in colonial Spanish America. encomienda A grant of authority over a population of Amerindians in the Spanish colonies. It provided the grant holder with a supply of cheap labor and periodic payments of goods by the Amerindians. 
It obliged the grant holder to Christianize the Amerindians. creoles In colonial Spanish America, term used to describe someone of European descent born in the New World. Elsewhere in the Americas, the term is used to describe all nonnative peoples. mestizo The term used by Spanish authorities to describe someone of mixed Amerindian and European descent. mulatto The term used in Spanish and Portuguese colonies to describe someone of mixed African and European descent. indentured servant A migrant to British colonies in the Americas who paid for passage by agreeing to work for a set term ranging from four to seven years. House of Burgesses Elected assembly in colonial Virginia, created in 1618. Pilgrims Group of English Protestant dissenters who established Plymouth Colony in Massachusetts in 1620 to seek religious freedom after having lived briefly in the Netherlands. Puritans English Protestant dissenters who believed that God predestined souls to heaven or hell before birth. They founded Massachusetts Bay Colony in 1629. Iroquois Confederacy An alliance of five northeastern Amerindian peoples (six after 1722) that made decisions on military and diplomatic issues through a council of representatives. Allied first with the Dutch and later with the English, the Confederacy dominated the area from western New England to the Great Lakes. New France French colony in North America, with a capital in Quebec, founded 1608. New France fell to the British in 1763. coureurs des bois (runners of the woods) French fur traders, many of mixed Amerindian heritage, who lived among and often married with Amerindian peoples of North America. Tupac Amaru II Member of Inca aristocracy who led a rebellion against Spanish authorities in Peru in 1780–1781. He was captured and executed with his wife and other members of his family. Royal African Company A trading company chartered by the English government in 1672 to conduct its merchants’ trade on the Atlantic coast of Africa. 
Atlantic system The network of trading links after 1500 that moved goods, wealth, people, and cultures around the Atlantic Ocean basin. chartered companies Groups of private investors who paid an annual fee to France and England in exchange for a monopoly over trade to the West Indies colonies. Dutch West India Company (1621–1794) Trading company chartered by the Dutch government to conduct its merchants’ trade in the Americas and Africa. plantocracy In the West Indian colonies, the rich men who owned most of the slaves and most of the land, especially in the eighteenth century. driver A privileged male slave whose job was to ensure that a slave gang did its work on a plantation. seasoning An often difficult period of adjustment to new climates, disease environments, and work routines, such as that experienced by slaves newly arrived in the Americas. manumission A grant of legal freedom to an individual slave. maroon A slave who ran away from his or her master. Often a member of a community of runaway slaves in the West Indies and South America. capitalism The economic system of large financial institutions— banks, stock exchanges, investment companies— that first developed in early modern Europe. Commercial capitalism, the trading system of the early modern economy, is often distinguished from industrial capitalism, the system based on machine production. mercantilism European government policies of the sixteenth, seventeenth, and eighteenth centuries designed to promote overseas trade between a country and its colonies and accumulate precious metals by requiring colonies to trade only with their motherland country. The British system was defined by the Navigation Acts, the French system by laws known as the Exclusif. Atlantic Circuit The network of trade routes connecting Europe, Africa, and the Americas that underlay the Atlantic system. 
Middle Passage The part of the Atlantic Circuit involving the transportation of enslaved Africans across the Atlantic to the Americas. Songhai A people, language, kingdom, and empire in western Sudan in West Africa. At its height in the sixteenth century, the Muslim Songhai Empire stretched from the Atlantic to the land of the Hausa and was a major player in the trans- Saharan trade. Hausa An agricultural and trading people of central Sudan in West Africa. Aside from their brief incorporation into the Songhai Empire, the Hausa city-states remained autonomous until the Sokoto Caliphate conquered them in the early nineteenth century. Bornu A powerful West African kingdom at the southern edge of the Sahara in the Central Sudan, which was important in trans-Saharan trade and in the spread of Islam. Also known as Kanem-Bornu, it endured from the ninth century to the end of the nineteenth.
III. Major trends and policy questions in food and agriculture
A. World situation and outlook
B. International agricultural adjustment
C. Proposal by the Director-General on a world food security policy

A. World situation and outlook
Salient features of the world food and agricultural situation
Sahelian zone operations
Water problems affecting agricultural development

Salient features of the world food and agricultural situation
31. The Conference discussed the world food and agricultural situation on the basis of the preliminary version of the Director-General's report on the State of Food and Agriculture 1973, supplemented by more up-to-date information furnished at the Session. The final version of the report was made available during the Session. 32. The Conference noted that during most of the period since its Sixteenth Session the situation had been more difficult than at any time since the years immediately following the Second World War. 1973 had been a year of concern, with the world's food supplies depending almost entirely on the current harvests, in the absence of large stocks to fall back on in case of crop failures. Although the latest information on the 1973 crops was encouraging, the situation remained tight. 33. In 1972 world agricultural production had fallen slightly for the first time since the Second World War. This decline was due to a fall in production in some large developed countries and a failure of production to increase in certain developing regions. In per caput terms, given the continued rapid population growth, food production in the developing countries was 3 percent lower in 1972 than in 1971, and in the heavily populated Far East region the fall in per caput production had been as much as 6 percent. 34. By the middle of 1973, world grain stocks had been reduced to the lowest level for two decades. The price of wheat on world markets had trebled between mid-1972 and mid-1973.
With shortages of a number of other products as well, including such major sources of protein as soybeans and fish meal, there had been a chain reaction in the prices of many other commodities. The value of world trade in agricultural products had risen in 1972 by 15 percent at current prices, but in real terms the increase was only half as much, and as in past years the major share of the increase had gone to developed countries. 35. Despite some windfall gains in export earnings, the main effect on the developing countries of the 1972 production shortfalls was a reduction in the food consumption of the poorest strata of their populations. Especially during 1973, sufficient import supplies of wheat and rice had not generally been available to make up the production deficits, and even when obtainable had cost much more than in the past. Sharp rises in retail food prices had been almost universal. Emergency situations had developed in several areas, particularly the Sahelian zone of Africa. 36. The main cause of the poor production in 1972 had been the unusually widespread prevalence of unfavourable weather, especially drought. The Conference noted that in 1973 weather conditions had generally been favourable to agriculture, and that in some countries special government measures had contributed to a large increase in agricultural production. Information on the current harvests was still highly tentative, especially for the developing countries, and the estimates had been changing rapidly. But on the basis of the latest available information it appeared that world agricultural production had increased by between 3 and 4 percent in 1973. 37. Major factors in the improved outlook had been increased production in North America and a substantial recovery in the U.S.S.R. There had been a drastic upward revision in October 1973 in the official estimate of the U.S.S.R.
grain harvest, as a result of which FAO estimated that agricultural production in eastern Europe and the U.S.S.R. had increased by 7 to 8 percent in 1973. In the developed market economies the latest estimates indicated a rise of 2 to 3 percent, with increases of 1 to 2 percent in western Europe, 2 to 3 percent in North America, and a recovery of 5 to 6 percent in Oceania. 38. Production in the developing market economies was tentatively estimated to have shown an encouraging expansion of 3 to 4 percent in 1973. In the Far East, where production had declined in 1972, a rise of 6 to 8 percent was estimated for 1973, although the final outcome still depended on rice crops that had yet to be harvested in many areas. Unofficial estimates for China indicated an increase of 2 to 3 percent. An increase of 3 to 4 percent was estimated for Latin America, but in both Africa and the Near East the latest data showed a fall of 3 to 4 percent. The situation remained serious in many parts of Africa. In the Near East, however, it was less serious in view of the big increase in production in the previous year. 39. The world cereal balance in 1973/74 seemed likely to be less precarious than had been feared in the early autumn. However, world grain prices remained very high, and the rice situation would still be uncertain until all the major Far Eastern harvests were in. 40. The Conference emphasized that serious medium-term and longer-term problems would continue, especially structural problems of production, consumption and trade for the developing countries. 41. In the medium-term, the duration of the present instability and high level of agricultural commodity prices was a major area of uncertainty. Although the replenishment of depleted cereal stocks should reduce price fluctuations, it was not clear to what extent the recent price rises were due to such factors as general inflation, currency changes, speculation, and higher transport costs, as well as to shortages. 42.
The supply of inputs was crucial to the expansion of agricultural production in the developing countries. Agricultural development programmes in many of these countries were severely hindered by the current shortage and high price of fertilizers on world markets, and the Conference therefore welcomed the establishment of an FAO Commission on Fertilizers, and stressed the importance of increased investment in fertilizer production under aid programmes. 43. In the longer-term, the main need was for a more rapid and sustained expansion of agricultural production in the developing countries. Although some individual countries had achieved considerable success in this regard, the long-term growth of agricultural production in the developing countries as a whole was far behind the average annual increase of 4 percent called for in the international strategy for the Second United Nations Development Decade (DD2) and an average increase of about 5 percent was now required in the rest of the decade if the target was to be met. Many longer-term deficiencies had to be remedied by measures taken in the developing countries themselves if sufficient progress were to be made. The Conference laid particular stress on improvements in rural structures; technological advances suitable for less favoured environments and for small farmers; better services for the transfer of technology to small farmers; improved marketing; and incentive prices. 44. The Conference felt that greatly increased international support was needed for the efforts of the developing countries if constantly recurring crises were to be avoided, if international disparities were to be reduced, and if stability and progress were to be achieved in the interlinked world of developing and developed countries. The global flow of financial assistance was far below the internationally agreed DD2 target, and efforts should be made to reach this target as quickly as possible by those countries that had not yet attained it. 
Much more of such assistance should be devoted to agriculture and to the transfer of appropriate agricultural technology to the developing countries, including special measures for the least developed among these countries. It was stated that improved international trade conditions were also essential for progress, and that the agricultural trade of the developing countries continued to be in a weak position because of the competition of developed countries and difficulties of access to the markets of those countries. 45. Recent harvests had highlighted the instability of agricultural production as a result of fluctuating weather conditions. In addition to the establishment of reserve stocks, high priority should therefore be given to the expansion of controlled irrigation facilities. FAO and WMO should study rainfall patterns and their likely effect on agricultural production. More active cooperation with international organizations was also referred to as desirable, in particular concerning irrigation and drainage problems. 46. Several delegates stressed the need for developing countries to take fuller advantage of the favourable opportunities for forest products on world markets. Particular attention should be paid to the development of forest industries and to forest management. 47. The Conference noted the decision of the Sixty-First Council Session that the practice of preparing the Director-General's annual report on the State of Food and Agriculture in a preliminary version should be discontinued. The report would henceforth be issued only in a final version, and would concentrate on the analysis of trends and policies. The needs of the governing bodies of FAO and of the public for up-to-date information would be met mainly through periodic reports for publication in the Monthly Bulletin of Agricultural Economics and Statistics (in particular the July/August and November issues).
The Conference accepted this procedure on an experimental basis with a view to examining it again at its next session in the light of experience. 48. It was suggested that future issues of the State of Food and Agriculture should include information on inland fisheries, the amount of total development assistance devoted to agriculture, the analysis of rainfall patterns in important producing areas, and food supplies and consumption (including mothers' milk). It was noted that it was planned to include a special chapter on food and nutrition in the 1974 issue, while in 1975 the main topic would be the mid-term review and appraisal of progress in DD2. In connection with the latter work it was suggested that detailed analyses should be made not only of the constraints on increasing production in the developing countries but also of the reasons for the success of certain of these countries in increasing their production.
Sahelian zone operations
49. The Conference considered the documents on the Sahelian Emergency Relief Operations of the UN system and the Summary Report of the Multidonor Mission to assess the food aid necessary in 1973-74 to the six drought-stricken Sahelian countries (Chad, Mali, Mauritania, Niger, Senegal and Upper Volta). 50. The debate was explicit in that all countries expressed deep concern with a situation where whole populations were so exposed to the vagaries of the weather. All speakers paid tribute to the efforts of the governments themselves, the Permanent Inter-State Committee and the bilateral donors, both governmental and non-governmental. They also noted with satisfaction the prompt and pertinent measures taken by FAO, which was the focal point for coordinating the efforts of the UN system, in cooperation with WFP and other international organizations. A spirit of cooperation had prevailed between donors both within and without the UN system.
The views expressed showed that if a second call for assistance was made, it would receive a willing response. Many speakers had also referred to the need for a more thorough examination of the problems which caused drought conditions in the Sahel and their remedy. 51. The Conference heard an address by the Minister for Agriculture of the Republic of Upper Volta and Chairman of the Permanent Inter-State Committee for Drought Control, speaking on behalf of the six Sudano-Sahelian countries, who thanked the international community for its generous assistance in 1972-73 and the international news media for its perseverance in focussing attention on the problems of the Sahel. He stated that the rains, which had shown some promise for a better harvest in August 1973, had failed in September and October and the expected harvest had not materialized. Urgent help was needed in the coming months if the spectre of famine was to be removed. As a result of the failure of the harvest, the food needs of the six countries would be more than those estimated by the multidonor mission and would go as high as 1.2 million tons. He stressed that the emergency relief operations undertaken by FAO through its Office for Sahelian Relief Operations (OSRO) should therefore be continued with particular reference to harmonization of the transport of outside supplies, internal transportation, storage, special measures for remote and inaccessible areas, and seed in 1973-74, as also recommended by the multidonor mission. He added that it would be necessary to look beyond the immediate future and that the Heads of States in their meeting at Ouagadougou in September 1973 had defined the common strategy to control drought and had suggested specific measures for this purpose. 52.
The Conference felt that continued assistance during 1973-74 was needed and called upon governments, other bilateral donors and non-governmental organizations, together with the UN system as coordinated by FAO, to provide generous help in terms of food grains, protective foods, nutritional needs and the other logistical requirements recommended by the multidonor mission. Offers were made by governments to provide assistance. 53. The Conference also attached considerable importance to FAO's contribution to sustained efforts in the medium and long term to tackle the causes and effects of drought, which were being coordinated by the United Nations for the UN system. These medium and long-term measures would include inter alia the harnessing of both surface and groundwater resources. 54. The Conference also felt that emergency measures should be taken in the fields of nutrition, public health, animal health and protection and the supply of animal feed. 55. The Conference adopted the following resolution:
SAHELIAN ZONE OPERATIONS OF THE UN SYSTEM
Recalling the Economic and Social Council Resolutions 1759 (LIV) of 18 May 1973 and 1797 (LV) of 11 July 1973 on Aid to the Sudano-Sahelian Populations threatened with Famine, and the General Assembly Resolution 3054 (XXVIII) of 17 October 1973 on consideration of the economic and social situation in the Sudano-Sahelian region stricken by drought and measures to be taken for the benefit of that region,
Noting with satisfaction the special efforts of the Director-General, in concert with the Secretary-General and other members of the United Nations system and bilateral donors, in providing speedy and effective assistance to the drought-stricken countries and peoples of the Sudano-Sahelian zone of West Africa,
Further noting with appreciation the dispatch under the sponsorship of FAO, at the request of the Permanent Inter-State Committee on Drought Control in the Sahel, of a Multidonor Mission to visit the Sahelian countries in
order to assess their food and nutritional requirements for 1973-74,
1. Expresses its appreciation of the generous contributions made and support given by Governments, international organizations and voluntary aid agencies to the operations of the United Nations system, and the unstinted efforts of the affected countries themselves to collaborate with each other in ensuring the optimum utilization of all assistance,
2. Appeals to Member Governments and intergovernmental and non-governmental organizations to give the most favourable response possible to the recommendations made by the Multidonor Mission, taking into account the comments related to food assistance, and to the measures recommended by the Director-General and the Secretary-General for implementing them,
3. Requests the Director-General that in his recommendations about food assistance, subsequent developments with respect to actual harvests, and special conditions concerning inland transport in certain land-locked countries, should be taken into account,
4. Further requests the Director-General and the Permanent Inter-State Committee to use the experience gained from the present relief operations in continuing assistance to the countries concerned during 1973-74, with particular reference to harmonizing the transport of outside supplies, pre-positioning of stocks in remote and inaccessible areas, seeds, feed, storage and transport requirements, public health and nutritional needs of vulnerable groups, and animal health programmes.
5. Urges the Director-General to ensure full cooperation with the Special Sahelian Office, established for the coordination, in cooperation with the Permanent Inter-State Committee, of the medium and long-term activities of the Organizations of the United Nations system, which would include inter alia the harnessing of surface and ground water resources of the Sahelian zone,
6.
Requests that the Director-General inform the Council, at its Sixty-Third Session, of further developments and action taken. (Adopted 27 November 1973)
56. The Conference reviewed the current and prospective world commodity situation and examined the main issues arising from it. In 1972-73, commodity markets were characterized by supply shortages, sharp price increases and declining carryover stocks. Food and feed products, some tropical beverages and a number of agricultural raw materials were all affected. The value of world agricultural exports in 1972 rose by 15 percent, but only about 7 percent in real terms. However, as in earlier years, the increase in exports from developed countries was greater than that in exports from the developing countries, as a result of which the share of the latter in world agricultural trade showed a further decline. 57. One of the major causes for the dramatic change in world agricultural commodity markets was the simultaneous occurrence of production setbacks in a number of major producing and consumer countries, which resulted in a reduction in supplies, a sharp rise in import demand and heavy drawing on carryover stocks, particularly those of wheat. The price rises were also accentuated by recurrent monetary disturbances and by some speculation in commodity markets. Besides these short-term factors, some longer-term tendencies also appeared to have been at work. There had been a slowing down in the growth of agricultural output in the last decade. Surplus output of some commodities had been curtailed by supply management policies in the developed countries. Increasing difficulties in raising per caput production were experienced in developing countries due to rising population and growing dependence on important inputs like fertilizers which were not available in adequate quantities. Further, there was an increasing demand for protein-rich foods and feeds due to changing consumption patterns. 58.
While the unusual combination of short-term factors that had operated in 1972-73 was unlikely to recur, it was generally felt that world agricultural commodity markets had entered a new phase which, because of the longer-term factors mentioned, would be characterized by greater instability in supplies and prices than had been experienced in recent years. In this context, the Conference felt that a more intensive examination of the factors at work in the current situation was needed, including an analysis of policy implications. 59. The Conference agreed on the need for the developing countries to increase their agricultural production for both domestic use and for export, to improve marketing and to expand processing of their agricultural produce. For this purpose, there was need for the development of an appropriate infrastructure of institutions and technology and for a more diversified economy. Such a development would add value to agricultural output and bring about more rural activity and employment. In this effort, the developing countries needed the cooperation of the developed countries and their technical and financial assistance. 60. The Conference fully recognized the importance of growth of international trade, particularly of exports from developing countries. In this context, delegates stressed the importance for trade in agricultural products, including processed commodities, of the reduction or elimination of tariffs and non-tariff barriers, such as variable levies, quantitative restrictions and hygiene regulations, and the elimination of subsidized exports. 61.
The Conference welcomed the intensive commodity consultations, which had been initiated under UNCTAD Resolution 83(III) and Resolution 7 (VII) of the UNCTAD Committee on Commodities, and which "(a) should examine problems in the field of trade liberalization and pricing policy, and (b) should aim to present concrete proposals to governments designed to expand trade in products of export interest to the developing countries and thus contribute to the growth of their foreign exchange earnings as well as to their increased participation in market growth by (i) improving their access to world markets, and (ii) securing stable, remunerative and equitable prices for primary products". 62. The Conference noted that the FAO intergovernmental commodity groups were to be used as the fora for a number of consultations to be convened under the UNCTAD resolutions. This decision provided these groups with an opportunity of making a significant contribution to the development of solutions to the problems confronting governments in the commodity field. 63. The Conference attached great importance to current intergovernmental efforts aiming at trade liberalization under GATT and UNCTAD. It warmly welcomed the Declaration adopted at the ministerial meeting in Tokyo which initiated a new round of multilateral trade negotiations. It welcomed in particular the statement in the Declaration that the developed countries did not "expect reciprocity for commitments made by them in the negotiations to reduce or remove tariff and other barriers to the trade of developing countries".
It also welcomed the recognition by the developed countries of the importance of maintaining and improving the Generalized System of Preferences and of "the importance of the application of differential measures to developing countries in ways which will provide special and more favourable treatment for them in areas of the negotiation where this is feasible and appropriate". The Conference recommended that the Director-General and the FAO bodies concerned with commodity questions should be guided by these principles in their contributions to the multilateral trade negotiations and to the FAO/UNCTAD intensive intergovernmental consultations and in all other related work. 64. The Conference believed that the FAO Secretariat should make an effective contribution both to the multilateral trade negotiations in the GATT and to the intensive commodity consultations to be held under the UNCTAD resolutions and requested the Director-General to give all possible support to these initiatives within the resources available. This contribution should not be limited to the provision of information and analysis, but should also include advice on policy alternatives. 65. The Conference felt that special attention should be given to the problems of instability of markets. The recent history of attempts to establish or renew international commodity agreements or other measures of market stabilization had been very disappointing. The new phase of instability in commodity markets could have serious repercussions on foreign exchange earnings and growth of trade and, for some commodities, could lead to increased competition from synthetics and substitutes. The Conference therefore felt that renewed efforts were needed to seek solutions and stressed the need for an examination in depth of the reasons for the lack of success hitherto of international efforts in this field. 66.
The Conference considered a document before it which contained a review of the current status of world fisheries and their problems. 67. The Conference unanimously recognized that the Technical Conference on Fishery Management and Development held in Vancouver had been valuable, and endorsed the recommendations made by that conference. It expressed its gratitude to the Government of Canada for hosting and financing the conference. 68. In considering problems of management the Conference noted that as a result of the Third Law of the Sea Conference FAO might have to play an increased role in studying problems of management and assisting countries and regional bodies in their solution, and that partial implementation of this role, since it would be technical, need not wait for the conclusions of the Law of the Sea Conference. The Conference hoped that FAO would be in a position to cope with these increased tasks. 69. The establishment of the Western Central Atlantic Fishery Commission by the Sixty-First Council Session was noted with satisfaction. In this regard the Conference emphasized the value of the regional fishery bodies established within the framework of FAO and endorsed their activities concerning the rational utilization of fishery resources. 70. The Conference expressed concern regarding the high level of spoilage during distribution of fish and wastage of valuable protein food through discarding of non-marketable fish at sea as trash fish. The need for FAO and other programmes to provide technical assistance to developing countries in improving marketing through better preservation, storage and distribution infrastructure was emphasized. In this connexion the hosting of the Technical Conference on Fishery Products by the Japanese Government (Tokyo, 4-11 December 1973) was commended. 71.
The Conference emphasized that the questions of development and management should be considered simultaneously in promoting the rational exploitation of fishery resources. Recognizing that the problems of fishery management would become more acute in the coming years as fishing effort continued to increase, it urged FAO to promote close monitoring of living aquatic resources on a continuous basis, serving the governments by keeping them abreast of scientific and technological developments relating to stock evaluation and management methods, by disseminating relevant information, and promoting scientific work on survey and evaluation of fishery resources through regional bodies, or field projects. 72. As regards unconventional species it was recognized that more surveys and technological investigations were required to make use of these resources aiming at maximum protein yields for human consumption. 73. The Conference drew attention to the need for assistance from FAO in the field of joint ventures. Undoubtedly such agreements had benefits for developing countries through the transfer of technology and training of local personnel, as well as for developed countries. However, joint venture arrangements were in some cases, through inexperience, agreed upon on terms which were unfavourable to the developing countries. The Conference therefore urged FAO to play a more active role in this field and to assist developing countries in negotiations leading to such agreements. In this respect, the Conference felt that although well prepared publications on this subject were useful, it was of greater practical importance that the Organization should always be ready to give help and advice, for instance by way of collecting data on trade, fishery products specifications and market prices which would serve to overcome the lack of knowledge on market outlets on the part of the developing countries. 74. 
The Conference recognizing the importance of an integrated approach to fishery development planning, emphasized the critical need for more and better statistics and renewed its request for additional specialized assistance from FAO to aid attempts to establish reliable national fishery statistics systems. The Conference, drawing attention to the increasing technical responsibilities of FAO, emphasized the benefits to be gained from an integrated multi-disciplinary approach to data collection, analysis and dissemination, which should embrace trade and economic data as well as resource statistics and environmental data. Attention should be paid to the biomass approach in the development of the scientific basis for management. 75. The high priority which must be attached to fisheries training and education, and to associated extension services, if national fisheries development plans were to be fulfilled, was emphasized by the Conference. The Conference urged that FAO should give even greater assistance than in the past in this respect, both at the national and regional level. 76. The work being carried out by FAO in assisting certain countries with perspective studies of agricultural (including fisheries) development was recognized by the Conference which requested the Director-General to expand and extend such work to other nations. 77. The Conference emphasized the importance of the artisanal sector of most developing fisheries for the production of high-value protein food for local consumption and export. The Conference stressed the need for intensified efforts in this sector in view of its potential for employment and for raising the standard of living in remote fishing communities. 
Although the artisanal fishery predominantly exploited fishing grounds and resources which were of minor interest or not accessible to industrial fisheries, the risk of competition and the resultant need for coordination of development activities relating to these two types of fisheries should be considered. Realizing the complex character of artisanal fisheries, the Conference stressed the importance of an integrated approach and recommended that FAO intensify its activities in assisting the development of artisanal fisheries including relevant background studies and extension work. 78. The Conference endorsed the activities of the International Indian Ocean Fishery Survey and Development Programme, the International Project for Development of the Fisheries in the Eastern Central Atlantic and the South China Sea Fishery Development and Coordinating Programme. 79. The Conference emphasized the importance of developing national capabilities to participate actively in all aspects of fishery research, exploitation and management, which entailed the training of scientists for fishery surveys, stock assessment and protection of fishery resources and aquaculture from pollution. The Conference stressed the need for assessing the training and manpower requirements of the developing countries in fisheries science and agreed that the regional fishery bodies established within the framework of FAO could be an effective mechanism in carrying out that assessment. 80. The Conference, noting that certain existing institutions could be developed into centres for training in various disciplines of fisheries science, and noting further that some governments were prepared to accept trainees for education in such centres, urged FAO to identify those centres to initiate training programmes under those governments, with FAO support through extra-budgetary funds when required. 81.
The Conference emphasized the importance of aquaculture in meeting the increasing demand of the world for high quality fish protein, especially at a time when population pressure might create a food shortage, and recommended that FAO intensify its activities in this sector, which would not only provide food but would also give further employment possibilities to the vast labour force available in developing countries as well as export earnings in these countries. The Conference noted that the cost of production through aquaculture should be reduced by the adoption of efficient techniques, and therefore adequate research support would have to be given by FAO as well as by governments in order to improve present culture techniques, seed production, and the preparation of inexpensive feeds and effective control of diseases. 82. The Conference noted that rational management of fishery resources and their speedy development required an adequate data base for decision-making and programme implementation. It recognized that the preparation of fishery development projects in developing countries was often based on inadequate data and therefore there was a very clear need for assistance in this field both by providing direct help in drawing up projects and in establishing biological, economic and technological data collection as well as catch and effort statistics that were required for fishery resource evaluation studies and management decisions. 83. The Conference noted with satisfaction FAO's leading role in the protection of living aquatic resources from pollution and the protection of the environment from degradation, and urged that FAO progressively increase, as required, its activities in this field in order to provide adequate advice for decision-making on management of the aquatic environment and on the regulation of waste disposal with the aim of protecting living resources and fisheries.
In this regard the Conference noted with approval that FAO was convening a consultation on the protection of living resources and fisheries from pollution in the Mediterranean. The Conference noted further that FAO maintained close collaboration with various other UN Organizations concerned with environmental issues and recognized that support by UNEP was required to strengthen some of the ongoing activities undertaken and the services maintained by FAO in this field that were of significant importance for the developing countries.
Corfu History from ancient times to today Corfu History begins over 3,000 years ago and is separated into periods. It is turbulent and fascinating, and despite numerous raids and attacks by barbarians and conquests by Europeans during the medieval period, Corfu has managed to survive and keep intact its Greek identity while incorporating into its culture the best elements of the civilizations that have passed through here. Prehistoric and Ancient times Corfu has been inhabited since the Stone Age. At that time it was part of the mainland, and the sea that today separates it from the mainland was only a small lake; it became an island after the rising of the sea at the end of the Ice Age, in about 10,000-8,000 BC. Evidence of Paleolithic occupation has been found near the village of Agios Mattheos in the southwest, and of Neolithic occupation near the village of Sidari. The Greek name of Kerkyra came from a mythological Nymph called Corcyra, a daughter of the river god Asopos. Corcyra was kidnapped by the god of the sea Poseidon, who brought her here and gave her name to the island. Corcyra became Kerkyra later in the Doric dialect. The first residents in the 12th century BC were the Phaeacians; the first founder was Phaeks, and his son was Nafsithoos, who was the father of the Homeric king Alkinoos, known from the Odyssey. King Alkinoos and his daughter Nausikaa helped Odysseus to return to Ithaca. At this point mythology gets muddled with history, and we do not know the exact origin of the Phaeacians, who according to Homer had some relationship with the Mycenaeans, although archaeological investigations have failed to find a link with any Mycenaean remains. Later more immigrants came from Illyria, Sicily, Crete, Mycenae and the Aegean islands. The Ancient times – the first Greek colonization In about 775 BC came the first Greek colonization, by Dorians from Eretria of Euboea, soon followed by more Dorian refugees from Corinth in 750 BC, who with their leader Hersikrates created a strong colony.
They dominated the area around Corfu, creating their own colonies, one of which was Epidamnus in ancient Illyria (today Dyrachio in Albania). The ancient city of Corfu was then in the area where Garitsa and Kanoni are today. Kerkyra (the Greek name for Corfu) was the first of the Greek cities to build a fleet of triremes, in 492 BC. In the lagoon of Chalikiopoulos, where the airport is today, the harbour was situated, home base of the strongest fleet of ancient Greece (second only to the Athenian navy), with more than 300 triremes and other vessels. The fast-growing colony quickly gained strength and openly challenged the metropolis of Corinth, and the unhappy Corinthians sent their fleet to occupy the island of Corfu and regain control of this strategic region, and especially the colony of Epidamnus. The first naval battle between Greeks, in 680 BC, was a failure for the Corinthians. After the battle both Corfu and Corinth sent ambassadors to Athens trying to gain its support; the Athenians preferred the great naval power of Corfu and chose to conclude a defensive alliance with the Corfiots, sending 10 triremes and later another 30. The alliance continued during the Peloponnesian war and lasted more than a century. The Corinthians came back in 435 BC with a strong fleet of 150 ships against the Corfiots; the battle raged near the coast of Lefkimi in the south narrow channel of the island and near Sivota off the mainland coast. When the right wing of the Corfiot fleet started to buckle, the 10 Athenian triremes, and shortly another 20, intervened, and the Corinthians decided to retreat. After that, in 375 BC, Corfu became a member of the Athenian Confederation and fought for Athenian interests during the entire Peloponnesian war.
The Corfiot issue, as the ancient historian Thucydides writes, was one of the causes of the thirty-year long civil Peloponnesian War that eventually weakened and broke Greece, but the real reason was the growing fear of Sparta about the expansionist imperialist policy of Athens that made the war inevitable. Roman era and early Byzantine period First Roman era (229 BC – 379 AD) After the Peloponnesian war internal political conflicts resulted in the disintegration of the alliance. The island was then captured by Illyrian pirates for a very short period, and the Romans exploited this opportunity and captured the island in 229 BC. The Romans gave autonomy to the Corfiots provided they were allowed to use it as a naval base. Corfu followed the fate of all other Greek city-states: they accepted the sovereignty and protection of Rome from the various invaders and intruders of that era. During the first century AD Christianity arrived, brought by two disciples of St Paul, Jason and Sosipatros. After the death of emperor Constantine in 337 AD the Roman empire divided into three sections: the north (Spain, France, England), the east (Konstantinople and Asia Minor) and the west, which included Greece, Italy and Rome's African territories. Corfu then was included in the so-called west empire. Early Byzantine period (379 AD – 562 AD) At the time of emperor Theodosius (379 AD) the Roman empire was re-divided into east and west. Corfu then belonged to the east empire, and this period, known as early Byzantine, lasted for about three centuries. During this period the whole island was exposed to frequent barbarian raids and pirate invasions. Middle Ages and Byzantine period East Roman empire (Byzantine empire)
In 562 AD, during one of these raids, the Goths destroyed the ancient city of Corfu, leaving the ruins that today are called Paleopolis. This was the end of the ancient city and the beginning of the medieval age for the island; the old city's remaining inhabitants abandoned the location. They fled further north to the natural promontory of land which later became the old fortress, and from there the city expanded until it covered the area where it is today. The period from 562 AD until 1267 AD, when Corfu was occupied by the Angevins, is known as the Byzantine period. It was a very difficult period for Corfu, which as the westernmost corner of the empire was very vulnerable to the constant pirate attacks and the various appetites of its neighbors. The multicultural Byzantine Empire tried to protect it in any way it could, stationing here several mercenary guards of various races and peoples. The guards consisted of Greeks from Syria, Bulgarians and Byzantine soldiers (stradioti), scattered in outposts that began in the northeast of the island and reached up to the southwest; the border guards slowly merged with the local population. This was the era when most of the fortresses scattered throughout the island were built; the redesigning and strengthening of the old Corfu fortress in the city happened then, and the Angelokastro fortress in northwestern Corfu, the fortress at Kassiopi, the fortress in Gardiki in the southwest and other smaller ones were constructed. The turbulent years after the Fourth Crusade (1204 AD – 1214 AD) In 1204 AD Corfu was captured by the Normans of the Fourth Crusade, and they were followed by the Venetians for a short period until 1214 AD. The Despotate of Epirus (1214 AD – 1267 AD) From 1214 to 1259 AD, Corfu became part of the Byzantine domain of Epirus (called the Despotate of Epirus), and at this time the fortress of Angelokastro was built on the northwest part of the island, north of Paleokastritsa, by the Despot Duke Michael II Angelos Komnenos.
Period of Sicilian rulers Another turbulent period followed from 1259 to 1267, with various Sicilian rulers, kings and admirals, attempting to claim Corfu: first Manfred of Sicily, followed by his Franco-Cypriot admiral Philip Cinardo, then the Garnerio brothers and finally Thomas Alamano. (Alamanos today is a very common surname in Corfu.) The House of Anjou (1267 AD – 1386 AD) In 1267 the Angevin King of Sicily, Charles of the house of Anjou, conquered the island. The island was divided into four departments-regions, called Gyrou, Orous, Mesis and Lefkimi respectively – names still heard today. That was the era when large numbers of Jewish people, mainly from Spain, settled in Corfu and created the Corfiot Jewish community. Charles of Anjou attempted to erase the Orthodox Christian faith by changing all Orthodox churches into Roman Catholic ones and persecuting all the Orthodox; this attempt failed and stopped later when the Venetians returned to the island. The Venetian domination in Corfu 1386 – 1797 AD The Council of Corfu, and especially the overwhelming majority of the nobility, were friendly with the Venetians. They did not expect protection from the collapsing Byzantine Empire, and because of the ever-present Turkish threat, they asked in 1386 AD for the protection of the Republic of Saint Mark. The Venetians knew that Corfu was a key strategic location to guard their naval interests in the region, and also a very fertile island for agriculture; therefore they bought the island from the kingdom of Naples, paying an amount of 30,000 gold ducats. They then disembarked their forces in Corfu, led by the "Admiral of the Gulf," Giovanni Miani.
In that turbulent era, when there was no national awareness, strange events happened, so that while the Venetians occupied the Old Fortress without resistance and secured their dominance over most of the island, in the north the fortresses of Angelokastro and Cassiope were still controlled by some Angevins who did not agree with the sale of the island; strangely, many locals supported them and fought alongside the Angevins against the Venetians. The Venetians sent an army to capture the two forts, and while Angelokastro surrendered almost immediately, the Angevins and Corfiots of Kassiope resisted furiously; the Venetians got so angry that after the conquest of the castle they destroyed it completely, and for this reason there are now only remnants of that fort. Thus started the second long period of Venetian rule in Corfu, which lasted more than 400 years – actually 411 years, 11 months and 11 days precisely. The Venetians established the feudal system of rule. There were three social classes: the nobility of aristocrats, the citizens (civili) and the poor people (popolari). In the next painting we see a typical snapshot of medieval Corfu, the street currently called Nikiforos Theotokis street; apart from the costumes, not much has changed since then. Agriculture developed with the planting of many olive trees, and Arts and Science were also evolving now that Corfu had links with one of the great empires. The Venetian era left indelible marks on Corfu in all areas, such as art, musical tradition, culture, the singing pronunciation of the language, Corfiot cuisine and, most noticeably, the architecture of the city and the villages.
The constitution during Venetian domination The constitution in Corfu, and in all the Ionian islands during the Venetian occupation, was exclusive: all political power was in the hands of the nobility. The only Venetians were the General Proveditor of the Sea, who wielded the greatest political power, and his Judiciary, flanked by the Vailos and his two consultants. All the rest were local nobles whose names were written in the Golden book (libro d'Oro). Centuries later, during the era of the second Ionian state, only the people whose names appeared on this list were allowed to take their coffee on the Liston area! In early editions of the Libro d'Oro the names of all the nobles of Byzantine origin, as well as Byzantine soldiers and large landowners, were written, but later many wealthy civilians who were able to offer financial support to the Treasury of the state were added too. If we look at the names in the libro d'Oro, we see with surprise that most names known in the city of Corfu today are written there, but few of the common village names. The migration flow from the Turkish-occupied Greece The Venetians did well to protect the city of Corfu, but despite their military measures, in the first centuries they failed to protect the island's countryside, which saw many tragedies and often paid a heavy toll in barbarian raids. It also suffered from pirate attacks, especially during the first two major Turkish raids, one in 1537 and the second in 1571. In 1537 AD the Turks invaded and seized 20,000 men from the countryside to sell as slaves in Konstantinople and Egypt. The countryside was devastated, so many Greeks from the Peloponnese, Epirus and Crete came as migrant workers to the island, and later became part of the resident population. More recently, especially under British rule, many immigrants came from the small Mediterranean island of Malta, the original home of many, mainly Roman Catholic, Corfiots.
Following the raids of 1537 Corfu was almost deserted, and a few years later, in 1571, the Venetians lost the Peloponnese, Crete and Cyprus, all conquered by the Turks. This created the inevitable large wave of refugees from these areas looking for a new home, and the Ionian Islands were the ideal destination; so by this coincidence the Turks both depopulated and helped repopulate Corfu. The Venetians also gave impetus to this migration stream for at least two additional reasons: firstly to revive the dead countryside, and secondly to encourage people with great spiritual, military, technical and economic potential to leave the Turkish-dominated land – which would also weaken the Ottoman occupiers and at the same time strengthen Venice. A large group of refugees came from Nafplio and Monemvasia; half of them settled in the area of Lefkimi and built the village of Anaplades, while the others scattered along the northeast coast, from Pyrgi up to Kassiopi. Their leader was the chieftain Barbatis, and the area south of Nissaki is called Barbati after him. There is a suburb north of the city called Stratia, formerly known as Anaplitochori. Another group from the Peloponnese built the village of Moraitika, took over the deserted village of Korakiana, and spread to other villages such as Benitses. Across the island there are many families with the surname Moraitis and also many whose last name ends with the Peloponnesian suffix "-opoulos". The largest group of all was from Crete; many settled in Garitsa, just south of the city, and the most prosperous new arrivals moved into the city itself. Others built the village of Saint Markos in the north above Ipsos, whilst in the south of Corfu the villages of Stroggyli, Messonghi, Argyrades and Kritika were also established by Cretans.
All these populations introduced elements of their tradition and culture to Corfu, especially the Cretans, who contributed much to the formation of the Corfu idiom, which in any case was constantly evolving; the prefix "chi" instead of "tis" is pronounced like this only in Crete and the Ionian Islands. After a time the Corfiot culture proved too strong, and all these people were absorbed into the local community and within a few years became regular Corfiots. Later on, around 1800, a large group of refugees from Souli, after its destruction by Ali Pasha, fled to Corfu, and most of them settled in Benitses; their descendants today constitute about 70% of the Benitses population. The Venetian fortifications and the frequent Turkish raids The Venetians tried to convert the population to Catholicism, but they did not succeed, and later for political reasons, as they had come into conflict with the Vatican and especially after the loss of Cyprus in 1571, they abandoned any such effort and justified this religious tolerance with the famous saying "Siamo prima Veneziani e poi Cristiani", which means: we are first Venetians and then Christians. Indeed, to be liked by both faiths they organized and established many common religious events in which both faiths took part; some of these events are still observed today. The failure of the Venetians to protect the countryside and suburbs of the town from Turkish incursions roused wide public discontent. Moreover, especially after the loss of Crete and Cyprus, Corfu was their most important possession after Venice herself, and therefore they decided to increase the island's defences. The Venetians made the most ambitious defense plans, constructing the largest and most modern fortifications of the age for Corfu. From 1576 to 1588 they built a new fortress on the hill of San Markos in the west of the town, then cleared the open space in front of the old fortress to make the vast Esplanade Square.
They joined the two fortresses with a wall that protected the whole city from the west, with powerful defensive systems like the bastions of Raimondos and St. Athanasius and the bastion of Sarantaris; they also built four main city gates for residents and two more gates for military purposes. The four main gates of the city were the Porta Reala, the Porta Raymonda, the gate of Spilia and the gate of Saint Nicholas. Porta Reala was of unique beauty and was demolished without reason in 1895, creating an international outcry. These defensive plans were made by the engineers Michele Sanmicheli from Verona and Ferante Vitelli. The fortifications were constantly enhanced, and later in the 17th century another wall was added outside the existing one, designed by the engineer F. Verneda, following the third great Turkish siege in 1716, which was successfully repulsed by the Prussian Marshal Johann Mattias Von Schulenburg, who then had responsibility for the defense of Corfu. After the Turkish invasion of 1716, the Venetians fortified the island of Vido too, and the hills of Avrami and Saint Sotiros; they also built a fortification for the area of San Rocco (today Saroko). The Turkish siege of 1716 The 1716 siege of Corfu was part of the Seventh Venetian-Turkish war; the occupation of this strategically important island would open the path for the occupation of Venice and then the rest of Europe. Turkish forces were estimated at 25,000-30,000 men, along with auxiliaries and irregulars, and 71 ships with about 2,200 guns; adding the ships' crews, they reached a total strength of 45,000-50,000 men. By contrast, the military forces of Venice numbered only 3,097 men, of whom only 2,245 were combatants. The New Fortress of Corfu, where the main fighting took place, had 144 guns and four mortars.
Marshal Johann Mattias Von Schulenburg, who had responsibility for Corfu's defense, managed to deal successfully with the chaos that prevailed among the local population, with locals trying in every way to leave the island or take refuge in the mountains. He immediately ordered the recruitment of those who were able to fight, and so secured several reservists and revived the morale of the besieged. The siege began on July 8th, when the Turks landed in Ipsos and Gouvia, and ended after many cruel and deadly battles on Saturday, August 22nd. Meanwhile, on the 20th of August an unprecedented storm scattered the Turkish ships and drowned many Turkish soldiers and sailors. This storm, and the salvation of the city, was attributed by the common people to a miraculous intervention by St. Spyridon, and ever since then there has been a litany and a procession of Saint Spiridon on 11th August. But despite popular belief, the historic truth is that there were two main causes of the Turkish defeat: first, the strong resistance by the defenders up to the last minute, and second, the defeat and destruction of the Ottoman army at Peterwardein by Eugene of Savoy, which forced the Turks to retreat. By yielding the victory to divine intervention, we misrepresent history and underestimate the heroism of the defenders. Final losses for the defenders were about 800 dead and 700 wounded, while Turkish losses were high and reached 6,500 men; among those killed was Muchtar, grandfather of Ali Pasha. Fighting alongside the Corfiots were Venetians, Germans, Italians, 4 Maltese ships, 4 Papal galleys, 2 galleys from Genoa, 3 galleys from Tuscany, 5 Spanish galleys and even Portuguese forces, who also participated before the end of the siege. The Jews of the city showed great courage in the fighting, equipped at the expense of the Corfiot Jewish community and under the leadership of the Rabbi's own son.
The General Proveditor of Corfu was Antrea Pizanis, who had the leadership of the light fleet, and the adjutant of Marshal Schulenburg was the Corfiot Lieutenant Dimitrios Stratigos. Marshal Schulenburg was honoured for his determination and bravery with a life pension from the Senate of Venice, and his statue can still be seen at the entrance of the Old Fortress; everyone else who showed bravery during the fighting was also honoured. The Turkish failure in Corfu was a historical event of enormous importance, a landmark that influenced the historical course of all Europe and especially of Greece. Very few know that without the bravery of the Corfiots and many Europeans the advance of the Turks would certainly not have stopped here, and the Ottoman Empire could have expanded instead of collapsing, with obvious implications for the nascent Greek nation and Europe itself. Unfortunately, historians have not treated it with the importance it deserves: the Turkish invasion of the West was permanently blocked, yet they overlook the fact that without this victory today's Greek state might not have existed! The repulse of the Turkish invasion of 1716 was a very important event for Western Europe in that era; it was celebrated with impressive events across Europe, and the oratorio Juditha triumphans by Antonio Vivaldi was written because of this event and played in all the major theaters for many years. This was the last of many Turkish attempts to expand their empire into Europe. The period of Venetian rule left many positive elements in the culture and civilization, but was also marked by many dark spots; there were numerous popular uprisings, mainly in the villages, due to the authoritarian rule of the Venetians and the arbitrariness and lawlessness of the ruling class of nobles. Relations between the people and the nobility were like those between slaves and masters, and there were many bloody uprisings. Corfu was very important to Venice, and remained an integral territory of the State until the fall of Venice to the French.
The Ionian State (Septinsular Republic) 1800-1807, United States of the Ionian Islands 1815-1864
The Venetian period was followed by the first French occupation in 1797. It was the end of the feudal system, and the people burned the Book of Gold (libro d'oro), in which all the aristocrats were listed; in a symbolic gesture the libro d'oro was burned in all the Ionian islands. The initial euphoria after the arrival of the French, who were welcomed as liberators, quickly turned to severe distress due to French arrogance towards the locals and heavy taxation. A period of instability followed, and the people were divided; the nobles began to exploit the popular discontent against the French and to plot for the occupation of Corfu by the Russians. They finally succeeded in 1799, when a strange alliance of Russians and Turks occupied Corfu. The Russian admiral Ousakof, of aristocratic origin, immediately restored the privileges of the nobility and later, on the 21st of March 1800, at the instigation of Ioannis Kapodistrias, then foreign minister of Russia, founded the Ionian State, also known as the Septinsular Republic. This was the first independent Greek state, something Kapodistrias envisioned as a harbinger of the rebirth of a Greek nation. It was a federation of the seven larger island states, Corfu, Kefalonia, Zakynthos (Zante), Paxos, Lefkada, Ithaka and Kythyra, and it also included all the other smaller Ionian islands; the capital was Corfu. This state remained until 1807, when the French under Napoleon returned and stayed until 1814. It was during this time that the two buildings which today form the famous Liston were built by the French for use as military barracks. In 1815 Corfu came under British rule: the seven-island Ionian state declared its independence under British protection, with Greek as the official language and Corfu town as the capital. The first "Lord High Commissioner of the Ionian Islands" was Lieutenant-General Sir Thomas Maitland.
The state's government had 29 members: 7 from Corfu, 7 from Kefalonia and 7 from Zante, while 4 were elected from Lefkada. Paxos, Ithaka and Kythyra elected 1 each, plus a second member elected in rotation by the three. The official name of the new protectorate was "United States of the Ionian Islands". During this period the Ionian Academy, the Reading Society and the public library were established. Under British rule the local economy developed well: the palace of Saint Michael and George was built, the road network of the island was expanded, and an aqueduct was constructed that supplied Corfu town with water from the hills around Benitses. Power plants too were built in Corfu, although after the union with Greece they were moved to Piraeus. Many other projects and significant improvements to the island's infrastructure were made during this period.
Modern Times, union with Greece
On 21 May 1864, after the Treaty of London and the positive vote of the Ionian parliament, Corfu and all the Ionian islands united with Greece. It was one of the most important turning points in the history of Corfu: the turbulent historical past of the island ended, and so ended the prominence of Corfu as capital of the Ionian state. The emergent Greek state could not afford the existence of two centres of economic and cultural strength, so in the contest with Athens Corfu lost its university, its fame and its cultural lead, and after just 40 years it became a Greek provincial town. But the memories of the glorious past remain, and this is what makes Corfu unique: a Greek island which does not look like the others.
A fanciful interpretation of the Salem Witch Trials. Is that light a comet or some other malignant force? Comets in pre-modern belief were harbingers of ill. They wandered into the affairs of the orderly cosmos of man and brought about disruption. H. G. Wells wrote his In the Days of the Comet as a utopian vision of how mankind could benefit from an alchemical transformation, in which elements in the comet affected the atmosphere of the earth and humanity breathed in sanity and breathed out irrationality. Certainly with World War One hovering over the heads of humanity, those were wistful and wise thoughts. Unfortunately it was not to be: war came, la belle époque was consumed in bloodshed, and the modern era was born. The world of the Gatsbys arose, of self-made men, hucksters, and flim-flam artists; the Great Depression followed, and then the Second World War, in which the old world powers were swept away, leaving the Soviet Union and the United States to struggle for the hearts and minds of humanity: Equality, Liberty and the Pursuit of Profits against Equality, Community and the Pursuit of Perfection. Profits, at least temporarily, won out over perfection; liberty and community still struggle with equality, which remains a common value of the age, if only given lip service. From BRITISH ART SHOW 7: IN THE DAYS OF THE COMET. Anja Kirschner & David Panos. I found an interesting blog by Margherita Fiorello that deals in medieval astrology, and it had a piece about an astrologer's view of Halley's Comet's appearance in 1301. This anonymous astrologer seems fairly typical, writing with some exactness about the position and reading into it portents that can only be called interesting. About the comet motion from North to South, I believe it's the attraction motion, i.e. the Comet is attracted by Mars, from which it was generated. Mars in fact, which was not exceeding the zodiacal southern latitude, was in aspect with the Comet, whose latitude was more than 20 N degrees.
For these reasons the Comet seemed to move from North to South toward East, so its eastern longitude grew and grew while northern latitude decreased and decreased. In the same way and for the same reasons its tail moved. In fact in the beginning of its appearance, its tail stretched toward North and following its motion moved Eastwards, inclining towards South to the star which is called Altayr, i.e. Vultur Volans, which has a longitude of 21.15 Capricorn and a latitude of 29.25 N. And in this way, slowly, it moved towards Mars. So, after having carefully considered the nature and the temperament of the producing planet and of the receiving sign of the comet and its motion and every other detail about its nature, which I omit in order to be brief, I will go to the judgement. So I say that this comet, for its different and several causes, means several accidents. It means in fact strong winds and earthquakes in the regions which are in familiarity and sometimes a dryness of the air preceding profuse rains, but this because of "accidens", i.e. because of southern and western winds, which will cause clouds and rains. And because of the corruption of the air, death and plague, famine and illness to the genitals, to the bladder and lungs and pains for parturient women and miscarriages and difficult deliveries and plenty of visions. It means that there will be many fights between powerful people, wars and murders, and the religion of the Moors will be weaker, and on the Earth thieves and robbers will be more and more. It means wars, quarrels and massacre, the death of kings, princes and nobles, the coming from the West of a King's enemy and the King's violence on his people and his lust for money. It means at last the destitution of courtiers and the unfairness of their acts that will correspond to a great hardship for them.
So, Mars was in Scorpio, in which it has many rights because it has here the triplicity and the domicile: having 8 points (5 because of domicile and 3 of triplicity) it will make stronger the meaning of the comet. These judgements are based on the most important astrologers, Ptolemy and Albumasar and Aly Habenragel. Giotto, Adoration of the Magi. Giotto's Comet: this beautiful fresco, the Adoration of the Magi on the walls of the Scrovegni Chapel in Padua, completed by the great Florentine Giotto di Bondone (1267-1337), has always been regarded as depicting Halley's Comet in its 1301 apparition. In 1993 Hughes et al. (Q.J.R. Astr. Soc. 34, pp. 21-32) suggested instead that it could be the comet seen at the beginning of 1304 (C/1304 C1). And many poets talked about the meanings of a comet. Virgil in fact wrote in the ninth book of the Aeneid: "Sanguinei lugubre rubent de nocte comete." Lugubre, gloomy: he used the name as an adverb. And he calls the comets "sanguineos," bloody, because they mean bloodshed. Claudianus adds, talking about the comet: "Et nunquam celo spectatum impune comete": a comet was never seen in the sky without a disaster. And Lucanus, talking about the wonders when the war between Pompeus and Caesar was near, says that the appearance of a comet means a change in the kingdom (Margherita Fiorello). Drawing by Peter Apian of the Comet of 1532. A more pedestrian interpretation sums up the ancient and medieval view of comets. This is from "Unexpected Visitors: The Theory of the Influence of Comets." The ancient Greeks had a method to anticipate them from ingresses and ideas about their significance based on their colors and shapes, but theirs was an astrology and astronomy of the naked eye, far freer of pollution and night light than ours. Atmospherics certainly played a role in their observations and their interpretation. The Greeks and later traditional Medieval and Renaissance astrologers thought them wholly malefic.
These same astrologers held that comets and other portents in the heavens were fleeting appearances of the sublunary sphere: an event for our punishment or (very rarely) benefit from the Logos, appearing in the space between the Earth and Moon. In their model of the heavens, change does not occur beyond the sphere of Luna except for the movements of the planets. The effects of comets were supposed to last for 1/8 of their period; to the ancients this would most likely have been their period of visibility, and to begin in earnest when the Sun or Mars transited their place of closest approach to the Sun, or perihelion. Their appearance was heralded by disturbances in humans, animals, and the weather. The comets then dispensed, by perihelion position and their dispositor, their good or ill effects - usually ill. They also often heralded the rise of an agent. This agent could be a war leader but might, depending on the position of the comet, show a religious leader, reformer, or great trader (Jonathan Flanery). I couldn't resist adding this image of the reputed cause of the most famous conflict in Florence and Italian medieval history. I don't know if it was preceded by a comet, but Villani does refer to the fatal statue of Mars, and Villani is a firm believer in astrology. The Buondelmonte murder, from an illustrated manuscript of Giovanni Villani's Nuova Cronica in the Vatican Library (ms. Chigiano L VIII 296 - Biblioteca Vaticana). "In the year of Christ 1215, M. Gherardo Orlandi being Podestà in Florence, one M. Bondelmonte dei Bondelmonti, a noble citizen of Florence, had promised to take to wife a maiden of the house of the Amidei (Par. xvi. 136-144), honourable and noble citizens; and afterwards as the said M.
Bondelmonte, who was very charming and a good horseman, was riding through the city, a lady of the house of the Donati called to him, reproaching him as to the lady to whom he was betrothed, that she was not beautiful or worthy of him, and saying: "I have kept this my daughter for you;" whom she showed to him, and she was most beautiful; and immediately by the inspiration of the devil he was so taken by her, that he was betrothed and wedded to her, for which thing the kinsfolk of the first betrothed lady, being assembled together, and grieving over the shame which M. Bondelmonte had done to them, were filled with the accursed indignation, whereby the city of Florence was destroyed and divided. Here's Giovanni Villani himself. Florence lost this gifted (if not always nonpartisan) historian in the terrible Black Death of 1348. For many houses of the nobles swore together to bring shame upon the said M. Bondelmonte, in revenge for these wrongs. And being in council among themselves, after what fashion they should punish him, whether by beating or killing, Mosca de' Lamberti said the evil word (Inf. xxviii. 103-111; Par. xvi. 136-138): 'Thing done has an end'; to wit, that he should be slain; and so it was done; for on the morning of Easter of the Resurrection the Amidei of San Stefano assembled in their house, and the said M. Bondelmonte coming from Oltrarno, nobly arrayed in new white apparel, and upon a white palfrey, arriving at the foot of the Ponte Vecchio on this side (Par. xvi. 145-147), just at the foot of the pillar where was the statue of Mars, the said M. Bondelmonte was dragged from his horse by Schiatta degli Uberti, and by Mosca Lamberti and Lambertuccio degli Amidei assaulted and smitten, and by Oderigo Fifanti his veins were opened and he was brought to his end; and there was with them one of the counts of Gangalandi. For the which thing the city rose in arms and tumult (cf. Par. xvi. 128); and this death of M.
Bondelmonte was the cause and beginning of the accursed parties of Guelfs and Ghibellines in Florence, albeit long before there were factions among the noble citizens and the said parties existed by reason of the strifes and questions between the Church and the Empire; but by reason of the death of the said M. Bondelmonte all the families of the nobles and the other citizens of Florence were divided, and some held with the Bondelmonti, who took the side of the Guelfs, and were its leaders, and some with the Uberti, who were the leaders of the Ghibellines, whence followed much evil and disaster to our city, as hereafter shall be told; and it is believed that it will never have an end, if God do not cut it short. And surely it shows that the enemy of the human race, for the sins of the Florentines, had power in that idol of Mars, which the pagan Florentines of old were wont to worship, that at the foot of his statue such a murder was committed, whence so much evil followed to the city of Florence. The accursed names of the Guelf and Ghibelline parties are said to have arisen first in Germany by reason that two great barons of that country were at war together, and had each a strong castle the one over against the other, and the one had the name of Guelf, and the other of Ghibelline, and the war lasted so long, that all the Germans were divided, and one held to one side, and the other to the other; and the strife even came as far as to the court of Rome, and all the court took part in it, and the one side was called that of Guelf, and the other that of Ghibelline; and so the said names continued in Italy" (Villani). Image credit: NASA/JPL. Woodcut showing the destructive influence of a fourth-century comet, from Stanislaus Lubienietski's Theatrum Cometicum (Amsterdam, 1668). The above will give the reader some insight into Florence.
There was no lack of disasters in Italy in that year, 1301: the occupation of Florence by the French representative of the Papal authority, and the loss of power by the White Guelphs, to whom Dante belonged, replaced by the Black Guelphs, who allied themselves with the Papal legate in order to gain control of Florence and persecute their enemies, the Whites. I also find the reference to Virgil satisfying, and I intend to return to that revolutionary period in Roman history with an eye to the literary angle rather than focusing so much on the politics. Certainly Virgil was initially something of a pacifist and spiritual idealist, going off to live in an Epicurean community in Naples to escape the conflict between Caesar's adherents and those of the old Republican order. In my own life, after the conflict in Vietnam had ended, I temporarily left the life of radical politics to retreat to a commune in Colorado, in an attempt to create some idealized cooperative society under the sheltering parental guidance of a gnostic spiritual vision. I eventually rebelled at the direction of the community, a certain Maoist anti-intellectualism, and at my own impatience with sitting out of world affairs, at least as I saw it, by not participating in the radical politics of the day. Perhaps that is what drove Virgil into the affairs of state, or perhaps it was merely self-interest, a desire to regain properties in his native Mantua that had been confiscated and given to war veterans of the victorious Octavian. Dante had, upon exile from his native Florence, joined briefly in White and Ghibelline conspiracies to regain control of the city. He soon became disillusioned with their vain efforts and spent the rest of his life writing his famous literary works and advocating the return of a worthy Emperor to restore order to Italy. I am now in my own way retired from active battle, doing my part as a literary warrior. Masonic initiation.
Paris, 1745. I am still somewhat obsessed with medieval Florence. But this is about comets, and the times. I cannot say much about our own times, not being aware of any particular comet, although I am sure there are comets galore with the advances in astronomy. Listening to an audiobook version of War and Peace as I write, I am captured from time to time by the plot and distracted from my writing. I found the descriptions of Pierre's spiritual journey with the Masons reminiscent of my own adventures with the Ministry. He also wanted to work on the political level rather than at the boring and tedious task of self-improvement. Youth wants change to be rapid and revolutionary, and for a young man to live in interesting times is not a curse but a relief. And as I have indicated previously, I in my own way continue my spiritual quest, expecting less, and with many regrets over failures, especially in the personal realm of family. Family, as Tolstoy constantly reminds us in his great work, is of such importance. Having just returned from the east coast and a visit to my own mother and sister, I am confronting the remains of those youthful devils that still cling to the soul, like Pierre's dream dogs biting at his heels (Tolstoy 408). Pierre Bezukhov at the Noble Assembly - illustration by the artist A.P. Apsit from the book Leo Tolstoy, "War and Peace", publisher "Partnership Sytin", Moscow, Russia, 1914 - stock photo. But back to the comet issue. Pierre, riding home on a sleigh, observes the comet of the winter of 1811-1812, reflecting on its portending disaster, yet falling in some kind of love with the foolish Natasha: "in Pierre's heart that bright comet, with its long luminous tail, aroused no feeling of dread" (Tolstoy 562). As well it should not have for the Russians; but for Napoleon, it was of course a very bad year. Now I must move on, leaving Pierre to his thoughts, and consider: could the plague have come from outer space, via comets?
I love digging around on the web and finding all these other people who are pondering the different angles. It makes things hard for copyright protagonists, and academics will decry such public pandering without any fees attached, but I use my access to university sites as a student for some material, and Google for the rest, seeking other seekers. This is from Joseph and Wickramasinghe's article "Comets and Contagion: Evolution and Diseases From Space." [P]lagues are all bacterial diseases which are spread by infected fleas, by contact with the body fluids of infected people and animals, and by inhaling infectious droplets in the air. How did fleas come to be infected? Were they also contaminated by pathogens in the air? Bacteria and Viruses From Space? Yersinia pestis is one of the causative agents of plague. Yersinia pestis is anaerobic and must live within host cells during the infective phase of its life cycle (Brown et al., 2006; Perry and Fetherston 1997; Wickham et al., 2007). Infection takes place through a syringe-like apparatus by which the bacteria can inject bacterial virulence factors (effectors) into the eukaryotic cytosol of host cells. Yet, as they are anaerobic, Yersinia pestis (and other pathogenic bacteria) are completely dependent on their host species, and cannot be propagated over evolutionary time if the host dies (Brown et al., 2006). Thus it must be asked: what is the origin of these plague-inducing bacilli which periodically infect and kill huge populations over diverse areas, and then reemerge hundreds of years later to attack again? In fact, Yersinia pestis is the causative agent responsible for at least three major human pandemics: the Justinian plague (6th to 8th centuries), the Black Death (14th to 19th centuries) and modern plague (21st century). Yersinia pestis infected flea.
The keys to unlocking this mystery may include the fact that these microbes are anaerobic (Brown et al., 2006), resistant to freezing (Torosian et al., 2009), and that they periodically obtain many of their infective genes from other bacteria and viruses, such that their genome is in flux and undergoes periodic rearrangement following the addition of these genes (Parkhill et al., 2001). A major anaerobic, freezing environment is located in space. Therefore, could these microbes have originated in space? A variety of microbes have been discovered in the upper atmosphere, including some that are radiation resistant (Yang et al., 2010), at heights ranging from 41 km (Wainwright et al., 2010) to 77 km (Imshenetsky, 1978), and thus in both the stratosphere and the mesosphere, which is extremely dry, cold (−85 °C, −121 °F), and lacking oxygen. It is in the mesosphere that meteors first begin to fragment as they speed to Earth (Wickramasinghe et al., 2010). Could these upper atmospheric microbes have originated in meteors or other stellar debris? Or might they have been lofted from Earth to the upper atmosphere?" (Joseph and Wickramasinghe). Contours of the spread of the Black Death. The Black Death (1334-1350 AD), for example, has all the hallmarks of a space-incident component or trigger. That this disease spread from city to city has been well documented (Kelly 2006; McNeill 1977). However, the progression of the disease did not follow contours associated with travel routes, displaying a patchiness of incidence including zones of total avoidance (Figure 6). Moreover, the pattern of infection appears to travel the course of prevailing winds (Figures 7 and 8). This does not accord with straightforward infection via a rodent/flea carrier, as is conventional to assume. Hoyle and Wickramasinghe (1979) interpreted these patterns as indicative of a space-incident bacterium.
1577 Great Comet. Woodcut by Jiri Daschitzsky, Von einem Schrecklichen und Wunderbahrlichen Cometen so sich den Dienstag nach Martini M. D. Lxxvij. Jahrs am Himmel erzeiget hat (Prague (?): Petrus Codicillus a Tulechova, 1577). I am not so sure I am convinced by this, but it is from an academic source, and so should be taken seriously. It certainly puts a different twist on my previous posting. I am not going to list all their sources, so if you want to see them go find the article online; I have information about it below. I am also reading volume two of Hajo Holborn's A History of Modern Germany, about the aftereffects of the Thirty Years War, the latter a subject I had read about in the last year or so. Incipient Germany and the shattered remains of the Holy Roman Empire are such a complex jigsaw puzzle. It is impossible to read this history without recourse to a map, simply to place oneself in the setting, at least mentally. Only a few chapters into the book, I find my pedantic side attracted to the satisfactory experience of placing the pieces of the German jigsaw in their appropriate places. And now I shall quote a line, more or less at random; actually not at random, for these comments about the aftereffects of the Thirty Years War remind me of our own times. The Miseries of War, No. 11, "The Hanging," Jacques Callot, 1632 (published in 1633). In many respects it had been a new discovery to find that it was physically possible to siphon off so much money from the population. Public finance, including taxation, had been in its infancy before the war. Now it became a deliberate, if still clumsy, art. A century earlier it had been a widely held opinion that the prince was to defray the expenses of government with his own income from domains - mining rights, monopolies, tolls, etc.
- usually called the 'camerale,' and that taxes were to be levied only for extraordinary purposes, such as defense... Throughout the war taxes had gone up, and even at the end of the war it was impossible to return to the earlier level. Payment of debts, resettlement of the population, land improvement, and maintenance of troops - all these called for revenue... The princes now demanded them as a matter of right and also claimed discretion in the use of the tax income (Holborn 43-44). The comet of 1618 was associated with the coming "end of the world" and with spreading death and disease during the Thirty Years War. Through concessions and compromises, the princes won the battle to establish standing armies. Once a standing army - a 'miles perpetuus,' as it was called at the time - had been created with the assent of the estates, it became self-perpetuating; it gave the prince a weapon that could be used against the estates, especially since it could sometimes be financed by foreign subsidies (45-46). Bonus Army marchers confront the police. This naturally brings to mind the military-industrial complex, but even more, thinking back in history, Hoover called out the army in 1932 to destroy the Bonus Marchers, who had marched on Washington, DC to demand immediate payment of the veterans' bonuses promised to soldiers who had served in World War One. The DC police could not remove them from their encampments, and the army was called in, led by Douglas MacArthur, who ordered Major Patton to clear the campsites. Patton did so with a cavalry charge followed by six tanks and then infantry with fixed bayonets and tear gas. Without a standing army this might have had a negotiated solution. Certainly it was one factor in Hoover's defeat in that year's elections. Roosevelt, the next year, upon another march, gave the marchers a campsite and meals. He sent Eleanor Roosevelt to meet them, and she was able to offer them entry into the Civilian Conservation Corps.
"One veteran commented: 'Hoover sent the army, Roosevelt sent his wife.'" (Wikipedia, Bonus Army). I am not even going to look for a comet to determine the fate of the Bonus March; perhaps an intrepid astrologer can look up predictions from the time and see if they can post-prognosticate on this. This photograph of Halley's Comet was taken January 13, 1986, by James W. Young, resident astronomer of JPL's Table Mountain Observatory in the San Bernardino Mountains, using the 24-inch reflective telescope. "Bonus Army." Wikipedia. Web. 12 Jan. 2014. Fiorello, Margherita. "A Medieval Astrologer about Halley's Comet in 1301." heavenastrolabe.net. 24 Feb. 2009. Web. 12 Jan. 2014. Flanery, Jonathan. "Unexpected Visitors: The Theory of the Influence of Comets." Web. 11 Jan. 2014. Holborn, Hajo. A History of Modern Germany 1648-1840. Princeton: Princeton U.P., 1964. Print. Joseph, Rhawn, and Wickramasinghe, Chandra. "Comets and Contagion: Evolution and Diseases From Space." Journal of Cosmology 7 (2010): 1750-1770. journalofcosmology.com. Web. 12 Jan. 2014. Tolstoy, Leo. War and Peace. Trans. Constance Garnett. New York: The Modern Library, 1931. Print. Villani, Giovanni. Villani's Chronicle. Trans. Rose E. Selfe. Ed. Philip H. Wicksteed. The Project Gutenberg eBook. casasantapia.com/art/nuovacronica/nuovacronica.htm. Web. 12 Jan. 2014. Wells, H. G. In the Days of the Comet. London: The Century Co., 1906. Web. 12 Jan. 2014.
S: Armenian ABC Culture Book BC: The Armenian ABC Culture Book | By Michelle Mayes FC: The Armenian ABC Culture Book 1: A is for Art | Armenian art is layered with purpose, honor, and tales from a sorrowful past. Armenian art has a complexity that gives a look into the people and their struggles through song and cinema. With Armenia's dark history of genocides, massacres, and controlling governments, songs were the one thing that couldn't be taken away from the people. (Dr. Kouymjian) The songs reflected their cry to God for help and salvation. Armenian art guides the culture with feelings of peace toward the thoughts and lives of other cultures. Through Armenian cinema and film production the people are able to tell the story of their country to the world while remaining open to the ideas of others. (ARMENIAN CINEMA) Armenian cinema's goal is to let the world remember the past, and to keep what is dear close to the heart. The Armenians share a beauty with the world that is hard to detect because they are a proud and unique people. For all the boundaries and walls they have broken down, their art creates a connection that touches the world in ways that will continue to develop throughout the years. 2: B is for Building | Armenian architecture was one of the first art mediums to be studied. The architecture receives more scholarly attention than all of Armenia's other arts combined. It is not just size that makes this medium fascinating, but the fact that other works are nothing in comparison. The labor and time put into the creation of the buildings required a creative and active mind. (Dr. Kouymjian) 3: C is for Communication | During the time of Soviet rule the USSR blocked all new world technology from Armenia's people. The Soviet Union wanted complete, dominating control over what the people felt, saw, and heard. The Soviet grip left a tight mark that cut the Armenian people off from developing their communication with the rest of the world.
(CIA) After the fall of the Soviet Union and the declaration of Armenia's independence on September 21, 1991, new ways of communicating came flooding in from all angles. (Library of Congress) By 2004 the country had become dominated by cellular phones, television broadcasting, and radio broadcasting. (CIA) With new modern technology the Armenians were able to control what they felt, saw, and heard. This new power gave them the opportunity to become less dependent on their government for information. | Armenian Television Logo 4: D is for Dress | Armenian women traditionally had three goals in life: getting married, having babies, and raising and taking care of a happy family. From generation to generation young women have been in "training" for their wifely and motherly duties. One form of training that started three thousand years ago is dressmaking. Dressmaking is one of many symbols that characterize the women of Armenia. Armenian women have very few opportunities with the outside world, and still create their clothing much as the generations before them made their dresses. The Armenians utilized wool and fur, and later cotton grown in the fertile valleys, to make dresses. Royalty used silk imported from China during the Urartian period. Later the Armenians cultivated silkworms and produced their own silk. (Armenian Heritage Organization) Dressmaking created a way to view women from all walks of life, and created yet another way for women to see the arts of the outside world. (Advantour) Armenian dressmaking adds another colorful branch to a culture proud of its heritage. 5: E is for Economy | Ever since the collapse of the USSR in 1991 there has been an ongoing decline in Armenia's growth rate. Under what used to be the Soviet Union, Armenia developed an industrial sector supplying machine tools, textiles, and other manufactured goods in exchange for raw materials and energy.
Armenia’s farms are in need of updated technology, and many of its factories have been sold to the Greeks. Today, though, many measures have been put into place to help economic growth. Recently the Armenians have been able to stabilize a currency system called the Dram. (Armenian Tour) 6: F is for Family | Tradition holds deep meaning in everyday life for the Armenian people. Generation after generation, lessons and skills have been passed down to create a wholesome family life. An average Armenian family consists of a man, a woman, and at least 2-3 children. The male’s task is to fulfill the material needs of the family: making economic decisions, banking, household repairs, and taking out the garbage. (Markarian) The woman’s task is to cook, clean, take care of the children, and get groceries for the home. A female child is taught how to sew at a young age, starting with making her dolls’ clothes. She is also taught how to play house, which will later become her full-time “job,” and the oldest females are expected to take care of the younger children. A male child’s job is to eat and grow strong, but also to get a good education. (Markarian) 7: G is for Government | The citizens of the Armenian Republic elect the President of Armenia for a 5-year term of office. The President makes sure the constitution is being followed and provides for the regular functioning of legislative, executive, and judicial authorities. The executive power is composed of the Prime Minister and Ministers. The President appoints the Prime Minister, the person nominated by the largest number of National Assembly members. (Armenian Government) The President appoints and discharges members of government on the Prime Minister’s proposal. The National Assembly (legislative power) consists of 131 deputies (75 of whom are elected on the basis of proportional representation and 56 by majority representation). The National Assembly is elected through general elections for a term of five years. 
Parliamentary elections were last held in 2007. (Armenian Government) The judicial power consists of the courts operating in the Republic of Armenia: the first instance courts of general jurisdiction, the courts of appeal, the Court of Cassation, as well as specialized courts in cases prescribed by law. (Armenian Government) | Yerevan: Armenian Capital 8: H is for History | What a proud people come from a world filled with unresolved troubles. From 2100 B.C. to the present day, horror and grace have swept through the Armenian people. The first records appeared when the Armenian land was claimed to be the spot where Noah's Ark settled back down from the great flood. With the knowledge of their roots tracing back to Noah and his descendants, the Armenians knew they were an important cultural group. (Babayan) Around 66 A.D. the first apostles to the Armenian culture, Thaddeus and Bartholomew, came to Armenia to preach the word of God. The apostles Thaddeus and Bartholomew were soon captured and murdered for their spread of God's word. (Babayan) However, the apostles were said to be respected men in Armenia, and a couple of centuries later Armenia became the first Christian nation. From 1915 through 1991 Armenia had to deal with genocide and Soviet rule. Many of the problems stemmed from the root cause of Armenia not having a strong government system. Armenians were easily overrun and taken captive. For years they suffered the stranglehold of a communist government, and the massacre of almost all of the Armenian Christian population. In 1991, though, after the fall of the Soviet Union, Armenia quickly seceded and became the Armenian Republic. Not even a month later the first presidential elections took place, and Levon Ter-Petrossian became the first elected president. 
(Bournoutian) 9: I is for Icons | Coat of Arms | Mount Ararat, where Noah's Ark came to rest | Armenian Flag | Cinema icon Cher | Christianity 10: J is for Jobs | "The Centre for Gender Studies reports that men are routinely the preferred candidates in hiring, which is conducted by predominantly male bosses. An overwhelming number of women in Armenia occupy low-skilled positions. According to Barbara Merguerian of the Armenian International Women's Association, almost as a rule, even the most educated women are left out of the highest-paid and executive positions. Women predominate in non-managerial positions in manufacturing, primary and secondary education, and in health centres -- jobs which typically pay the lowest salaries. Women with higher education are often forced to work as restaurant cooks, provide cleaning services or do handicrafts. Moreover, there are no mechanisms to enforce the anti-discrimination labour laws -- labour rights violations are commonly not reported and no measures are being taken to improve the situation." (IWRAW) 11: K is for Knowledge | "According to the Constitution of the Republic of Armenia the secondary Education is compulsory and free of charge. Secondary Education in the republic contains 3 levels. Duration of study in secondary schools is 10 years. The first level is Elementary School (1-3 grades). The basic goal of this level is to provide literacy for pupils. The second level is Basic School (4-8 grades). The goal of secondary schools is implementation of general education. Students, who accomplish 8 grade, obtain the certificate of 8-year education (incomplete secondary education) and they can continue their study in the high school as well as in specialized secondary and technical secondary educational institutions. The third level is High school (9-10 grades). 
The high school offers complete secondary education, which is realized in the following forms: general education study; colleges (deepened study of some subjects). Graduate students of the high school obtain a certificate of complete secondary education, which is called Attestat/Certificate of maturity "Hasunutian Vkaiakan", and can continue their study in the Higher educational institutions. There are 1418 secondary schools, 1 gymnasium, 1 lyceum, 25 state colleges, 4 academic schools, 11 private high schools, 15 private schools. There are deepened study forms in 197 high schools-colleges (74 humanitarian, 68 physics-mathematics, 53 natural sciences, 22 economics) in the Republic of Armenia." (Ministry of Education and Science of Armenia) 12: L is for Language | Nearly 9 million Armenians in the Republic of Armenia and around the world speak Armenian. Based on its etymological characteristics, Armenian is considered a branch of the Indo-Hittite language group. It is also said to be an independent branch within the family of Indo-European languages. The Armenian alphabet contains 38 letters, and the language is one of the world’s richest. It is made up of 7 vowels and 31 consonants. With additional digraphs, the sounds of the language total 40. (Nenejian) 13: M is for Movement & Migration | Many Armenians migrated to the United States because of the horrors going on in their own country. During the first Turkish invasion in 1915 many Armenians fled the country to save their families' lives and their own. Many men fled Armenia during this time because they did not want to be forced into the Turkish army. (Takooshian) Another big migration period took place with the rise of the Soviet Union. The first record of migration occurred when Noah’s Ark settled on Mount Ararat, and his descendants started the Armenian population. 
Throughout the period of the Turkish invasion and the Soviet Union, immigration was almost unheard of, but in recent years the immigration numbers have increased. (Yeghiazaryan) 14: N is for National Pride | Throughout history Armenians have stood by their culture, and those who left never forgot where they came from. Armenians are a proud and unique people. They have their own way of life, language, and art. From the beginning of time they have captured an essence that many Americans and other countries have missed. They are united by their common culture and background. 15: O is for Organization | Charitable United Armenian Fund Armenian General Benevolent Union (AGBU) Armenian Relief Society (ARS) Fund for Armenian Relief (FAR) Lincy Foundation Armenian EyeCare Project Fuller Center for Housing Armenia Habitat for Humanity (HfH) Cafesjian Family Foundation Fast For Armenia Armenian Educational Relief Foundation (AERF) | Religious Armenian Missionary Association of America (AMAA) Diocese of the Armenian Church of America Prelacy of the Armenian Apostolic Church of America Congregation of Mekhitarist of San Lazzaro, Italy Armenian Bible Church | Other HENQ Armenian Bone Marrow Donor Registry Armenian Black Belt Academy of Nagorno Karabakh Ararat-Eskijian Museum (Granada Hills, California) Armenian Center for National and International Studies (ACNIS) Armenian Educational Foundation (AEF) Armenian International Policy Research Group (AIPRG) Armenian Library and Museum of America (ALMA) Knights of Vartan Naregatsi Art Institute National Association for Armenian Studies and Research (NAASR) Armenian General Athletic Union and Scouts (Homenetmen) Project SAVE BOYCOTT TURKEY Campaign 16: P is for Population | Today the population of Armenia is about 3.5 million people. However, the Armenian Diaspora worldwide totals more than 10 million: in the days of the genocide many Armenians were forced to leave their native land. 
Over 95% of the population of Armenia is ethnic Armenian; the rest of the population is represented by Azerbaijanis, Greeks, Assyrians, and Russians. During the ethnic conflicts of 1989–1993 almost all Azerbaijanis left the country. 17: Q is for Quality | Literacy: definition: age 15 and over can read and write total population: 99.4% male: 99.7% female: 99.2% (2001 census) | Infant mortality rate: total: 19.5 deaths/1,000 live births country comparison to the world: 102 male: 24.16 deaths/1,000 live births female: 14.23 deaths/1,000 live births (2010 est.) | Sex ratio: at birth: 1.133 male(s)/female under 15 years: 1.15 male(s)/female 15-64 years: 0.88 male(s)/female 65 years and over: 0.62 male(s)/female total population: 0.89 male(s)/female (2010 est.) | Death rate: 8.42 deaths/1,000 population (July 2010 est.) country comparison to the world: 88 | Birth rate: 12.74 births/1,000 population (2010 est.) country comparison to the world: 159 | Population: 2,966,802 (July 2010 est.) country comparison to the world: 137 18: R is for Religion | Public Holidays 2010: January 1–2 (New Year), January 6 (Christmas), January 28 (Army Day), March 8 (Women’s Day), April 2–4 (Easter), April 24 (Armenian Genocide Commemoration Day), May 1 (Labor Day), May 9 (Victory and Peace Day), May 28 (Declaration of the First Armenian Republic Day), July 5 (Constitution Day), September 21 (Independence Day), December 31 (New Year's Eve). | Armenia's cultural religion is Christianity. | "Armenia: Food and Holidays." World Geography: Understanding a Changing World. ABC-CLIO, 2010. Web. 13 Dec. 2010. | Referendum Day: Celebrated on September 21, this official national holiday—also known as Independence Day—marks the anniversary of the date in 1991 when Armenians voted in a national referendum to establish the country's independence from the crumbling Soviet Union. 19: S is for Status 
| Economic downfall since 1991 20: T is for Taboos | Divorce and separation are a new phenomenon and still a taboo. | Many health topics remain taboo. Hypertension, diabetes or minor surgeries are talked about. Great efforts are deployed in order to hide any case of epilepsy, cancer, AIDS or psychiatric disorders (including depression) in the family. Such words are never pronounced out loud. | Sexual education is absolutely out of the question. Children are taught to behave, otherwise they will be taken to the doctor or the dentist and will get an injection! | A young woman cannot confide in her father about her personal problems. | It is still dangerous to hold public discussions on the Armenian Genocide in Turkey. | They never talk about child prostitution; it's a taboo subject. 21: U is for Urban or Rural | The urban population makes up 64% of Armenia's population. Much of Armenia is urban, but many of the rural areas are poverty stricken. With Armenia being such a small country, many urban and rural areas tend to overlap. (CIA) 22: V is for Vacation & Recreation | Athletics Basketball Fitness (Health) Football/Soccer Golf Martial Arts Motorsports Outdoors (Recreation) Racket Sports Scouting (Recreation) Sporting Goods (Shopping) Water Sports Winter Sports | Recreation | Geghard Monastery | Haghartsin Monastery | Haghpat Monastery | Tatev Monastery 23: W is for Ways of Life | Most city-dwellers live in apartment buildings that were built during the Soviet period; many of these are now dilapidated. Rural residents live mostly in single-family houses, and many members of an extended family often live together. Family and friends are the center of social life, and respect for elders links generations. (Countries Quest) 
24: X is for X-Marks the Spot | Natural hazards: occasionally severe earthquakes; droughts. Environment - current issues: soil pollution from toxic chemicals such as DDT; the energy crisis of the 1990s led to deforestation when citizens scavenged for firewood; pollution of the Hrazdan (Razdan) and Aras Rivers; the draining of Sevana Lich (Lake Sevan), a result of its use as a source for hydropower, threatens drinking water supplies; restart of the Metsamor nuclear power plant in spite of its location in a seismically active zone. Environment - international agreements: party to: Air Pollution, Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Desertification, Hazardous Wastes, Law of the Sea, Ozone Layer Protection, Wetlands; signed, but not ratified: Air Pollution-Persistent Organic Pollutants. Geography - note: landlocked in the Lesser Caucasus Mountains; Sevana Lich (Lake Sevan) is the largest lake in this mountain range. | 40 00 N, 45 00 E 25: Y is for Yum | APPETIZERS | SOOJOUKH - An Armenian air-dried sausage | TOORSHI-2 - Armenian pickled vegetables | STUFFED ZUCCHINI WITH YOGURT SAUCE | HARPOOT KEUFTA - Armenian stuffed meatballs from the village of Harpoot! | EGGPLANT DOLMA - A taste sensation! | LULU KEBAB - Middle Eastern burger on a stick! | MEATS | BREADS | BRAIDED BREAD STICKS - An Armenian favorite | KATAH - Armenian hard sweet rolls | SIMIT CHORAG - An orange-flavored Armenian sweet roll. 26: Z is for Ztuff 27: Bibliography | "Introduction, Arts of Armenia (c) Dr. Dickran Kouymjian, Armenian Studies Program at Cal State University, Fresno." Armenian Studies Program, California State University, Fresno. Web. 13 Dec. 2010. 28: "Armenia - Independence." Country Studies. Web. 13 Dec. 2010. 29: Markarian, Shogher. 
"HyeEtch - The Armenians - The Armenian Family." HyeEtch - Armenian History, Culture, Art, Religion & Genocide. Web. 14 Dec. 2010. 30: "Armenian Cultural Traditions & Icons." The Armenian Chronicles. Web. 14 Dec. 2010. 31: L | M | N | "ARMENIAN LANGUAGE RESOURCES - Timeline." ARMENIAN LANGUAGE RESOURCES - Home. Web. 15 Dec. 2010. 32: O | P | Q | "Armenian Organizations." Armeniapedia.org. Web. 15 Dec. 2010. 33: R | S | T | "Armenia: Food and Holidays." World Geography: Understanding a Changing World. ABC-CLIO, 2010. Web. 13 Dec. 2010. | "The Religion." Armenia Information. Web. 34: "CIA - The World Factbook." Welcome to the CIA Web Site — Central Intelligence Agency. Web. 16 Dec. 2010. 35: X | Y | Z | "Armenian Geography | Armenia's Geography | Armenias Geography." Travel Blogs, Photos, Videos and Maps. Web. 16 Dec. 2010.
In the field of physical health and psychological well-being, health psychology specializes in exploring the biological, psychological, cultural, societal, and environmental factors of life, and how each of these affects physical health. There are some who embrace the spiritual or religious aspects of this model; traditionally, however, it includes biological, psychological, and social components. While it may be common knowledge among certain groups that a person’s emotional mindset can have negative effects on health, there continues to be a surprising amount of denial regarding the role of this interaction. Physical health can be directly influenced by the environment in which we live. What is Health Psychology? The central strategy practiced within health psychology is the bio-psycho-social model. The British Health Society explains that health and disease are the effects of a blend of biological, psychological and social factors. Biological determinants consist of genetic conditions and inherited personality traits. Psychological factors are anxiety levels, personality traits and lifestyle. Social determinants consist of cultural views, political beliefs, family relationships and support systems. Health psychology’s origins lie in the belief that everyone deserves proper medical and psychological care, especially when daily habits, career, or family life problems contribute to a decrease in physical health and/or psychological well-being. The bio-psycho-social model views health, wellness and illness as the result of several inter-related factors affecting a person’s life, from biological characteristics and psychological aspects to behavioral and social conditions (Belloc & Breslow, 1972). 
Psychological determinants relating to health have been a general focal point since the beginning of the 20th century, and results show that those who eat regular meals, maintain a healthy weight, do not smoke, drink little alcohol, receive adequate sleep and exercise regularly are in better health and therefore live longer. Scientists at the time were also discovering associations between psychological and physiological processes. These include the influence of anxiety on the cardiovascular and immune systems, and the discovery that the functioning of these systems could be affected by training. Thus began a growing awareness of the need for sound educational and communication skills during office visits. The American Psychological Association's Division 38 is health psychology, which focuses on understanding the biological, psychological and sociological relationship between health and illness. This division concentrates on examining the determinants that influence health, and the association contributes information to the health care management system. The three areas that relate to health psychology are: Research: Health psychologists conduct studies on a variety of health-related concerns. For instance, researchers may concentrate on investigating effective preventative measures, explore health promotion techniques, study the causes of health problems, investigate how to motivate people into seeking treatment, and analyze ways to help people cope with an illness. Public Policy Work: Health psychologists may work in private or government settings and have a role in developing public policy on health-related concerns. Their work might involve advising executive groups on health care improvement, addressing disparities in health care, or lobbying government agencies. 
Clinical Work: In medical and clinical health care environments, health psychologists regularly administer clinical and behavioral evaluations, participate in clinical interviews, conduct personality tests and provide therapy. They often participate in managing interventions with individuals or groups, training people in anxiety reduction methods, offering addiction cessation advice and teaching how to avoid unhealthy ways of life. Physical health can be affected by the things people do, by the way they process information, career choice, family dynamics, life’s daily troubles and the environment in which they live. For example, someone living in a damp, mildew-infested home has a good chance of developing respiratory or sinus problems and may develop allergies. Physical Health and Genetics Research has discovered that people whose parents suffer from certain diseases, such as diabetes, cancer, hypertension, and addictions, are predisposed to these conditions. Biology certainly plays a central role in the health and well-being of everyone. However, psychological, environmental and cultural factors are also key areas that relate to any illness (Marks, Murray, Evans & Estacio, 2011). For example, when a mother is diagnosed with breast cancer, most medical professionals would encourage her daughter to obtain a screening regularly once she reaches a certain age. It would not matter if the daughter kept away from the damaging rays of the sun or if she did not smoke. She is a cancer risk because of the genetic predisposition for the disease that runs in the family history. However, there is no guarantee that daughters of mothers who were diagnosed with breast cancer will suffer the same. It simply means that the DNA (the genetic material that people share with their family) may include a marker that leaves her more susceptible to the disease than someone else who does not have this marker. 
People born to alcoholic parents tend to have more addictive personalities than those whose parents were not alcoholic. Some emotional and mental ailments link directly to abuse that a person suffered in childhood; others tend to be more genetic in nature. However, psychological, social and environmental factors all play prominent roles in managing addictions, along with this genetic tendency. Sometimes when people feel sick, tired, or run down, or when they develop certain diseases, it is not only a response to a virus or bacteria infiltrating the central nervous and immune systems, but rather a response to what is happening within the body, brain and subconscious mind. Catching a cold is only one example. Take heart, I am obviously not leaving out the biological component, meaning the virus that attacked the weakened immune system. Heart-related conditions, respiratory illnesses, muscle and joint pain diseases and various physical ailments are common among those coping with the emotional and psychological stresses of modern-day life. The release of the “stress” chemicals weakens the body's defenses in fighting a physical illness. The more people understand the powers of the brain and mind, the more they will realize that physical and emotional health relates directly to thoughts, feelings and behaviors. One way to describe the basics of health psychology is by exploring the smoking addiction. Part of the smoking habit is the physical component of addiction to nicotine, because withdrawal symptoms set in once the process of quitting begins. A typical physician will prescribe medicines to suppress the physical symptoms of withdrawal, treating the smoking addiction as a physical problem. However, studies show that there is a remarkably strong probability that the individual will just start smoking again. A chain smoker who uses a nicotine patch may have difficulty quitting if they continue to believe that smoking is not harmful or that smoking helps them to relax. 
In these cases, even with the patch, the individual may easily return to smoking. The average counselor or physician is only treating the physical withdrawal aspects of smoking. There is a psychological component because the smoker stands to gain rewards, no matter how temporary, from each cigarette. Smoking may suppress the appetite, offer an opportunity to relax and unwind, or provide a momentary distraction from current stress. There is also a habitual behavioral aspect to smoking, such as always lighting up when getting in the car, having a cigarette right after dinner, or using a cigarette as a stress reliever. Every year scientists are discovering new insights into how the brain, body and mind inter-relate and the ways in which they link to each other in harmony. The human brain is one of the most intricate, mysterious, and powerful organs in the entire universe. Science has been able to conjure up ideas in the mind, such as concepts in mathematics, and imagine worlds that at this time do not exist. These ideas stem from professionals wanting to explore the unknown aspects of the world and mind that have a relationship to the brain and the environment in which we live. There is a distinctive respect for what the mind is capable of achieving in relation to the brain, medicine and psychology. Science understands much more than it did even twenty years ago regarding the interaction between emotions and pain, the thought processes involved in healing and the remarkable healing powers of the human body. Society and Cultural Factors Play Key Roles in Physical Health Mokdad et al. (2004) reveal that fifty percent of all deaths in the United States can be attributed to lifestyle or other risk determinants that are for the most part preventable. Health psychologists work with people in hopes of eliminating these risk factors to decrease failing health and improve overall health. 
Expectations and gender roles can put a large amount of pressure on someone to behave and act in a distinct fashion. Racism, religion and political beliefs are often stressors, and over time these pressures have an impact on overall health. For example, white, middle-class people tend to have better overall physical and emotional health than inner-city minorities. Health psychology explores the underlying factors that have a direct and indirect impact on someone's quality of life (Cassileth et al., 1984). When assisting individuals to develop a healthier lifestyle, career choice is another area that health psychology explores. There is a direct relationship between choice of work and physical and emotional health, because the more frustrating the work, the greater the risk of developing an emotional and/or psychological problem. When a person is under stress, the body produces chemicals and hormones that it does not require, and some of these substances may be harmful. The difficult conditions and the release of these substances result in a weakening of the immune system. When an immune system is weak, we are more susceptible to physical and mental ailments (Ader & Cohen, 1975). The Bio-Psycho-Social Model as it Applies to Health Psychology Millions of people around the world are under tremendous amounts of stress, perhaps because their economy is suffering and unemployment remains high. Those who are employed are working longer hours and taking on more responsibilities for less pay. People who have lost their jobs worry about paying their bills, feeding their families and holding onto what they have worked so hard to achieve, and some are wondering if they are normal. When health psychologists talk about the bio-psycho-social model, behaviors are key ingredients contributing to physical health. Do people smoke? Do they drink alcohol regularly? Do they eat junk food? Have a stressful job? Are finances tough? 
Do people exercise regularly? How is the family? How is the social life? These are just a few questions that a health psychologist may explore. There are behavioral and social conditions that directly or indirectly relate to the state of overall physical health. Stress derives from the instinctual desire to survive, and the psychological community labels this concept the ‘fight or flight’ response. When the mind perceives a warning, whether that threat is real or a product of the imagination, the brain responds as if in danger. The brain calls for adrenaline to be pumped throughout the body, which allows it to run faster (away from the problem) or fight with a bit more strength than it naturally possesses (face the problem). The production of adrenaline in association with the fight or flight response is only intended to be for brief periods of time, for survival. When people face chronic stress at home or work, the physical body is under constant tension adapting to this “fight or flight” response. As a result, people tend to feel run down and tired more often; they may experience aching joints, muscle aches, lower back pain, headaches and increases in blood pressure, all of which are common side effects of repeated stress and increases in adrenaline. While some do not tend to think of stress as abnormal, it does indeed take a heavy toll on a person, both physically and emotionally. One key factor in lowering stress levels is learning to recognize and respond to stress and to see how it relates to your behavior. Health psychologists work in clinical settings promoting behavioral change that relates to the everyday anxieties of life. They inform the public, provide therapy, conduct research, teach at universities and work in the field of sports medicine. Clinical health psychology attempts to provide answers to the following questions: - What is the relationship between emotional health, physical wellness and illness? 
- What is the connection between the body, mind and environment? - What role does psychology play in relation to health and disease? - How should a particular illness be treated? The world of health psychology is changing lives one day at a time, and with some expert guidance and support people can experience the healthy, vibrant life that they desire; all it takes is unlocking the secrets of the brain, the body, the mind, and behavior. A simple fact of life is that human beings are extraordinarily complex, and an illness can be the result of a myriad of factors. These factors emerge from biological, psychological and environmental facets of everyday life. Most often medications alone will not provide the positive results necessary for people to achieve maximum health, but just because medicines do not fully aid in recovery or reduce the pain does not mean that all options for improvement have been exhausted. Although the situation is improving, health psychology principles clearly have not been fully utilized or recognized by conventional Western medicine. Health Psychology, Pain and Illness Physical ailments are real; people will say, “they are not in my head.” Some patients and physicians view health psychology concepts as a personal affront and do not believe that pain relates to overall emotional well-being. Others fear that people working in the health psychology fields will judge them or their pain as being “abnormal.” A few medical professionals attempt to discredit patient complaints of pain and intimidate people into thinking that the problem simply does not exist and that the discomfort is all a figment of the imagination. Science is evolving, and the problem may simply not be medically understood, or the location of the pain cannot be found in the body at the time. There is a relationship between the brain, the mind and pain. 
Health psychology strives to find strategies to decrease and do away with pain, as well as to understand pain peculiarities such as analgesia, causalgia, neuralgia, and phantom limb pain. Despite the fact that measuring and reporting pain are problematic, the McGill Pain Questionnaire has helped make improvements (Melzack, 1975). Popular treatments for pain are patient-administered analgesia, acupuncture, biofeedback, and cognitive behavior therapy. Do not be tricked into believing that an illness is a figment of the imagination, as this belief may cause psychological problems and increase physical symptoms. The above thoughts are generalized examples of what it means when people say healing comes, in part, from the underlying psychological aspects of the mind (thoughts and feelings), behaviors and the brain. Clinical health psychologists identify this way of thinking as the bio-psycho-social model. The model encourages a positive shift in the way people think about health, illness, and healing. Imagine that changing the way people think about and cope with a problem in life could move you toward pain-free living or assist in decreasing blood pressure. You can achieve this by learning a few techniques, and applying and believing in this approach will increase your quality of life. While healing with “health psychology” is certainly much more complicated than just changing a thought or behavior, most people do not believe in this concept, and until recently it has been overlooked by the medical community. Health psychologists attempt to improve communication between doctors and patients during medical consultations. There are many difficulties in this process, with patients showing a significant lack of understanding of many medical terms (Boyle, 1970). 
One central area of investigation relates to “doctor-centered” consultations, which are directive: the person seeking help answers questions and plays less of a part in decision-making. Many people object to the sense of authority or disregard this creates and favor patient-centered consultations, which focus on the patient’s needs. Patient-centered consultations involve listening to the person completely before reaching a decision, and the individual seeking help plays an active role in choosing treatment. A difficult task for health psychologists is motivating people to adhere to medical direction and follow the treatment plan. Lack of adherence is often due to treatment side effects or life circumstances, and some people forget to take medicines or consciously stop. Compliance is hard to quantify; however, studies suggest that it improves when medication schedules are tailored to an individual’s daily life. Health psychologists have advanced training in a variety of research designs, allowing them to conduct investigations, provide expert consultation or collaborate in research. They conduct investigations to clarify puzzling questions such as: - How is anxiety connected to heart disease? - What are the impacts or influences on healthy eating? - What are the emotional consequences of genetic testing? - In what ways can therapists help people reach their goals and change health habits to improve health? - They concentrate on how an illness affects a person’s emotional happiness. Stress can lead to depression, reduced self-esteem and anxiety. - Health psychology also concerns itself with improving the lives of those with terminal illness. When there is little hope of recovery, these therapists can improve the quality of life by helping patients recover the thoughts and feelings associated with psychological well-being.
These therapists also identify the best ways to provide therapeutic services for the bereaved (O’Brien, Forrest & Austin, 2002). In conclusion, health psychology is a relatively new sub-category of psychology and is not well known to many people. Clinical health psychologists take a more holistic approach, exploring the physical, psychological, and behavioral aspects of a problem together. Using health psychology principles significantly improves the likelihood of successfully quitting an addiction. Health psychology can help people become more physically fit, assist with decreasing chronic pain, improve the quality of life of those diagnosed with a terminal illness, prevent further complications of any serious physical ailment and assist in learning new ways to cope with the tensions and transitions that govern everyday life. © Dr. Cheryl MacDonald, Health Psychology for Everyday Life. Ader, R. & Cohen, N. (1975). Behaviorally conditioned immunosuppression. Psychosomatic Medicine, 37, 333–340. Belloc, N. & Breslow, L. (1972). Relationship of physical health status and health practices. Preventive Medicine, 1, 409–421. Berman, B.; Singh, B.B.; Lao, L.; Langenberg, P.; Li, H.; Hadhazy, V.; Bareta, J. & Hochberg, M. (1999). A randomized trial of acupuncture as an adjunctive therapy in osteoarthritis of the knee. Rheumatology, 38, 346–54. Boyle, C.M. (1970). Difference between patients’ and doctors’ interpretation of some common medical terms. British Medical Journal, 2, 286–89. Cassileth, B.R.; Lusk, E.J.; Strouse, T.B.; Miller, D.S.; Brown, L.L.; Cross, P.A. & Tenaglia, A.N. (1984). Psychosocial status in chronic illness. New England Journal of Medicine, 311, 506–11.
Cohen, L.M.; McChargue, D.E. & Collins, F.L., Jr. (Eds.). (2003). The health psychology handbook: Practical issues for the behavioral medicine specialist. Thousand Oaks, CA: Sage Publications. Dowsett, S.M.; Saul, J.L.; Butow, P.N.; Dunn, S.M.; Boyer, M.J.; Findlow, R. & Dunsmore, J. (2000). Communication styles in the cancer consultation: Preferences for a patient-centred approach. Psycho-Oncology, 9, 147–56. Lander, D.A. & Graham-Pole, J.R. (2008). Love medicine for the dying and their caregivers: The body of evidence. Journal of Health Psychology, 13, 201–12. Melzack, R. (1975). The McGill Pain Questionnaire: Major properties and scoring methods. Pain, 1, 277–99. O’Brien, J.M.; Forrest, L.M. & Austin, A.E. (2002). Death of a partner: Perspectives of heterosexual and gay men. Journal of Health Psychology, 7, 317–28. The British Psychological Society (2011). What is health psychology? A guide for the public. 7 March 2011.
- How Does A Rotating Screw Jack Work? In Brief: When the worm shaft is rotated the lead screw rotates in the body of the screw jack at the same rate as the worm gear. The nut on the lead screw moves in a linear direction along the screw when fixed to a structure that prevents it from rotating with the screw. This design is also available with a “Safety Nut”. Back to Top In Detail: When a screw jack unit is operated, the rotation of the worm shaft causes the worm gear to rotate. For rotating screw jacks the lead screw is fixed to the worm gear and they rotate at the same speed. As the worm gear turns, the friction forces on the screw thread act to turn the nut also. The greater the load on the screw jack unit, the greater the tendency of the nut to turn. It is obvious that if the nut turns with the screw, it will not raise the load. Therefore the nut needs to be fixed to a structure to prevent rotation. The restraining torque required for the structure, also known as the “lead screw key torque” can be found in the E-Series Screw Jacks brochure (P77). - How Does A Translating Screw Jack Work? In Brief: The lead screw translates through the body of the screw jack when the lead screw is prevented from rotating with the worm gear. This is typically done by fixing the end of the lead screw to the structure that needs to be moved linearly.Back to Top In Detail: When a screw jack unit is operated, the rotation of the worm shaft causes the worm gear to rotate. For translating screw jacks the worm gear is threaded to accommodate the lead screw thread. As the worm gear turns, the friction forces on the screw thread act to turn the screw also. The greater the load on the screw jack unit, the greater the tendency of the screw to turn. It is obvious that if the screw turns with the nut (worm gear), it will not raise the load. 
In those cases where a single unit is used, and where the load cannot be restrained from turning, it is necessary to use a screw jack with an anti-rotation mechanism (keyed screw jack). Lead screw key torque (refer to E-Series Screw Jacks brochure (P77)) must be checked as excessively heavy unguided loads could break the anti-rotation mechanism (key). - When Do I Use An Anti-Backlash Screw Jack? For reduced axial backlash of the lead screw in the screw jack select a model with the “Anti-Backlash” mechanism. This is typically used when the load direction changes from tension to compression and minimal axial backlash is required. This design is only available for translating screw jacks. It can be combined with the Anti-Rotation mechanism as well.Back to Top - What is the Lifting Torque Required for a Screw Jack? The input torque for a single screw jack depends on the load, the worm gear ratio, the type of screw (machine screw, ball screw or roller screw) and the pitch of the lifting screw. Torque values are listed in the individual product specification charts based on capacity loads. For loads from 25% to 100% of screw jack model capacity, torque requirements are approximately proportional to the load.Back to Top - What is The Maximum Input Power & Speed for a Screw Jack? The input power to the screw jacks should not exceed the power rating shown in the specifications table. Maximum input speed in rpm (revolutions per minute) to a screw jack's worm shaft should not exceed 1800 rpm for E-Series and M-Series screw jacks; however, the high performance S-Series screw jacks can operate at up to 3000 rpm. Power Jacks cannot accept responsibility for the overheating and rapid wear that may occur should these limits be exceeded. Power increases in direct proportion to the speed, and the motor size will be out of proportion to the screw jack model design rating should the speed become excessively high.
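The two sizing rules above — input torque roughly proportional to load between 25% and 100% of capacity, and worm shaft speed capped per series — can be sketched as a quick pre-selection check. This is an illustrative sketch only: the function names and the example rated-torque figure are assumptions, not Power Jacks data; always use the product specification charts for real sizing.

```python
def estimate_input_torque(rated_torque_nm, capacity_kn, load_kn):
    """Linearly scale the rated input torque for loads between 25% and 100%
    of jack capacity, per the proportionality rule stated in the FAQ."""
    fraction = load_kn / capacity_kn
    if not 0.25 <= fraction <= 1.0:
        raise ValueError("linear approximation only holds for 25%-100% of capacity")
    return rated_torque_nm * fraction

def check_input_speed(rpm, series):
    """True if the worm shaft input speed is within the stated series limit."""
    limits = {"E": 1800, "M": 1800, "S": 3000}  # rpm limits from the FAQ
    return rpm <= limits[series]

# Example: a hypothetical 25 kN jack rated at 10 Nm, carrying 12.5 kN.
torque = estimate_input_torque(10.0, 25.0, 12.5)
print(round(torque, 1))              # prints 5.0 (Nm at half load)
print(check_input_speed(1500, "E"))  # prints True
print(check_input_speed(2500, "E"))  # prints False
```

The linear scaling is only an approximation; below 25% of capacity, friction makes torque demand disproportionately high, which is why the FAQ restricts the rule to the 25%-100% band.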
When selecting the maximum permissible speed for a screw jack arrangement, always check to see that the power rating of the screw jack model is not exceeded.Back to Top - What is the Efficiency of a Screw Jack? Screw Jack model efficiencies are listed in the individual product specification charts.Back to Top - What is the Expected Life of a Screw Jack? The life expectancy of a screw jack's lead screw, bearings, nut and worm gear set varies considerably due to the extent of lubrication, abrasive or chemical action, overloading, excessive heat, improper maintenance, etc. For detailed life calculations consult Power Jacks Ltd.Back to Top - Does the Input Torque of a Screw Jack Differ Between Translating and Rotating Screw Models? The input torque, as well as the efficiency and side load ratings, is the same for both translating screw and rotating screw jacks.Back to Top - When Do I Use A Screw Jack with Anti-Rotation (Keyed) Mechanism? This design is only available for translating screw jacks. If the structure/object connected to the lead screw is not prevented from rotating, or the lead screw is not always in contact with the structure, then a screw jack with an “Anti-Rotation” mechanism (keyed) should be used. For machine screw jacks the mechanism uses an internal key to prevent the lead screw from rotating. For ball screw jacks a guided element fixed to the ball screw is used to prevent the ball screw from rotating. What other design features are available? Other designs & features available include: - Bellows boot screw protection - Safety nut (translating or rotating screw types) - Motor adapter kit - Double Clevis - reinforced cover pipe with clevis - Anti-corrosion protection.
As well as stainless steel, various platings and paint finishes are available - Trunnion mounts for base of screw jack - Limit switch kits (mechanical, proximity and rotary cam types) - Thrust rings for screw jacks that need to withstand high shock loads - Reinforced gearbox housing - Upgraded rotating screw jack for improved column strength for compressive load Have a look at our E-Series Screw Jacks brochure for lots more ideas and product specifications. If your requirement is not listed then just ask Power Jacks if it is possible, as chances are we have done it before.Back to Top - Can a Screw Jack be Supplied with a Lifting Screw to Prevent Rotation? For all machine screw jacks a keyed lifting screw is available. Note the keyway in the screw causes greater than normal wear on the internal threads of the worm gear. For ball screw jacks the lifting screw cannot be keyed, as the keyway would interrupt the ball track, permitting loss of the recirculating balls. Instead the option of a square anti-rotation tube can be fitted to ball screw jacks to prevent rotation (refer to E-Series Screw Jacks brochure (P47)). For further details consult Power Jacks Ltd.Back to Top - For Standard Screw Jacks How Do I Prevent The Load from Rotating? We recommend the following methods for preventing the rotation of the lifting screw on standard screw jacks. For multiple screw jack systems, fix the lead screw end fittings (e.g. top plate or clevis) to the common member being lifted by all the units. For single screw jack applications, bolt the lead screw end fitting (e.g. top plate or clevis) to the load and ensure the load is guided to prevent rotation. A guided load is always recommended to ensure that the screw jack does not receive any side load and so guidance can be scaled suitably for the load without altering the screw jack design unnecessarily.
It should also be noted that an external guidance system can provide a higher restraining “key” torque than an anti-rotation mechanism in a screw jack.Back to Top - Are Screw Jacks Self-Locking? Screw Jacks with 24:1 and 25:1 gear ratios are considered self-locking in most cases. The following screw jack models are considered not to be self-locking: - All Metric and Imperial ball screw jacks - The M2555 (1/4 ton) screw jacks with 5:1 gear ratio - The E2625 (5kN) & M2625 (1/2 ton) screw jacks with 5:1 gear ratio - The E2501 (5kN) & M2501 (1-ton) screw jacks with 5:1 gear ratio - In some cases the E1802 (25kN), M1802 & M9002 (2 ton) screw jacks with 6:1 gear ratio. For advice on an application basis consult Power Jacks Ltd - In some cases the E1805 (50kN) & M1805 (5 ton) screw jacks with 6:1 gear ratio. For advice on an application basis consult Power Jacks Ltd - In some cases the E1810 (100kN) & M1810 (10 ton) screw jacks with 8:1 gear ratio. For advice on an application basis consult Power Jacks Ltd - In some cases the M1815 (15 ton) screw jacks with 8:1 gear ratio. For advice on an application basis consult Power Jacks Ltd All screw jacks with double start lifting screws are considered not to be self-locking. Screw Jacks considered not self-locking will require a brake or other holding device (refer to E-Series Screw Jacks brochure (P69)). If vibration conditions exist, refer to E-Series Screw Jacks brochure (P86). However, for detailed advice and analysis consult Power Jacks Ltd.Back to Top - Are Shock Loads Permissible on a Screw Jack? Shock loads should be eliminated or reduced to a minimum; if they cannot be avoided, the screw jack model selected should be rated at twice the required static load. For severe shock load applications, using the E-Series, S-Series, and M-Series screw jacks, the load bearings should be replaced with heat-treated steel thrust rings, an option available from Power Jacks.
Note this will increase the input torque by approximately 100 percent.Back to Top - What is the Backlash in a Screw Jack? Standard machine screw jacks, machine screw jacks with anti-backlash and ball screw jacks must be considered separately, as the normal backlash will vary due to different constructions. Backlash in Standard Machine Screw Jacks Machine screw jacks have backlash due not only to normal manufacturing tolerances, but to the fact that there must be some clearances to prevent binding and galling when the screw jack unit is under load (for values refer to E-Series Screw Jacks brochure (P25)). Usually, the backlash is not a problem unless the load on the screw jack unit changes between compression and tension. If a problem does exist, then a unit with the anti-backlash feature should be considered. Screw Jacks with the Anti-Backlash Device The anti-backlash device reduces the axial backlash in the lifting screw and nut assembly to a regulated minimum. As the lifting screw thread on the gear wears, backlash will increase; the anti-backlash device can be adjusted to remove this normal condition. Ball Screw Jacks Ball screw jacks do not have an anti-backlash option similar to the machine screw jacks. Instead, for zero or reduced axial play, ball screw jacks can be ordered with a pre-loaded ball nut (refer to E-Series Screw Jacks brochure (P45)).Back to Top - How Does the Anti-Backlash Device Work? When the screw (1) is under a compression load, the bottom of its thread surfaces are supported by the top thread surfaces of the worm gear (2) at point (A). The anti-backlash nut (3), being pinned to the worm gear and floating on these pins and being adjusted downward by the shell cap, forces its bottom thread surfaces against the upper thread surfaces of the lifting screw at point (B). Thus, backlash between worm gear threads is reduced to a regulated minimum (for values refer to E-Series Screw Jacks brochure (P25)).
When wear occurs in the worm gear threads and on the load carrying surfaces of the lifting screw thread, the load carrying thickness of the worm gear thread will be reduced. This wear will create a gap at point (B) and provide backlash equal to the wear on the threads. Under compression load, the lifting screw will no longer be in contact with the lower thread surface of the anti-backlash nut. Under this condition, backlash will be present when a tension load is applied. The anti-backlash feature can be maintained simply by adjusting the shell cap until the desired amount of backlash is achieved. To avoid binding and excessive wear do not adjust lifting screw backlash to less than 0.013mm (0.0005”). This will reduce the calculated separation (C) between the anti-backlash nut and worm gear and will reduce the backlash between the worm gear threads and the lifting screw to the desired minimum value. When separation (C) has been reduced to zero, wear has taken place. Replace the worm gear (2) at this point. This feature acts as a built-in safety device which can be used to provide wear indication for critical applications. Ball Screw Jacks For zero or reduced axial play on ball screw jacks a pre-loaded ball nut should be requested (refer to E-Series Screw Jacks brochure (P45)).Back to Top - What is the Column Strength of the Screw Jack? The column strength of a screw is determined by the relationship between the length of the screw and its diameter. Column strength nomographs are included in the E-Series Screw Jacks brochure (P72).Back to Top - What Side Loads are Permitted on a Screw Jack? Screw jacks are designed primarily to raise and lower loads and any side loads should be avoided. The units will withstand some side loads, depending on the diameter of the lifting screw and the extended length of the lifting screw.
Where side loads are present, the loads should be guided and the guides, rather than the screw jacks, should take the side loads - particularly when long raises are involved. Even a small side load can exert great force on the housings and bearings and increase the operating torque and reduce the life expectancy. Side Load Rating Charts are included in the E-Series Screw Jacks brochure (P78).Back to Top - What is the Maximum Raise or Working Stroke for a Screw Jack? Generally, standard strokes / raises are: - Up to 1000mm on 5kN E-Series metric screw jacks - Up to 2500mm on 10kN E-Series metric screw jacks - 18 inches on 1/4 ton M-Series imperial (inch) screw jacks - 40 inches on 1/2 ton M-Series imperial (inch) screw jacks - 98 inches on 1 ton M-Series imperial (inch) screw jacks - 55000mm on 25kN and above E & S Series metric screw jacks - 215 inches on 2 ton and above M-Series imperial (inch) screw jacks Larger Screw Jacks have their maximum raise / stroke limited only by the available length of bar stock from suppliers (note - special steel production runs can be organised for special applications) and the practical ability to handle, machine and transport the lead screw and the complete screw jack. Practical lengths will also be affected by whether the screw is to be subjected to compression or tension loads. Depending on diameter the length can be limited due to deformation of material in the machining process or column strength of the screw when subjected to compression loads. Long raise applications should be checked with Power Jacks for the following: a) side loads on the extended screw; b) column strength of the screw; c) thermal rating of the screw and nut. Power Jacks recommend guides be used on all applications. The longer the raise, the more important this becomes.Back to Top - What is the Allowable Duty Cycle of a Worm Gear Screw Jack?
Because of the efficiency of conventional metric and imperial (inch) worm gear screw jacks, the duty cycle is intermittent at rated load. At reduced loading, the duty cycle may be increased. The high performance S-Series metric screw jacks have higher thermal efficiencies due to their design, generally allowing 50% higher duty cycles than conventional worm gear screw jacks. For detailed analysis consult Power Jacks Ltd.Back to Top - At What High Temperatures Can Worm Gear Screw Jacks Operate? Screw Jacks are normally suitable for operation at ambient temperatures of up to 90°C. Operation above 90°C requires special lubricants, and even the life of special lubricants is limited at such temperatures; therefore, consult Power Jacks on your application and advise full particulars of the duration of such temperatures. In some cases, it may be necessary to furnish an unlubricated unit; the customer can then supply a lubricant of their own choice. Power Jacks suggest that a lubricant manufacturer be consulted for the type of grease and the lubrication schedule. As a general rule, the screw jack unit should be shielded to keep ambient temperatures to 90°C or less. Seals for temperatures above 120°C are expensive. Instead, Power Jacks can substitute bronze bushings for seals in these cases. If bellows boots are used, special materials will be required for temperatures above 90°C. Power Jacks can manufacture special screw jacks for high operating temperatures above 120°C. Consult Power Jacks Ltd on an application basis. Power Jacks have manufactured products that can operate at temperatures up to +250°C.Back to Top - At What Low Temperatures Can Worm Gear Screw Jacks Operate? With the standard lubricant and materials of construction, the screw jacks are suitable for use at sustained temperatures of -20°C. Below -20°C, low temperature lubricant should be used.
Also, at temperatures below -20°C, if there is any possibility of shock loading, special materials may be required due to notch sensitivity of the standard materials at lower temperatures. Power Jacks application engineers must be consulted in these instances for a recommendation. Screw Jacks with standard material of construction and lubrication may be safely stored at temperatures as low as -55°C.Back to Top - What is The Thermal/Heat Build-Up in a Screw Jack as it is operated? The duty cycle, the length of the screw, the magnitude of the load, and the efficiency of the screw jack unit all have a direct influence on the amount of heat generated within the screw jack. Since most of the power input is used to overcome friction, a large amount of heat is generated in the worm gear set in both ball screw and machine screw jacks, and in the lifting screw of machine screw jacks. Long lifts can cause serious overheating. High duty S-Series screw jacks have an oil-lubricated cubic gearbox housing specifically designed to dissipate heat more efficiently with increased surface area and mass, allowing increased duty capabilities.Back to Top - Can Continuous Duty Screw Jacks be Supplied? A recommendation should be obtained from Power Jacks on this type of application and a completed application analysis form submitted. In general, semi-continuous operation can be permitted where the load is light as compared to screw jack rated capacity. Units so used should be lubricated frequently and protected against dust and dirt. High duty screw jacks, such as the S-Series, are oil-lubricated and are designed for maximum duty cycles. Special designs of screw jacks fitted with ball screws or roller screws may also suit high duty applications; consult Power Jacks Ltd for details.Back to Top - How Do I Mount Bellows Boots on an Inverted Screw Jack?
E-Series and M-Series inverted screw jacks with bellows boots must incorporate an allowance in the length of the lifting screw for both the closed height of the boot and structure thickness. Since Power Jacks can make no provision for attaching a boot on the underside of the customer's structure, Power Jacks suggest that a circular plate similar to the lifting screw top plate be welded or bolted to the bottom of the structure supporting the screw jack, thereby making it possible to use a standard bellows boot (refer to E-Series Screw Jacks brochure (P60)). S-Series cubic screw jacks allow mounting from two sides instead of one and allow mounting on the same side as the bellows boot with only an access hole required in the structure for the lifting screw and bellows boot.Back to Top - Can I use Screw Jacks to Pivot a Load? A screw jack can be built to pivot a load by two methods: Double Clevis Screw Jack The screw jack can be furnished with a clevis at both ends (commonly referred to as a double clevis screw jack). The bottom clevis is welded to the bottom end of an extra strong cover pipe, which is fitted to the base of the screw jack. This bottom pipe still performs its primary function of encasing the lifting screw in its retracted portion. See the double clevis model illustrations on the dimensional drawings (refer to E-Series Screw Jacks brochure (P28)). Clevis - Trunnion Mounting The screw jack is fitted with the standard clevis end on the lifting screw and a trunnion mount adapter is bolted to the screw jack's base plate. For trunnion mount detail refer to E-Series Screw Jacks brochure (P68). The structure in which these types of screw jacks are to be used must be constructed so that the screw jack can pivot at both ends. Use only direct compression or tension loads, thereby eliminating side load conditions.Back to Top - Can Screw Jacks be Supplied with Corrosion Resistant Properties?
Screw Jacks can be supplied with alternative materials and/or paint specifications for high corrosive areas. These options include stainless steel, chrome plating, Electro-nickel plating, epoxy paint, etc.Back to Top - What Type of Lubricants do the Actuators Use? The standard screw jacks are grease (EP2) lubricated for the lifting screw and gearbox assemblies. The high duty “Sym-metric” screw jack is oil lubricated for the gearbox and grease (EP2) lubricated for the lifting screw. All screw jacks can be supplied with industry specific lubricants, such as food or nuclear grade grease.Back to Top - Can Screw Jacks be used within Rigid Structures or Presses? Power Jacks recommend that the screw jack selected has a greater capacity than the rated capacity of the press or of the load capacity of the structure. We also recommend that a torque clutch or similar device be used to prevent overloading of the screw jack unit. Unless these precautions are taken, it is possible to overload the screw jack without realising it.Back to Top - Will the Screw Jack Drift after its Motor is Switched Off? The screw jack will drift after the motor drive is switched off unless a brake of sufficient capacity is used to prevent it. The amount of drift will depend upon the load on the screw jack and the inertia of the rotor in the motor. Due to different construction, the ball screw jack must be considered separately (refer to E-Series Screw Jacks brochure). Machine screw jacks require approximately one-half as much torque to lower the load as they do to raise the load. For machine screw jacks with no load, the amount of drift will depend upon the size and speed of the motor. For example, a 1500 RPM input directly connected to a screw jack without a load will give on average 35mm to 60mm of drift; a 1000 RPM input will give about 1/2 as much drift. Note that the drift varies as the square of the velocity (RPM). 
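The drift rule just stated — no-load drift varying as the square of the input speed — can be sketched numerically. The 35-60 mm band at 1500 rpm comes from the FAQ; the function itself and its names are illustrative assumptions, not a Power Jacks calculation.

```python
def scale_drift(drift_mm_at_ref, rpm, ref_rpm=1500):
    """Scale a no-load drift figure to another input speed,
    using the FAQ's rule that drift varies as rpm squared."""
    return drift_mm_at_ref * (rpm / ref_rpm) ** 2

# 35-60 mm at 1500 rpm scales to roughly 16-27 mm at 1000 rpm,
# consistent with the FAQ's "about 1/2 as much" (the exact ratio is 4/9).
low, high = scale_drift(35, 1000), scale_drift(60, 1000)
print(round(low, 1), round(high, 1))  # prints 15.6 26.7
```

Note this only estimates no-load drift from motor inertia; under load, or with a reduction gearbox between motor and jack, the figures differ, which is why a magnetic brake is the recommended control.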
The drift of the screw jack's screw can be controlled by using a magnetic brake on the motor. Variations of drift will also be seen if the motor drives the screw jack via a reduction gearbox.Back to Top - Can Screw Jacks Operate where Vibration is Present? Screw Jacks will operate in areas with vibration; however, the vibration may cause the lifting screw to creep or inch down under load. For applications involving slight vibration, select the higher of the worm gear ratios. Should considerable vibration be present, use a drive motor equipped with a magnetic brake, which will prevent the screw jack from self-lowering / back-driving.Back to Top - Can Screw Jacks be Supplied with an Emergency Stop Disc, Pin or Nut? To prevent over travel of the lifting screw a stop disc, pin or nut can be fitted to a screw jack that is hand operated. For motor driven units it is possible for the full capacity of the screw jack or even a greater force (depending on the power of the motor) to be applied against the stop. These stops are called “full power stop nuts”. They must only be used as an emergency device, and if such a condition occurs, an assessment should be made to discover why it happened in order to carry out preventative action. Should the full power stop nut be used at full load in an emergency it might be driven into the unit, jamming so tightly that it must be disassembled in order to free it. It is recommended that external stops are fitted where possible; however, they must only be used as a last resort (Note - limit switches are one possible solution to constrain screw jack movement safely - consult Power Jacks for system advice). Under ideal conditions where a slip clutch or torque limiting device is used, a stop pin or stop nut may be used - but Power Jacks should be consulted.
Note that the standard stop disc used on the end of the ball screw on ball screw jacks prevents the ball screw from running out of the ball nut during shipping and handling, thereby preventing loss of the recirculating balls. It should not be used as a full power stop.Back to Top - Can Screw Jacks be built into Multiple System Arrangements? Perhaps the greatest single advantage of Power Jacks screw jacks is that they can be linked together mechanically, to lift and lower in unison. Typical arrangements involving the screw jacks, bevel gear boxes, motors, reducers, shafting and couplings are shown in application section of the web site and in the E-Series Screw Jacks brochure (P8) Typical mechanical system arrangements link 2, 4, 6 or 8 screw jacks together and are driven by one motor. As an alternative screw jacks can be individually driven by electric motors and with suitable feedback devices such as encoders be synchronised electronically by a control system.Back to Top - How Many Screw Jacks Can be Connected in Series? This will be limited by input torque requirements on the first worm shaft in the line. The torque on the worm shaft of the first screw jack should not exceed 300% of its rated full load torque on the machine screw jacks (this does not include the E1820, S-200, or M1820 units which are rated at 150%).Back to Top - What is the Efficiency of a Multiple Screw Jack System? In addition to the efficiencies of the screw jacks and the bevel gearboxes, the efficiency of the screw jack arrangement must be taken into consideration. The arrangement efficiency allows for misalignment due to slight deformation of the structure under load, for the losses in couplings and bearings, and for a normal amount of misalignment in positioning the screw jacks and gearboxes. For efficiency values refer to E-Series Screw Jacks brochure (P21).Back to Top - Can a Screw Jack be Equipped with a Position Indicator? 
A visual position indicator for a screw jack can be provided in several ways, for example: - Screw Jack with rotary limit switch and position transducer (refer to E-Series Screw Jacks brochure (P65)) However, it is suggested that you consult Power Jacks for recommendations based on your particular application.Back to Top
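The series-connection limit described earlier (torque at the first worm shaft not exceeding 300% of rated full-load torque, or 150% for the E1820, S-200 and M1820) can be sketched as a quick feasibility check. The function and the example figures are illustrative assumptions, not Power Jacks ratings; system efficiency losses in shafts and gearboxes are ignored here and would reduce the real count.

```python
def max_jacks_in_series(per_jack_torque_nm, rated_torque_nm, limit_factor=3.0):
    """Largest number of identical jacks whose combined input torque stays
    within limit_factor times the rated full-load torque of the first unit.
    Use limit_factor=1.5 for E1820, S-200 and M1820 units, per the FAQ."""
    return int((limit_factor * rated_torque_nm) // per_jack_torque_nm)

# Example: hypothetical jacks each demanding their full rated 10 Nm.
print(max_jacks_in_series(10.0, 10.0))                    # prints 3
print(max_jacks_in_series(10.0, 10.0, limit_factor=1.5))  # prints 1
```

At part load each jack demands less torque, so more units can share one drive line; the arrangement efficiency figures in the E-Series brochure (P21) should still be applied before finalising a layout.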
Life in a Central Coast California Garden

Butterflies are also known as Lepidoptera, or "scaly wings," as their wings are covered with variously colored scales. Moths have different kinds of antennae- feathery, etc., but NOT clubbed. Food for larvae (most important) is critical! Larvae of butterflies eat by chewing leaves, buds, and flowers of certain plants, and they are very particular about what kinds of plants they eat! Food for butterflies is helpful also, but adult butterflies are completely different, only sipping nectar from flowers of many different kinds of plants. They are not very particular. Water is useful for butterflies that sip mud (for nutrients and liquid water). A tiny pond (or a large pond) will work, with muddy, shallow edges. Sun, shade- A garden needs to have areas of sun and shade, and filtered light. Many butterflies prefer full sun in open areas. At the same time, they rest in shade (of foliage). Also, some butterflies that live in riparian areas live and eat in dappled shade (of deciduous shrubs and trees). Shelter, protection from wind- Butterflies seem to prefer feeding in areas that are not windy. So, a sunny, flowery area surrounded by shrubs and trees for wind protection is great. For example, many areas of California receive rainfall from November through April (this is when you can water extra, if rainfall is not forthcoming). In summer there is fog drip in coastal areas and some rain showers in the northern portions of California. So you do likewise. For most areas, sprinkle the foliage and mulch in summer every so often (if needed, once a week for young plants and once a month for more established plants). These conditions are what most California butterflies prefer. In DESERT areas, most rainfall occurs in spring (this is when you can water extra, if the rainfall is not cooperating). In summer, to simulate summer showers, you can spritz plants once in a while.
Monarch (Danaus plexippus)- Adults move south in fall, to Pacific Grove, Pismo Beach, Morro Bay, and other coastal areas. Larval Food- Asclepias spp., or Milkweed. Formerly overwintered in native pines, but now also in ornamental trees planted along the coast (many eucalyptus). Crown Fritillary, Zerene Fritillary- (Speyeria coronis, S. zerene) brownish-orange butterflies with silver spots in rows on the underside (Crown); silver spots placed randomly on the underside (Zerene). Larval Food- Violets- Viola spp. (Viola pedunculata, etc.) California Tortoise Shell- (Nymphalis californica) medium-sized butterfly, orange and black, with wing edges appearing raggedy. Larval Food Plant- Ceanothus thyrsiflorus, Ceanothus cuneatus (California Lilacs). Ladies- medium-sized, orange and black butterflies, very similar to each other in coloration. West Coast Lady (Vanessa annabella)- orange bar on forewings. Larval Food Plant- Malva spp., Althea rosea (Hollyhock), California Checker Mallow (Sidalcea spp.), Desert Mallow (Sphaeralcea spp.), Stinging Nettle (Urtica spp.). Peacock Butterfly (Junonia coenia)- Brown butterfly with large, beautiful peacock spots on its wings (one per wing section). Larval Food Plant- Plantain (Plantago major), Monkeyflower (Mimulus spp.), Snapdragon (Antirrhinum spp.). California Sister (Adelpha bredowii californica)- These butterflies have a very distinctive coloration on their undersides (blue, brown, pale orange, white) as compared to their uppersides (black, orange, white); you think you are seeing two different butterflies. They are seen flying in treetops, very swift, do not linger in one place very long, and usually only sip mud, not flower nectar. Larval Food Plant- Canyon Live Oak (Quercus chrysolepis), Coast Live Oak (Quercus agrifolia). (They also sit in trees and stick their tongues out at you when you try to photograph them.)
Swallowtails- very large, showy yellow and black butterflies. Anise Swallowtail (Papilio zelicaon)- Larval Food Plant- Foeniculum vulgare (Fennel), wild carrot (Lomatium spp.). Western Tiger Swallowtail (Papilio rutulus)- Larval Food Plant- Cottonwood (Populus fremontii, Populus trichocarpa), willow (Salix spp.), White Alder (Alnus rhombifolia), Sycamore (Platanus racemosa), apple (Malus sylvestris). Pale Swallowtail (Papilio eurymedon)- Larval Food Plant- Coffeeberry (Rhamnus californica), Redberry (Rhamnus ilicifolia and R. crocea), Wild Lilac (Ceanothus cuneatus), Hollyleaf Cherry (Prunus ilicifolia). Sara Orange Tip (Anthocharis sara)- small white butterfly with an orange spot at the tip of the wing. Larval Food Plant- Mustard family (Tansy Mustard, Descurainia spp.; Lace Pod, Thysanocarpus spp.; Rock Cress, Arabis spp.). Dog Face- yellow, medium-sized butterfly with a side-view profile of a dog's face upon its wings (male); all yellow (female). Larval Food Plant- False Indigo, Amorpha californica. Their favorite nectar plants seem to be Salvia 'Pozo Blue' and Monardellas. Hairstreaks (subfamilies Theclinae and Eumaeinae)- small butterflies of different colors, hindwings usually lobed, and many have a tiny, hairlike "tail". Larval Food Plant- Golden Bush (Haplopappus linearifolius), California Juniper (Juniperus californica), Mountain Mahogany (Cercocarpus betuloides), Blue Oak (Quercus douglasii), willow (Salix spp.), Common Mistletoe (Phoradendron flavescens). Blues (subfamily Polyommatinae)- small butterflies, many of them blue. Larval Food Plant- bush lupines (Lupinus albifrons, for example), Deer Weed (Lotus scoparius), clover (Trifolium spp.), vetch (Vicia spp.), other legume plants, and buckwheat (Eriogonum spp.). Skippers (family Hesperiidae)- small to medium-sized butterflies with chunky bodies, of dull colors- browns, tawnies, etc. Larval Food Plant- grass family plants (Poaceae), Poa secunda ssp.
secunda (One-sided Blue Grass), Oat Grass (Danthonia californica), and many more. Moths that look similar to butterflies, which you may see flying in daylight hours- Hawkmoths- chunky-bodied moths with striped bodies that appear bee-like, and wings that beat as swiftly as a hummingbird's (also called hummingbird moths); they love to visit Evening Primrose (Oenothera californica) flowers for nectar. Then there's beeflies... There are so many native species and cultivated species that I will list those that seem to be most popular with butterflies and easiest to grow in our central California area: daisy family (Asteraceae) and mint family (Lamiaceae) flowers seem to be the ones preferred over all others in most cases. Examples of Asteraceae- Encelia californica (Encelia), Helianthus gracilentus (Bush Sunflower), Erigeron glaucus (Seaside Daisy), Haplopappus linearifolius (Golden Bush), Isocoma menziesii (Golden Bush), Red Thistle (Cirsium proteanum). Other butterfly plants include: Carpenteria californica- Bush Anemone; Gilia capitata- Globe Gilia; Dichelostemma capitatum- Wild Hyacinth; Philadelphus lewisii- California Mock Orange. Note: using insecticides will sabotage your plan to attract butterflies to your garden. If you don't spray, those ugly caterpillars will metamorphose into lovely butterflies! California Quail (Callipepla californica)- primary food- legume seeds and other seeds (Lupines, Deerweed, Clovers); also eats green leaves, stems, grasshoppers, katydids (arthropods), grains, fruits. Does not prefer weedy, grassy areas, but likes low herbaceous, native vegetation mixed with low shrubs, medium shrubs and trees. (They will also feed in dry, mowed areas with a mixture of native and non-native vegetation of herbs, forbs and grasses.) Hummingbirds- Black-chinned (Archilochus alexandri), Anna's (Calypte anna) (overwinters), Allen's (Selasphorus sasin)- Food- nectar from flowers; also insects, spiders.
Loves flowers of Fuchsia-flowering Gooseberry (Ribes speciosum), other gooseberries and currants, manzanitas (Arctostaphylos species), California Fuchsia (Epilobium canum, Zauschneria californica), Scarlet Bugler (Penstemon centranthifolius), Cardinal Flower (Lobelia cardinalis), Scarlet Monkeyflower (Mimulus cardinalis), and other red, tubular flowers. Seems to enjoy picking insects from Chaparral Mallow (Malacothamnus fasciculatus). Woodpeckers- need standing, diseased, or dead trees (snags) where the wood is soft enough to excavate a nest hole. (Some politicians should wear a hat!) Acorn Woodpecker (Melanerpes formicivorus) is a communal species; aunties help to raise the young ones. Food- acorns, flying insects, and tree sap in spring. Excavates nests in decayed, living pine trees or dead, standing trees (snags). Stores acorns in individually drilled holes in the bark of trees. Dependent on oaks, especially large ones, for their existence; primarily Coast Live Oak (Quercus agrifolia), Blue Oak (Q. douglasii), Black Oak (Q. kelloggii), Valley Oak (Q. lobata). Nuttall's Woodpecker (Picoides nuttallii) has a black and white striped back. Food- primarily insects (mostly beetles), fruits, nuts, tree sap. These woodpeckers live in riparian areas where Sycamore (Platanus racemosa), cottonwood (Populus fremontii) and willow (Salix spp.) live, and also in areas with oaks. Downy Woodpecker (P. pubescens) is very similar to the Hairy Woodpecker- black, with white back and a red bar on the head- except that the Hairy's bill is larger. Food- beetles, ants, caterpillars mostly; also fruits, seeds. Lives in streamside areas and adjacent woodlands. Hairy Woodpecker (Picoides villosus) eats insects (ants, beetles, grasshoppers, caterpillars, spiders, aphids), acorns, dogwood fruits, and pine nuts (a very nutritious and varied diet!). Lives in streamside habitats where Sycamore (Platanus racemosa), cottonwood (Populus fremontii) and willow (Salix spp.) grow, and adjacent areas of conifers.
These birds provide a natural control for bark beetles because they eat the beetle larvae. Black Phoebe (Sayornis nigricans)- A slender black bird with a black breast, the phoebe lives in streamside habitats, and flies over these areas, catching insects in mid-air. When perched, he/she distinctively moves the tail. (Phoebes and swallows are excellent replacements for pesticides in gardens, and are fun to watch.) Swallows (family Hirundinidae)- Food- 80% of the diet consists of insects caught in mid-air. Swallows are a means of excellent natural insect control. Scrub Jay (Aphelocoma coerulescens)- Omnivorous- acorns, fruits, insects, bird eggs. Lives in coastal scrub, chaparral and oak woodland. Major helper in planting acorns for the next generation of oaks. Thick, grassy (alien grasses) areas under oaks slow the scrub jay down, compete with oaks, and replace the mulch layer so essential to the nutrition requirements of the oak. Remove those alien grasses (pulling by hand or with herbicides, NOT with a hula hoe or a tiller, because you want minimum soil disturbance). Plain Titmouse (Parus inornatus)- Associated with oak trees. Food- insects, fruits, acorns. Picks insects off leaves, twigs, trunks of trees and shrubs. Bushtits are real innocents; I've never seen them be aggressive or mean to any other animal, except aphids, moths and small insects. Bushtit (Psaltriparus minimus)- This tiny grayish, blackish bird lives in coastal sage scrub, oak woodland, chaparral, towns, and suburban areas. Food is mostly insects and spiders; picks insects off foliage and twigs of trees and shrubs. They especially like Mountain Mahogany to forage in.
In late summer and fall bushtits can be seen traveling in little groups, chattering and twittering and moving quickly (I have seen, on average, groups of 10-20 birds). White-breasted Nuthatch (Sitta carolinensis)- Eats mostly insects and spiders by picking them off the trunk and large branches of trees, live or dead standing; also acorns during the off season. Needs large live and dead trees for survival. Nuthatches live in oak/pine woodlands. If you see a funny little bird moving up and around a tree trunk in a spiral fashion, with little jerky movements, that is a nuthatch. Wrens: Rock Wren (Salpinctes obsoletus)- rocky hillsides; Bewick's Wren (Thryomanes bewickii)- all habitats; House Wren (Troglodytes aedon)- all habitats in spring & summer. Wrens live in areas with brushy understory: chaparral and streamside thickets, woodland with dense understory, etc.; they forage on rocks, logs, shrubs and perennials, mostly for insects and spiders. Wrens love to hide in thick brush. Western Bluebird (Sialia mexicana)- Is seen mostly in open woodlands; needs an area with at least a few trees. Sits on a low perch, flies out to catch insects (grasshoppers, caterpillars, beetles, ants) on the ground; will also catch bugs on the wing. They like Mahonia nevinii and elderberries. Benefits greatly from nestboxes. For more on bluebirds see the bluebird page. Swainson's Thrush (Catharus ustulatus)- Seen in summer in our area. Another insect eater! Great insect control for the garden. Needs dense understory in a woodland or riparian area for cover. Eats mostly insects and spiders by searching the mulch, and picking from bushes. Hermit Thrush (C. guttatus)- You will observe this thrush in winter in areas with dense cover of shrubs and trees. Eats insects and berries (especially poison oak) in the same manner as Swainson's Thrush. American Robin (Turdus migratorius)- Eats earthworms, snails, caterpillars, beetles, grasshoppers; eats berries and fruits in the off season.
Needs water regularly; needs mud to construct nests. Searches for insects on the ground. They like Toyon berries. Does well in suburban areas with moist, open areas, with trees and perennials for understory. Mockingbird (Mimus polyglottos)- Eats insects, earthworms, snails, berries, fruits. Lives well alongside man and his habitations. Eats many ornamental fruits. Shrubs and trees needed for cover. California Thrasher (Toxostoma redivivum)- is a brown bird with a curved bill; not born for flying, awkward, it does short flights from bush to bush. Lives in chaparral and riparian areas that possess dense thickets. Does not venture more than a few feet from cover. They like Mahonia nevinii and Ribes aureum gracillimum. Eats insects, worms, etc., some fruits and acorns. Rakes mulch and ground with its curved bill to extract food. Phainopepla (Phainopepla nitens)- is a slender black bird with a top-knot. Eats mostly small berries, some insects. Most important: berries of mistletoe, Elderberry (Sambucus mexicana), grape (Vitis californica, V. girdiana), Coffeeberry (Rhamnus californica), poison oak (Toxicodendron diversilobum). Needs trees and large shrubs for cover, medium density. California Towhee (Pipilo crissalis) is a plain, brown bird; Spotted Towhee (Rufous-sided Towhee, Pipilo maculatus) is a shade lover with red eyes. Both eat insects, seeds and fruits by scratching in the ground, mulch and leaf litter. The California Towhee prefers open areas near brush for cover; the Spotted Towhee lives more in the brush and woodland; they live together in the interface. An interface of native plants, mulched appropriately, some water source(s), with openings and clumps of heavy brush (Ceanothus, manzanitas, or in desert areas Creosote or Atriplex) and some trees, will create an excellent habitat for you and the birds. Mother Nature appears mixed up, but there's a plan there- patterns of open and closed.
Sparrows (subfamily Emberizinae) such as Lark, Rufous-crowned, Sage, Song, Savannah, Fox, Chipping, Golden-crowned, White-crowned, etc., eat seeds of grasses and herbs, and insects and spiders. Sparrows feed on the ground and on low vegetation. They love any expensive seeds you plant! Goldfinches (American, Carduelis tristis; Lawrence's Goldfinch, Carduelis lawrencei; and Lesser, Carduelis psaltria)- Primarily seed eaters, but eat insects at certain times of the year. Prefer thistles, fiddleneck, and other daisy-like flowers. House Finch (Carpodacus mexicanus)- Not much of an insect eater; mostly a seed eater, and eats fruits and buds. Lives in urban and farmland areas as well as open areas of woodlands, chaparral, and streamside habitats. First, WATER, a regular source, must be available; in addition to filling their needs, it is such fun to watch them drink and bathe. FOOD throughout the year. A. Insects and spiders are attracted to a variety of plants, especially ones in the daisy, carrot, poppy, evening primrose, rose, potato, and mint families, to name a few. C. A variety of plants bearing seeds attractive to birds. SHELTER: Birds need cover for nesting and to rest and hide from predators. Number one on the list are pines and oaks, then various associated and understory shrubs and trees. For coastal areas: Cambria Pine (Pinus radiata), Bishop Pine (Pinus muricata), Coast Live Oak (Quercus agrifolia), Scrub Oak (Quercus berberidifolia); for inland areas: Blue Oak (Q. douglasii), Valley Oak (Q. lobata), Scrub Oak (Q. berberidifolia), Gray Pine (Pinus sabiniana); for streamside habitats or wet areas: Black or Fremont Cottonwood (Populus balsamifera ssp. trichocarpa, P. fremontii ssp. fremontii), sycamore, willow, alder. Open areas combined with dense brushy understory areas, dotted with large trees, and with a water feature, will attract the greatest variety of birds.
and other daisy family members, legumes, mint family, grass family, pines- in other words, almost all native plants. Acorns for the quail and larger birds. Elderberry, poison oak and other small fruits are favorites of the Western Bluebird. It's ironic to see a hillside with the poison oak completely removed, and bluebird houses everywhere. Blackberries are liked by raccoons, bears, foxes, etc. Practically any plant in the daisy family. Examples include: daisy, zinnia, encelia, sunflower, aster, thistle, butterweed, dandelion, tidy-tips, pincushion flower (Chaenactis), golden bush, golden yarrow, coreopsis, coyote brush, etc., etc.; and Monardella, sage, hedge nettle, California licorice mint (Agastache), Malacothamnus fasciculatus, Footsteps of Spring, Lomatium, to name a few. Butterflies need the nectar sources to be extended for as long as they are about. If there's a butterfly there, make sure there's a flower there. Note: If you can use the "natural" insecticides (birds, mammals, reptiles and amphibians), your garden will be better off in the long run and you will attract more of a variety of birds. Also: cats are a real threat to the birds, small mammals, reptiles, and some of the butterflies, well-fed or not, according to studies undertaken in gardens in the United States. Remember: All you have to do is provide the plants, the water, and the ambiance, and the birds and butterflies will just start showing up.
The British honours system is a means of rewarding individuals' personal bravery, achievement, or service to the United Kingdom. The system consists of three types of award: honours, decorations and medals. Although the Anglo-Saxon monarchs are known to have rewarded their loyal subjects with rings and other symbols of favour, it was the Normans who introduced knighthoods as part of their feudal government. The first English order of chivalry, the Order of the Garter, was created in 1348 by Edward III. Since then the system has evolved to address the changing need to recognise other forms of service to the United Kingdom. As the head of state, the Sovereign remains the "fount of honour", but the system for identifying and recognising candidates to honour has changed considerably over time. Various orders of knighthood have been created (see below), as well as awards for military service, bravery, merit, and achievement which take the form of decorations or medals. Most medals are not graded. Each one recognises specific service, and as such there are normally set criteria which must be met. These criteria may include a period of time and will often delimit a particular geographic region. Medals are not normally presented by the Sovereign. A full list is printed in the "order of wear", published infrequently in the London Gazette. Honours are split into classes ("orders") and are graded to distinguish different degrees of achievement or service. There are no criteria to determine these levels; various honours committees meet to discuss the candidates and decide which ones deserve which type of award, and at what level. Since their decisions are inevitably subjective, the twice-yearly honours lists often provoke criticism from those who feel strongly about particular cases. Candidates are identified by public or private bodies or by government departments, or are nominated by members of the public.
Depending on their roles, those people selected by committee are submitted either to the Prime Minister, Secretary of State for Foreign and Commonwealth Affairs, or Secretary of State for Defence for their approval before being sent to the Sovereign for final approval. Certain honours are awarded solely at the Sovereign's discretion, such as the Order of the Garter, the Order of the Thistle, the Royal Victorian Order, the Order of Merit and the Royal Family Order. A complete list of approximately 1350 names is published twice a year, at New Year and on the date of the Sovereign's (official) birthday. The awards are then presented by the Sovereign or her designated representative. The Prince of Wales and The Princess Royal have deputised for The Queen at investiture ceremonies at Buckingham Palace. By convention, a departing Prime Minister is allowed to nominate Prime Minister's Resignation Honours, to reward political and personal service. As of 2009, Tony Blair has not taken up this privilege. 
| Complete name | Ranks / Letters | Established | Founder | Motto | Awarded to/for | Associated awards |
| --- | --- | --- | --- | --- | --- | --- |
| The Most Noble Order of the Garter | KG/LG | 1348 | King Edward III | Honi soit qui mal y pense ("Shame upon him who thinks evil of it") | Relating to England and Wales | None |
| The Most Ancient and Most Noble Order of the Thistle | KT/LT | 1687 | James II | Nemo me impune lacessit ("No one provokes me with impunity") | Relating to Scotland | None |
| The Most Honourable Order of the Bath | GCB, KCB, CB | 18 May 1725 | George I | Tria iuncta in uno ("Three joined in one") | Civil division: senior civil servants; military division: senior military officers | None |
| The Most Distinguished Order of Saint Michael and Saint George | GCMG, KCMG, CMG | 28 April 1818 | The Prince Regent | Auspicium melioris ævi ("Token of a better age") | Diplomats | None |
| The Distinguished Service Order | DSO (plus bars) | 6 September 1886 | Queen Victoria | None | Military officers in wartime | None |
| The Royal Victorian Order | GCVO, KCVO/DCVO, CVO, LVO, MVO | 21 April 1896 | Queen Victoria | Victoria ("Victory") | Services to the Crown | The Royal Victorian Medal, The Royal Victorian Chain |
| The Order of Merit | OM | 1902 | King Edward VII | For merit | Military, science, art, literature, culture | None |
| The Imperial Service Order | ISO | August 1902 | King Edward VII | For faithful service | Civil servants of 25 years' standing (in administrative or clerical capacity) | The Imperial Service Medal |
| The Most Excellent Order of the British Empire | GBE, KBE/DBE, CBE, OBE, MBE | 4 June 1917 | King George V | For God and the Empire | Miscellaneous (military and civil) | The British Empire Medal |
| The Order of the Companions of Honour | CH | June 1917 | King George V | In action faithful and in honour clear | Arts, science, politics, industry, religion | None |

Orders were created for particular reasons at particular times. In some cases these reasons have ceased to have any validity and orders have fallen into abeyance, primarily due to the decline of the British Empire during the twentieth century.
Reforms of the system have sometimes made other changes. For example, the British Empire Medal ceased to be awarded in the UK in 1993, as did the companion-level award of the Imperial Service Order (although its medal is still used). These changes were made because it was believed they perpetuated "class" differences. The Royal Guelphic Order, also known as the Hanoverian Guelphic Order, was a three-class honour founded in 1815. Awards were made in two divisions (civil and military). In the UK it was used only briefly, until 1837, when the death of William IV ended the personal union with Hanover. The orders relating to the British Raj, or the British Indian Empire, are also defunct. The senior order, the Order of the Star of India, was divided into three grades- Knight Grand Commander, Knight Commander and Companion- of which the first and highest was conferred upon the Princes and Chiefs of Indian states and upon important British civil servants working in India. Women were not eligible to receive the award. The junior order, the Order of the Indian Empire, was divided into the same ranks and also excluded women. The third order, the Order of the Crown of India, was used exclusively to honour women. Its members, all sharing a single grade, consisted of the wives and close female relatives of Indian Princes or Chiefs; the Viceroy or Governor-General; the Governors of Bombay, Madras and Bengal; the Principal Secretary of State for India; and the Commander-in-Chief in India. Upon Indian independence in 1947, appointments to all these orders ceased. H.H. Maharaja Tej Singh Prabhakar Bahadur of Alwar, who was a KCSI and the last surviving member of the Order of the Star of India, died in February 2009, aged 97. The Order of the Indian Empire has a single surviving member, H.H. Maharaja Meghrajji III of Dhrangadhra-Halvad, a KCIE.
Queen Elizabeth II was appointed to the Order of the Crown of India (then as Princess Elizabeth) and is the last surviving former member of that order. The Queen also remains the Sovereign of the Indian orders, as they have never been abolished. The Order of Burma was created in May 1940 by King George VI of the United Kingdom to recognise subjects of the British colony of Burma. This order had one class, which entitled the member to the postnominal letters OB but no title. It was originally intended to reward long and faithful service by the military and police. In 1945 the Royal Warrant was altered to allow membership for acts of gallantry as well as meritorious service. The Order was one of the most rarely awarded, with only 33 appointments by the time appointments were discontinued in 1948, when Burma declared independence. The decorations awarded are, in order of wear: The last three have not been awarded since 1947. On 1 July 2009, BBC News reported that the Queen had approved a new award, the Elizabeth Cross, to honour those killed in action or by terrorist attack since World War II. The award is by nature posthumous, the cross itself being given to the family of the honoured. There are five ranks of hereditary peerage: Duke, Marquess, Earl, Viscount and Baron. Until the mid-20th century, peerages were usually hereditary (bar legal peerages- see below) and, until the end of the 20th century, English, British and UK peerages (except, until very recent times, those for the time being held by women) carried the right to a seat in the House of Lords. Hereditary peerages are now normally only given to members of the Royal Family. The most recent was the grant to the Queen's youngest son, the Earl of Wessex, on his marriage in 1999.
No hereditary peerages were granted to commoners after the Labour Party came to power in 1964, until Margaret Thatcher tentatively reintroduced them by two grants to men with no sons in 1983: the Speaker of the House of Commons, George Thomas, and the former Deputy Prime Minister William Whitelaw. Both these titles died with their holders. She followed this with an earldom in 1984 for the former Prime Minister Harold Macmillan, not long before his death, reviving a traditional honour for former Prime Ministers. Macmillan's grandson succeeded him on his death in 1986. No hereditary peerages have been created since, and Thatcher's own title is a life peerage (see further explanation below). The concession of a baronetcy (i.e. a hereditary knighthood) was granted to Margaret Thatcher's husband Denis following her resignation (explained below; see Baronetcy). Modern life peerages were introduced under the Appellate Jurisdiction Act 1876, following a test case (the Wensleydale Peerage Case) which established that non-statutory life peers would not have the right to sit in the House of Lords. At that time, life peerages were intended only for Law Lords, there being a desire to introduce legal expertise into the chamber to assist appellate law work, without conferring rights on future generations of these early working peers, because the future generations might contain no legal experts. Subsequently, under the Life Peerages Act 1958, life peerages became the norm for all new grants outside the Royal Family, this being seen as a modest reform of the nature of the second legislative chamber. However, its effects were gradual, because hereditary peers, and their successors, retained until recently their rights to attend and vote with the life peers. All hereditary peers except 92- chosen in a secret ballot of all hereditary peers- have now lost their rights to sit in the second chamber.
All hereditary peers retain dining rights to the House of Lords, which retains its reputation as "the best club in London". All life peers hold the rank of Baron and automatically have the right to sit in the House of Lords. The title exists only for the duration of their own lifetime and is not passed to their heirs (although the children even of life peers enjoy the same courtesy titles as hereditary peers). Some life peerages are created as an honour for achievement, some for the specific purpose of introducing legislators from the various political parties (known as working peers) and some under the Appellate Jurisdiction Act 1876, with a view to judicial work. A small number are appointed as "People's Peers" on the recommendation of the general public. Twenty-six Church of England bishops have a seat in the House of Lords as of right. As a life peerage is not technically an "honour under the Crown", it cannot be withdrawn once granted. Thus, while knighthoods have been withdrawn as "honours under the Crown", convicted criminals who have served their sentences have returned to the House of Lords. In the case of Lord Archer of Weston-super-Mare, he has chosen only to exercise dining rights and has yet to speak following his release from prison after his conviction for perjury. A baronetcy is a hereditary honour carrying the title Sir. Baronetcies are not peerages; they are usually considered a species of knighthood. When a baronetcy becomes vacant on the death of a holder, the heir, if he wishes to be addressed as "Sir", is required to register the proofs of succession. The Official Roll of Baronets is kept at the Home Office by the Registrar of the Baronetage. Anyone who considers that he is entitled to be entered on the Roll may petition the Crown through the Home Secretary. Anyone succeeding to a baronetcy therefore must exhibit proofs of succession to the Home Secretary.
A person who is not entered on the Roll will not be addressed or mentioned as a baronet or accorded precedence as a baronet, effectively declining the honour. The baronetcy can be revived at any time on provision of acceptable proofs of succession. There will at any time be numerous baronets who intend proving succession but who have yet to do so. About 83 baronetcies are listed as awaiting proofs of succession. Notable examples include Jonathon Porritt, lately of Friends of the Earth; Ferdinand Mount, the journalist; and Francis Dashwood [title created 1707]. As with hereditary peerages, baronetcies ceased to be granted after the Labour Party came to power in 1964. The sole subsequent exception was a baronetcy created for the husband of Margaret Thatcher, Sir Denis Thatcher, in 1991, which was inherited by her son, Mark Thatcher, after his father's death. Descended from mediaeval chivalry, knights exist both within the orders of chivalry and in a class known as Knights Bachelor. Regular recipients include High Court judges and senior civil servants. Knighthood carries the title Sir; the female equivalent, Dame, only exists within the orders of chivalry. Members of the royal order of chivalry, the Most Venerable Order of St John of Jerusalem (founded 1888), may wear the Order's insignia, but the ranks within the Order of St John do not confer official rank on the order of precedence; likewise, the abbreviations or postnominal initials associated with the various grades of membership in the Order of St John do not indicate precedence among the other orders. Thus someone knighted in the order does not take precedence with the knights of other British orders, nor should they be addressed as "Sir" or "Dame". Other British and Commonwealth orders, decorations and medals which do not carry titles but entitle the holder to place post-nominal letters after his or her name also exist, as do a small number of Royal Family Orders.
Citizens of countries which do not have the Queen as their head of state sometimes have honours conferred upon them, in which case the awards are "honorary". In the case of knighthoods, the holders are entitled to place initials behind their name but not style themselves "Sir". Examples of foreigners with honorary knighthoods are Billy Graham, Bill Gates, Bob Geldof, Bono and Rudolph Giuliani, while Arsène Wenger and Gérard Houllier are honorary OBEs. Honorary knighthoods arise from Orders of Chivalry rather than as Knights Bachelor as the latter confers no postnominal letters. Recipients of honorary awards who later become subjects of Her Majesty may apply to convert their awards to substantive ones. Examples of this are Marjorie Scardino, American CEO of Pearson PLC, and Yehudi Menuhin, the American-born violinist and conductor. They were granted an honorary damehood and knighthood respectively while still American citizens, and converted them to substantive awards after they assumed British nationality, becoming Dame Marjorie and Sir Yehudi. Menuhin later accepted a life peerage with the title Lord Menuhin. Tony O'Reilly, who holds both British and Irish nationality, uses the style "Sir", but has also gained approval from the Irish Government to accept the award as is necessary under the Irish Constitution. Elisabeth Schwarzkopf, the German soprano, became entitled to be known as "Dame Elisabeth" when she took British nationality. Irish-born Sir Terry Wogan was initially awarded an honorary knighthood, but by the time he collected the accolade from the Queen in December 2005, he had obtained dual nationality and the award was upgraded to a substantive knighthood. Bob Geldof is often erroneously referred to as "Sir Bob", though he does not have British nationality and does not appear in the British Knightage. There is no law in the UK preventing foreigners from holding a peerage, though only Commonwealth and Irish citizens may sit in the House of Lords. 
This has yet to be tested under the new arrangements. However, some other countries have laws restricting the acceptance of awards from foreign powers. In Canada, where the House of Commons has opposed the granting of titular honours through its Nickle Resolution, then Prime Minister Jean Chrétien advised the Queen not to grant Conrad Black a titular honour while he remained a Canadian citizen. Each year, around 2,600 people receive their awards personally from The Queen or a member of the Royal Family. Approximately 22 Investitures are held annually in Buckingham Palace, one or two at the Palace of Holyroodhouse in Edinburgh and one in Cardiff. There are approximately 120 recipients at each Investiture. The Queen usually conducts the Investitures, although The Prince of Wales and The Princess Royal also hold some Investitures on her behalf. During the ceremony, The Queen enters the Ballroom of Buckingham Palace attended by two Gurkha Orderly Officers, a tradition begun in 1876 by Queen Victoria. On duty on the dais are five members of The Queen's Body Guard of the Yeomen of the Guard, which was created in 1485 by Henry VII; they are the oldest military corps in the United Kingdom. Four Gentlemen Ushers are on duty to help look after the recipients and their guests. The Queen is escorted by either the Lord Chamberlain or the Lord Steward. After the National Anthem has been played, he stands to the right of The Queen and announces the name of each recipient and the achievement for which they are being decorated. The Queen is given a brief background on each recipient by her Equerry as they approach to receive their award. Those who are to be knighted kneel on an investiture stool to receive the Accolade, which is bestowed by The Queen using the sword which her father, George VI, used when, as Duke of York, he was Colonel of the Scots Guards.
Occasionally an award for gallantry may be made posthumously, in which case The Queen presents the decoration or medal to the recipient's next-of-kin in private before the public Investiture begins. After the award ceremony, those honoured are ushered out of the Ballroom into the Inner Quadrangle of Buckingham Palace, where the Royal Rota of photographers are stationed. Here, recipients are photographed with their awards. In some cases, members of the press may interview some of the better-known recipients of honours. A small number of people each year refuse the offer of an award, usually for personal reasons; conversely, honours are sometimes removed (forfeited) if a recipient is convicted of a criminal offence or for political reasons. In 2009, Gordon Brown confirmed that the process remains as set out in 1994 by the then Prime Minister John Major in a written answer to the House of Commons: The statutes of most orders of knighthood and the royal warrants of decorations and medals include provision for the Queen to "cancel and annul" appointments and awards. Cancellation is considered in cases where retention of the appointment or award would bring the honours system into disrepute. There are no set guidelines for cancellations, which are considered on a case-by-case basis. Since 1979, the London Gazette has published details of cancellations of 15 appointments and awards—three knighthoods, one CBE, five OBEs, four MBEs and two BEMs. A number of well-known persons have forfeited their honours in this way. Honours, decorations and medals are arranged in "order of wear", an official list which describes the order in which they should be worn. The list places the Victoria and George Crosses at the top, followed by the orders of knighthood arranged in order of date of creation. Individuals of a higher rank precede those of a lower rank.
For instance, a Knight Grand Cross always precedes a Knight Commander. For those of equal rank, members of the higher-ranked Order take precedence. Within the same Order, precedence is accorded to that individual who received the honour earlier. Not all orders have the same number of ranks. The Order of Merit, the Order of the Companions of Honour, the Distinguished Service Order and the Imperial Service Order are slightly different, being single-rank awards, and have been placed at appropriate positions of seniority. Knights Bachelor come after knights in the orders, but before those with the rank of Commander or lower. Decorations are followed by medals of various categories, being arranged in date order within each section. These are followed by Commonwealth and honorary foreign awards of any level. Miscellaneous details are explained in notes at the bottom of the list. The order of wear is not connected to and should not be confused with the Order of precedence. For peers, see Forms of address in the United Kingdom. For baronets, the style Sir John Smith, Bt (or Bart) is used. Their wives are styled simply Lady Smith. The rare baronetess is styled Dame Jane Smith, Btss. For knights, the style Sir John Smith, [ postnominals ] is used, attaching the proper postnominal letters depending on rank and order (for knights bachelor, no postnominal letters are used). Their wives are styled Lady Smith, with no postnominal letters. A dame is styled Dame Jane Smith, [postnominals]. More familiar references or oral addresses use the first name only, e.g. Sir Alan, or Dame Judy. Wives of knights and baronets are officially styled Lady Smith as a courtesy title only. Recipients of orders, decorations and medals receive no styling of Sir or Dame, but they may attach the according postnominal letters to their name, e.g. John Smith, VC. 
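The order-of-wear rules above amount to a lexicographic sort: decoration class first (crosses before orders before medals), then seniority of the order by date of creation, then rank within the order, then date of award. A minimal sketch, where the class codes, seniority years and rank numbers are hypothetical stand-ins rather than official values:

```python
# Order-of-wear precedence as a lexicographic sort key (illustrative sketch;
# the class/seniority/rank numbers below are hypothetical, not official data).

# Lower tuple values sort first, i.e. are worn closer to the front.
AWARD_CLASS = {"VC": 0, "GC": 0, "order": 1, "decoration": 2, "medal": 3}
ORDER_SENIORITY = {"Garter": 1348, "Bath": 1725, "British Empire": 1917}  # year founded
RANK = {"Knight Grand Cross": 1, "Knight Commander": 2, "Commander": 3}

def wear_key(award):
    """Lexicographic key: class, then order seniority, then rank, then award date."""
    return (
        AWARD_CLASS[award["class"]],
        ORDER_SENIORITY.get(award.get("order"), 0),
        RANK.get(award.get("rank"), 0),
        award.get("year_awarded", 0),
    )

awards = [
    {"class": "order", "order": "British Empire", "rank": "Knight Commander", "year_awarded": 1999},
    {"class": "VC", "year_awarded": 1944},
    {"class": "order", "order": "Bath", "rank": "Knight Grand Cross", "year_awarded": 2001},
]
for a in sorted(awards, key=wear_key):
    print(a["class"], a.get("order", ""))
```

The Victoria Cross sorts first regardless of date, the Order of the Bath precedes the Order of the British Empire by seniority, and rank within an order only breaks ties inside that order, mirroring the prose rules.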
Recipients of gallantry awards may be referred to in Parliament as "gallant", in addition to "honourable", "noble", etc.: the honourable and gallant Gentleman. Bailiffs or Dames Grand Cross (GCStJ), Knights/Dames of Justice/Grace (KStJ/DStJ), Commander Brothers/Sisters (CStJ), Officer Brothers/Sisters (OStJ), Serving Brothers/Sisters (SBStJ/SSStJ) and Esquires (EsqStJ) of the Order of St John do not receive any special styling with regard to prenominal address, i.e. Sir or Dame. They may, however, attach the relevant postnominal initials. Reforms of the system occur from time to time. In the last century notable changes to the system have included a Royal Commission in 1925, following the scandal in which Prime Minister David Lloyd George was found to be selling honours, and a review in 1993, when Prime Minister John Major created the public nominations system. In July 2004, the Public Administration Select Committee (PASC) of the House of Commons and, concurrently, Sir Hayden Phillips, Permanent Secretary at the Department for Constitutional Affairs, both concluded reviews of the system. The PASC recommended some radical changes; Sir Hayden concentrated on issues of procedure and transparency. In February 2005 the Government responded to both reviews by issuing a Command Paper detailing which of the proposed changes it had accepted. These included diversifying and opening up the system of honours selection committees for the Prime Minister's list, and the introduction of a miniature badge. The Sunday Times has reported that every donor who has given £1,000,000 or more to the Labour Party since 1997 has received a knighthood or a peerage. In addition, the government has given honours to 12 of the 14 individuals who have given Labour more than £200,000, and of the 22 who donated more than £100,000, 17 received honours. Eighty per cent of the money raised from individuals for the Labour Party comes from those who have received honours.
The Harold Wilson era was marred by a similar controversy over the 1976 Prime Minister's Resignation Honours, which became known as the "Lavender List".
Mass spectrometry (MS) is an analytical technique for determining the elemental composition of a sample or molecule. It is also used for elucidating the chemical structures of molecules, such as peptides and other chemical compounds. The MS principle consists of ionizing chemical compounds to generate charged molecules or molecule fragments and measuring their mass-to-charge ratios. In a typical MS procedure, a sample is loaded onto the instrument and vaporized; its components are then ionized, sorted according to their mass-to-charge ratio, and detected. MS instruments consist of three modules: an ion source, which can convert gas phase sample molecules into ions (or, in the case of electrospray ionization, move ions that exist in solution into the gas phase); a mass analyzer, which sorts the ions by their masses by applying electromagnetic fields; and a detector, which measures the value of an indicator quantity and thus provides data for calculating the abundances of each ion present. The technique has both qualitative and quantitative uses. These include identifying unknown compounds, determining the isotopic composition of elements in a molecule, and determining the structure of a compound by observing its fragmentation. Other uses include quantifying the amount of a compound in a sample or studying the fundamentals of gas phase ion chemistry (the chemistry of ions and neutrals in a vacuum). MS is now in very common use in analytical laboratories that study physical, chemical, or biological properties of a great variety of compounds. The word spectrograph has been part of the international scientific vocabulary since 1884. Its linguistic roots are a combination of bound and free morphemes relating to the terms spectr-um and phot-ograph-ic plate. Early spectrometry devices that measured the mass-to-charge ratio of ions were called mass spectrographs, instruments that recorded a spectrum of mass values on a photographic plate.
A mass spectroscope is similar to a mass spectrograph except that the beam of ions is directed onto a phosphor screen; the suffix -scope denotes the direct viewing of the spectrum (range) of masses. This configuration was used in early instruments when it was desired that the effects of adjustments be quickly observed. Once the instrument was properly adjusted, a photographic plate was inserted and exposed. The term mass spectroscope continued to be used even though the direct illumination of a phosphor screen was replaced by indirect measurements with an oscilloscope. The use of the term mass spectroscopy is now discouraged due to the possibility of confusion with light spectroscopy. Mass spectrometry is often abbreviated as mass-spec or simply as MS. In 1886, Eugen Goldstein observed rays in gas discharges under low pressure that traveled away from the anode and through channels in a perforated cathode, opposite to the direction of negatively charged cathode rays (which travel from cathode to anode). Goldstein called these positively charged anode rays "Kanalstrahlen"; the standard translation of this term into English is "canal rays". Wilhelm Wien found that strong electric or magnetic fields deflected the canal rays and, in 1899, constructed a device with parallel electric and magnetic fields that separated the positive rays according to their charge-to-mass ratio (Q/m). Wien found that the charge-to-mass ratio depended on the nature of the gas in the discharge tube. The English scientist J.J. Thomson later improved on the work of Wien by reducing the pressure to create a mass spectrograph. The first application of mass spectrometry to the analysis of amino acids and peptides was reported in 1958, when Carl-Ove Andersson highlighted the main fragment ions observed in the ionization of methyl esters.
Some of the modern techniques of mass spectrometry were devised by Arthur Jeffrey Dempster and F.W. Aston in 1918 and 1919 respectively. In 1989, half of the Nobel Prize in Physics was awarded to Hans Dehmelt and Wolfgang Paul for the development of the ion trap technique in the 1950s and 1960s. In 2002, the Nobel Prize in Chemistry was awarded to John Bennett Fenn for the development of electrospray ionization (ESI) and Koichi Tanaka for the development of soft laser desorption (SLD), and their application to the ionization of biological macromolecules, especially proteins. The earlier development of matrix-assisted laser desorption/ionization (MALDI) by Franz Hillenkamp and Michael Karas has not been so recognized, despite the comparable (arguably greater) practical impact of this technique, particularly in the field of protein analysis. This is because although MALDI was first reported in 1985, it was not applied to the ionization of proteins until 1988, after Tanaka's report. (Figure: schematic of a simple mass spectrometer with a sector-type mass analyzer, used for measuring carbon dioxide isotope ratios (IRMS) as in the carbon-13 urea breath test.) The following example describes the operation of a spectrometer mass analyzer of the sector type. (Other analyzer types are treated below.) Consider a sample of sodium chloride (table salt). In the ion source, the sample is vaporized (turned into gas) and ionized (transformed into electrically charged particles) into sodium (Na+) and chloride (Cl-) ions. Sodium atoms and ions are monoisotopic, with a mass of about 23 amu. Chlorine atoms and ions come in two isotopes with masses of approximately 35 amu (at a natural abundance of about 75 percent) and approximately 37 amu (at a natural abundance of about 25 percent). The analyzer part of the spectrometer contains electric and magnetic fields, which exert forces on ions traveling through these fields.
The speed of a charged particle may be increased or decreased while passing through the electric field, and its direction may be altered by the magnetic field. The magnitude of the deflection of the moving ion's trajectory depends on its mass-to-charge ratio. Lighter ions get deflected by the magnetic force more than heavier ions (based on Newton's second law of motion, F = ma). The streams of sorted ions pass from the analyzer to the detector, which records the relative abundance of each ion type. This information is used to determine the chemical element composition of the original sample (i.e. that both sodium and chlorine are present in the sample) and the isotopic composition of its constituents (the ratio of 35Cl to 37Cl). Ion source technologies The ion source is the part of the mass spectrometer that ionizes the material under analysis (the analyte). The ions are then transported by magnetic or electric fields to the mass analyzer. Techniques for ionization have been key to determining what types of samples can be analyzed by mass spectrometry. Electron ionization and chemical ionization are used for gases and vapors. In chemical ionization sources, the analyte is ionized by chemical ion-molecule reactions during collisions in the source. Two techniques often used with liquid and solid biological samples include electrospray ionization (invented by John Fenn) and matrix-assisted laser desorption/ionization (MALDI, developed by K. Tanaka and separately by M. Karas and F. Hillenkamp). Inductively coupled plasma (ICP) sources are used primarily for cation analysis of a wide array of sample types. In this type of Ion Source Technology, a 'flame' of plasma that is electrically neutral overall, but that has had a substantial fraction of its atoms ionized by high temperature, is used to atomize introduced sample molecules and to further strip the outer electrons from those atoms. 
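In a magnetic sector, an ion accelerated through a potential V enters the field with speed v = sqrt(2QV/m) and follows a circular arc of radius r = mv/(QB) = sqrt(2Vm/Q)/B, so the heavier 37Cl- ion is deflected less and lands at a larger radius than 35Cl-. A sketch of this calculation, using illustrative (hypothetical) instrument values of 5 kV and 0.5 T:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # unified atomic mass unit, kg

def sector_radius(mass_amu, charge, accel_volts, b_tesla):
    """Radius of the circular path of an ion in a magnetic sector:
    v = sqrt(2 Q V / m);  r = m v / (Q B) = sqrt(2 V m / Q) / B."""
    m = mass_amu * AMU
    q = charge * E_CHARGE
    return math.sqrt(2.0 * accel_volts * m / q) / b_tesla

# Chloride isotopes through a hypothetical 5 kV, 0.5 T sector instrument:
r35 = sector_radius(35.0, 1, 5000.0, 0.5)
r37 = sector_radius(37.0, 1, 5000.0, 0.5)
print(f"35Cl-: {r35*100:.2f} cm, 37Cl-: {r37*100:.2f} cm")
```

The two isotopes separate by a few millimetres of radius at these field strengths, which is what lets the detector resolve them as distinct peaks in roughly 3:1 abundance.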
The plasma is usually generated from argon gas, since the first ionization energy of argon atoms is higher than that of any other element except He, F and Ne, but lower than the second ionization energy of all except the most electropositive metals. The heating is achieved by a radio-frequency current passed through a coil surrounding the plasma. Others include glow discharge, field desorption (FD), fast atom bombardment (FAB), thermospray, desorption/ionization on silicon (DIOS), Direct Analysis in Real Time (DART), atmospheric pressure chemical ionization (APCI), secondary ion mass spectrometry (SIMS), spark ionization and thermal ionization (TIMS). Ion attachment ionization is a newer soft ionization technique that allows for fragmentation-free analysis. Mass analyzer technologies Mass analyzers separate the ions according to their mass-to-charge ratio. The following two laws govern the dynamics of charged particles in electric and magnetic fields in vacuum: F = Q(E + v × B) (the Lorentz force law) and F = ma (Newton's second law of motion in the non-relativistic case, i.e. valid only at ion velocities much lower than the speed of light). Here F is the force applied to the ion, m is the mass of the ion, a is the acceleration, Q is the ion charge, E is the electric field, and v × B is the vector cross product of the ion velocity and the magnetic field. Equating the above expressions for the force applied to the ion yields (m/Q) a = E + v × B. This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data, it is common to use the (officially) dimensionless m/z, where z is the number of elementary charges (e) on the ion (z = Q/e).
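The claim that the equation of motion determines the trajectory purely in terms of m/Q can be checked numerically: integrating a = (Q/m)(E + v × B) for two ions with different mass and charge but the same ratio gives identical paths. A minimal Euler-integration sketch, with hypothetical field values:

```python
import numpy as np

def trajectory(m, q, e_field, b_field, v0, steps=2000, dt=1e-9):
    """Integrate a = (Q/m)(E + v x B) with a simple forward-Euler scheme."""
    pos = np.zeros(3)
    vel = np.array(v0, dtype=float)
    path = []
    for _ in range(steps):
        accel = (q / m) * (e_field + np.cross(vel, b_field))
        vel = vel + accel * dt
        pos = pos + vel * dt
        path.append(pos.copy())
    return np.array(path)

E = np.array([0.0, 100.0, 0.0])   # V/m, hypothetical
B = np.array([0.0, 0.0, 0.01])    # T, hypothetical
v0 = [1e4, 0.0, 0.0]              # initial velocity, m/s

AMU, E_CH = 1.6605e-27, 1.602e-19
p1 = trajectory(40 * AMU, 1 * E_CH, E, B, v0)   # e.g. a singly charged ion at m/z 40
p2 = trajectory(80 * AMU, 2 * E_CH, E, B, v0)   # twice the mass, twice the charge: same m/Q
assert np.allclose(p1, p2)  # identical m/Q gives an identical trajectory
```

Because only the ratio Q/m enters the acceleration, the two integrations are step-for-step identical, which is exactly why the instrument measures m/Q rather than mass alone.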
This quantity, although it is informally called the mass-to-charge ratio, more accurately speaking represents the ratio of the mass number and the charge number, z. There are many types of mass analyzers, using either static or dynamic fields, and magnetic or electric fields, but all operate according to the above differential equation. Each analyzer type has its strengths and weaknesses. Many mass spectrometers use two or more mass analyzers for tandem mass spectrometry (MS/MS). In addition to the more common mass analyzers listed below, there are others designed for special situations. A sector field mass analyzer uses an electric and/or magnetic field to affect the path and/or velocity of the charged particles in some way. As shown above, sector instruments bend the trajectories of the ions as they pass through the mass analyzer, according to their mass-to-charge ratios, deflecting the more charged and faster-moving, lighter ions more. The analyzer can be used to select a narrow range of m/z or to scan through a range of m/z to catalog the ions present. The time-of-flight (TOF) analyzer uses an electric field to accelerate the ions through the same potential, and then measures the time they take to reach the detector. If the particles all have the same charge, the kinetic energies will be identical, and their velocities will depend only on their masses. Lighter ions will reach the detector first. Quadrupole mass filter Quadrupole mass analyzers use oscillating electrical fields to selectively stabilize or destabilize the paths of ions passing through a radio frequency (RF) quadrupole field created between 4 parallel rods. Only the ions in a certain range of mass/charge ratio are passed through the system at any time, but changes to the potentials on the rods allow a wide range of m/z values to be swept rapidly, either continuously or in a succession of discrete hops. 
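For the time-of-flight analyzer described above, ions accelerated through the same potential U acquire kinetic energy zeU = mv²/2, so the flight time over a drift length L is t = L·sqrt(m/(2zeU)): heavier ions arrive later, and t scales with sqrt(m/z). A sketch of the arrival-time calculation, with hypothetical instrument parameters (20 kV acceleration, 1.5 m drift tube):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # unified atomic mass unit, kg

def tof_flight_time(mass_amu, charge, accel_volts, drift_m):
    """t = L * sqrt(m / (2 Q V)) for an ion accelerated through V volts."""
    m = mass_amu * AMU
    q = charge * E_CHARGE
    return drift_m * math.sqrt(m / (2.0 * q * accel_volts))

# Hypothetical 20 kV source with a 1.5 m field-free drift region:
for mz in (100, 400, 1600):
    t = tof_flight_time(mz, 1, 20000.0, 1.5)
    print(f"m/z {mz}: {t * 1e6:.2f} us")
# Quadrupling m/z doubles the flight time, since t is proportional to sqrt(m/z).
```

The square-root scaling is why TOF spectra are calibrated with a quadratic fit of mass against measured time rather than a linear one.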
A quadrupole mass analyzer acts as a mass-selective filter and is closely related to the quadrupole ion trap, particularly the linear quadrupole ion trap, except that it is designed to pass the untrapped ions rather than collect the trapped ones; it is for that reason referred to as a transmission quadrupole. A common variation of the quadrupole is the triple quadrupole. Triple quadrupole mass spectrometers have three consecutive quadrupoles arranged in series. The first quadrupole acts as a mass filter. The second quadrupole acts as a collision cell where selected ions are broken into fragments. The resulting fragments are analyzed by the third quadrupole. Three-dimensional quadrupole ion trap The quadrupole ion trap works on the same physical principles as the quadrupole mass analyzer, but the ions are trapped and sequentially ejected. Ions are trapped in a mainly quadrupole RF field, in a space defined by a ring electrode (usually connected to the main RF potential) between two endcap electrodes (typically connected to DC or auxiliary AC potentials). The sample is ionized either internally (e.g. with an electron or laser beam) or externally, in which case the ions are often introduced through an aperture in an endcap electrode. There are many mass/charge separation and isolation methods, but the most commonly used is the mass instability mode, in which the RF potential is ramped so that the orbits of ions with mass a > b are stable while ions with mass b become unstable and are ejected along the z-axis onto a detector. There are also non-destructive analysis methods. Ions may also be ejected by the resonance excitation method, whereby a supplemental oscillatory excitation voltage is applied to the endcap electrodes, and the trapping voltage amplitude and/or excitation voltage frequency is varied to bring ions into a resonance condition in order of their mass/charge ratio.
The cylindrical ion trap mass spectrometer is a derivative of the quadrupole ion trap mass spectrometer. Linear quadrupole ion trap A linear quadrupole ion trap is similar to a quadrupole ion trap, but it traps ions in a two dimensional quadrupole field, instead of a three-dimensional quadrupole field as in a 3D quadrupole ion trap. Thermo Fisher's LTQ ("linear trap quadrupole") is an example of the linear ion trap. Fourier transform ion cyclotron resonance Fourier transform mass spectrometry (FTMS), or more precisely Fourier transform ion cyclotron resonance MS, measures mass by detecting the image current produced by ions cyclotroning in the presence of a magnetic field. Instead of measuring the deflection of ions with a detector such as an electron multiplier, the ions are injected into a Penning trap (a static electric/magnetic ion trap) where they effectively form part of a circuit. Detectors at fixed positions in space measure the electrical signal of ions which pass near them over time, producing a periodic signal. Since the frequency of an ion's cycling is determined by its mass to charge ratio, this can be deconvoluted by performing a Fourier transform on the signal. FTMS has the advantage of high sensitivity (since each ion is "counted" more than once) and much higher resolution and thus precision. Ion cyclotron resonance (ICR) is an older mass analysis technique similar to FTMS except that ions are detected with a traditional detector. Ions trapped in a Penning trap are excited by an RF electric field until they impact the wall of the trap, where the detector is located. Ions of different mass are resolved according to impact time. Very similar nonmagnetic FTMS has been performed, where ions are electrostatically trapped in an orbit around a central, spindle shaped electrode. The electrode confines the ions so that they both orbit around the central electrode and oscillate back and forth along the central electrode's long axis. 
This oscillation generates an image current in the detector plates which is recorded by the instrument. The frequencies of these image currents depend on the mass to charge ratios of the ions. Mass spectra are obtained by Fourier transformation of the recorded image currents. Similar to Fourier transform ion cyclotron resonance mass spectrometers, Orbitraps have a high mass accuracy, high sensitivity and a good dynamic range. Toroidal Ion Trap The toroidal ion trap is visualized as a linear quadrupole curved around and connected at the ends or as a cross section of a 3D ion trap rotated on edge to form the toroid, donut shaped trap. The trap can store large volumes of ions by distributing them throughout the ring-like trap structure. This toroidal shaped trap is a configuration that allows the increased miniaturization of an ion trap mass analyzer. Additionally all ions are stored in the same trapping field and ejected together simplifying detection that can be complicated with array configurations due to variations in detector alignment and machining of the arrays. The final element of the mass spectrometer is the detector. The detector records either the charge induced or the current produced when an ion passes by or hits a surface. In a scanning instrument, the signal produced in the detector during the course of the scan versus where the instrument is in the scan (at what m/Q) will produce a mass spectrum, a record of ions as a function of m/Q. Typically, some type of electron multiplier is used, though other detectors including Faraday cups and ion-to-photon detectors are also used. Because the number of ions leaving the mass analyzer at a particular instant is typically quite small, considerable amplification is often necessary to get a signal. Microchannel plate detectors are commonly used in modern commercial instruments. 
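The image-current detection described above recovers mass from frequency through the cyclotron relation f = QB/(2πm), so m/z follows directly from the deconvoluted frequency. A minimal sketch of the conversion, assuming a hypothetical 7 T magnet:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # unified atomic mass unit, kg

def cyclotron_freq_hz(mz, b_tesla):
    """f = Q B / (2 pi m), with m expressed through m/z in amu per charge."""
    return (E_CHARGE * b_tesla) / (2.0 * math.pi * mz * AMU)

def mz_from_freq(freq_hz, b_tesla):
    """Invert the relation: m/z = Q B / (2 pi f)."""
    return (E_CHARGE * b_tesla) / (2.0 * math.pi * freq_hz * AMU)

B = 7.0  # T, a hypothetical but typical FT-ICR magnet strength
f = cyclotron_freq_hz(500.0, B)   # ion at m/z 500
print(f"m/z 500 cycles at {f / 1000:.1f} kHz in a {B} T field")
```

Because frequency can be measured far more precisely than a deflection or arrival time, this inverse relationship is the source of the very high mass accuracy quoted for FT instruments.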
In FTMS and Orbitraps, the detector consists of a pair of metal surfaces within the mass analyzer/ion trap region which the ions only pass near as they oscillate. No DC current is produced; only a weak AC image current is produced in a circuit between the electrodes. Other inductive detectors have also been used. Mass resolving power The mass resolving power is the measure of the ability to distinguish two peaks of slightly different m/z. Mass accuracy The mass accuracy is the ratio of the m/z measurement error to the true m/z, usually measured in ppm or milli mass units. Mass range The mass range is the range of m/z amenable to analysis by a given analyzer. Linear dynamic range The linear dynamic range is the range over which ion signal is linear with analyte concentration. Speed refers to the time frame of the experiment and ultimately is used to determine the number of spectra per unit time that can be generated. Tandem mass spectrometry A tandem mass spectrometer is one capable of multiple rounds of mass spectrometry, usually separated by some form of molecule fragmentation. For example, one mass analyzer can isolate one peptide from many entering a mass spectrometer. A second mass analyzer then stabilizes the peptide ions while they collide with a gas, causing them to fragment by collision-induced dissociation (CID). A third mass analyzer then sorts the fragments produced from the peptides. Tandem MS can also be done in a single mass analyzer over time, as in a quadrupole ion trap. There are various methods for fragmenting molecules for tandem MS, including collision-induced dissociation (CID), electron capture dissociation (ECD), electron transfer dissociation (ETD), infrared multiphoton dissociation (IRMPD) and blackbody infrared radiative dissociation (BIRD). An important application of tandem mass spectrometry is protein identification. Tandem mass spectrometry enables a variety of experimental sequences.
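The figures of merit defined earlier reduce to simple ratios: resolving power is R = m/Δm (with Δm often taken as the peak width at half maximum) and mass accuracy is the relative error (m_measured − m_true)/m_true expressed in ppm. A sketch with hypothetical peak values:

```python
def resolving_power(mz, fwhm):
    """R = m / delta-m, with delta-m taken as the peak width at half maximum."""
    return mz / fwhm

def mass_accuracy_ppm(measured_mz, true_mz):
    """Relative m/z measurement error, expressed in parts per million."""
    return (measured_mz - true_mz) / true_mz * 1e6

# Hypothetical peak at m/z 500 with a 0.005-wide FWHM, measured at 500.0012:
print(resolving_power(500.0, 0.005))                  # 100000.0
print(round(mass_accuracy_ppm(500.0012, 500.0), 2))   # 2.4
```

A resolving power of 100,000 and sub-5 ppm accuracy are the sort of figures quoted for FT-class instruments, whereas quadrupoles typically operate at unit resolution.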
Many commercial mass spectrometers are designed to expedite the execution of such routine sequences as single reaction monitoring (SRM), multiple reaction monitoring (MRM), and precursor ion scan. In SRM, the first analyzer allows only a single mass through and the second analyzer monitors for a single user defined fragment ion. MRM allows for multiple user defined fragment ions. SRM and MRM are most often used with scanning instruments where the second mass analysis event is duty cycle limited. These experiments are used to increase specificity of detection of known molecules, notably in pharmacokinetic studies. Precursor ion scan refers to monitoring for a specific loss from the precursor ion. The first and second mass analyzers scan across the spectrum as partitioned by a user defined m/z value. This experiment is used to detect specific motifs within unknown molecules. Another type of tandem mass spectrometry used for radiocarbon dating is Accelerator Mass Spectrometry (AMS), which uses very high voltages, usually in the mega-volt range, to accelerate negative ions into a type of tandem mass spectrometer. Common mass spectrometer configurations and techniques When a specific configuration of source, analyzer, and detector becomes conventional in practice, often a compound acronym arises to designate it, and the compound acronym may be better known among nonspectrometrists than the component acronyms. The epitome of this is MALDI-TOF, which simply refers to combining a matrix-assisted laser desorption/ionization source with a time-of-flight mass analyzer. The MALDI-TOF moniker is more widely recognized by the non-mass spectrometrists than MALDI or TOF individually. Other examples include inductively coupled plasma-mass spectrometry (ICP-MS), accelerator mass spectrometry (AMS), Thermal ionization-mass spectrometry (TIMS) and spark source mass spectrometry (SSMS). 
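The SRM experiment described above is, at heart, a two-stage filter: pass one precursor m/z, fragment it, then pass one user-defined product m/z. A toy model of this logic, where the precursor and fragment m/z values are invented for illustration:

```python
# Toy SRM model: Q1 selects a precursor m/z, Q2 fragments it, and Q3 passes
# only the monitored product ion. The fragment table below is hypothetical.

FRAGMENTS = {  # precursor m/z -> product-ion m/z values (invented numbers)
    609.3: [195.1, 397.2, 577.3],
    285.1: [154.0, 193.1],
}

def srm_signal(ions, precursor_mz, product_mz, tol=0.1):
    """Count ions surviving both mass-selection stages."""
    count = 0
    for mz in ions:
        if abs(mz - precursor_mz) > tol:                  # Q1: precursor filter
            continue
        for frag in FRAGMENTS.get(precursor_mz, []):      # Q2: collision cell
            if abs(frag - product_mz) <= tol:             # Q3: product filter
                count += 1
    return count

sample = [609.3, 609.3, 285.1, 450.0]
print(srm_signal(sample, 609.3, 195.1))  # both 609.3 ions yield the monitored fragment
```

Requiring both the precursor and a specific fragment to match is what gives SRM its specificity: an interfering ion must coincide at two m/z values at once to register.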
Sometimes the use of the generic "MS" actually connotes a very specific mass analyzer and detection system, as is the case with AMS, which is always sector based. Certain applications of mass spectrometry have developed monikers that although strictly speaking would seem to refer to a broad application, in practice have come instead to connote a specific or a limited number of instrument configurations. An example of this is isotope ratio mass spectrometry (IRMS), which refers in practice to the use of a limited number of sector based mass analyzers; this name is used to refer to both the application and the instrument used for the application. Chromatographic techniques combined with mass spectrometry An important enhancement to the mass resolving and mass determining capabilities of mass spectrometry is using it in tandem with chromatographic separation techniques. A common combination is gas chromatography-mass spectrometry (GC/MS or GC-MS). In this technique, a gas chromatograph is used to separate different compounds. This stream of separated compounds is fed online into the ion source, a metallic filament to which voltage is applied. This filament emits electrons which ionize the compounds. The ions can then further fragment, yielding predictable patterns. Intact ions and fragments pass into the mass spectrometer's analyzer and are eventually detected. Similar to gas chromatography MS (GC/MS), liquid chromatography mass spectrometry (LC/MS or LC-MS) separates compounds chromatographically before they are introduced to the ion source and mass spectrometer. It differs from GC/MS in that the mobile phase is liquid, usually a mixture of water and organic solvents, instead of gas. Most commonly, an electrospray ionization source is used in LC/MS. There are also some newly developed ionization techniques like laser spray. 
Ion mobility spectrometry/mass spectrometry (IMS/MS or IMMS) is a technique in which ions are first separated by drift time through a neutral gas under an applied electrical potential gradient before being introduced into a mass spectrometer. Drift time is a measure of the radius relative to the charge of the ion. The duty cycle of IMS (the time over which the experiment takes place) is longer than that of most mass spectrometric techniques, so the mass spectrometer can sample along the course of the IMS separation. This produces data about the IMS separation and the mass-to-charge ratio of the ions, in a manner similar to LC/MS. Conversely, the duty cycle of IMS is short relative to liquid chromatography or gas chromatography separations, so IMS can be coupled to such techniques, producing triple modalities such as LC/IMS/MS.

Data and analysis

Mass spectrometry produces various types of data. The most common data representation is the mass spectrum. Certain types of mass spectrometry data are best represented as a mass chromatogram. Types of chromatograms include selected ion monitoring (SIM), total ion current (TIC), and selected reaction monitoring (SRM), among many others. Other types of mass spectrometry data are well represented as a three-dimensional contour map. In this form, the mass-to-charge ratio m/z is on the x-axis, intensity on the y-axis, and an additional experimental parameter, such as time, is recorded on the z-axis.

Mass spectrometry data analysis is a complicated subject that is very specific to the type of experiment producing the data, but some general subdivisions of data are fundamental to understanding any of it. Many mass spectrometers work in either negative ion mode or positive ion mode, and it is very important to know whether the observed ions are negatively or positively charged. This is often important in determining the neutral mass, but it also indicates something about the nature of the molecules.
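Determining the neutral mass from an observed ion, as mentioned above, is simple arithmetic once the charge state and ion type are known. A hedged sketch for the common positive-mode case of a protonated species [M + zH]z+; the example peak is made up:

```python
# Neutral monoisotopic mass recovered from an [M + zH]z+ electrospray
# peak: M = z * (m/z) - z * m_proton.
PROTON_MASS = 1.007276  # Da

def neutral_mass(mz, charge):
    """Neutral mass for a protonated ion observed at m/z `mz` with `charge`."""
    return charge * mz - charge * PROTON_MASS

# A hypothetical doubly charged peptide ion observed at m/z 785.84:
print(round(neutral_mass(785.84, 2), 2))  # -> 1569.67
```

In negative mode the analogous deprotonated species [M - zH]z- would add, rather than subtract, the proton masses, which is why knowing the polarity is essential.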
Different types of ion source result in different arrays of fragments produced from the original molecules. An electron ionization source produces many fragments, mostly singly charged radical ions (odd number of electrons), whereas an electrospray source usually produces non-radical quasimolecular ions that are frequently multiply charged. Tandem mass spectrometry purposely produces fragment ions post-source and can drastically change the sort of data achieved by an experiment. By understanding the origin of a sample, certain expectations can be formed about the component molecules of the sample and their fragmentations. A sample from a synthesis/manufacturing process will probably contain impurities chemically related to the target component. A relatively crudely prepared biological sample will probably contain a certain amount of salt, which may form adducts with the analyte molecules in certain analyses. Results can also depend heavily on how the sample was prepared and how it was run/introduced. An important example is the issue of which matrix is used for MALDI spotting, since much of the energetics of the desorption/ionization event is controlled by the matrix rather than the laser power. Sometimes samples are spiked with sodium or another ion-carrying species to produce adducts rather than a protonated species.

The greatest source of trouble when non-mass spectrometrists try to conduct mass spectrometry on their own, or collaborate with a mass spectrometrist, is inadequate definition of the research goal of the experiment. Adequate definition of the experimental goal is a prerequisite for collecting the proper data and successfully interpreting it. Among the determinations that can be achieved with mass spectrometry are molecular mass, molecular structure, and sample purity. Each of these questions requires a different experimental procedure. Simply asking for a "mass spec" will most likely not answer the real question at hand.
Interpretation of mass spectra

Since the precise structure or peptide sequence of a molecule is deciphered through the set of fragment masses, the interpretation of mass spectra requires the combined use of various techniques. Usually the first strategy for identifying an unknown compound is to compare its experimental mass spectrum against a library of mass spectra. If the search comes up empty, manual or software-assisted interpretation of the spectra is performed. Computer simulation of the ionization and fragmentation processes occurring in the mass spectrometer is the primary tool for assigning a structure or peptide sequence to a molecule: a priori structural information is fragmented in silico and the resulting pattern is compared with the observed spectrum. Such simulation is often supported by a fragmentation library that contains published patterns of known decomposition reactions. Software taking advantage of this idea has been developed for both small molecules and proteins.

Another way of interpreting mass spectra involves spectra with accurate mass. A mass-to-charge ratio value (m/z) with only integer precision can represent an immense number of theoretically possible ion structures. More precise mass figures significantly reduce the number of candidate molecular formulas, although each formula can still represent a large number of structurally diverse compounds. A computer algorithm called a formula generator calculates all molecular formulas that theoretically fit a given mass within a specified tolerance. A recent technique for structure elucidation in mass spectrometry, called precursor ion fingerprinting, identifies individual pieces of structural information by searching the tandem spectra of the molecule under investigation against a library of product-ion spectra of structurally characterized precursor ions.
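A formula generator of the kind described above can be sketched as a brute-force search. The element set (CHNO only), atom limits, target mass, and tolerance below are illustrative assumptions, not values from the text:

```python
# Illustrative brute-force "formula generator": enumerate CHNO formulas
# whose monoisotopic mass falls within a tolerance of a target mass.
MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def formula_generator(target_mass, tolerance, max_atoms=20):
    """Return (nC, nH, nN, nO, mass) tuples matching the target mass."""
    hits = []
    for c in range(max_atoms + 1):
        for h in range(max_atoms + 1):
            for n in range(max_atoms + 1):
                for o in range(max_atoms + 1):
                    mass = (c * MONOISOTOPIC["C"] + h * MONOISOTOPIC["H"]
                            + n * MONOISOTOPIC["N"] + o * MONOISOTOPIC["O"])
                    if abs(mass - target_mass) <= tolerance:
                        hits.append((c, h, n, o, round(mass, 6)))
    return hits

# Glycine's monoisotopic mass is about 75.032 Da; a 5 mDa tolerance
# should recover its formula, C2H5NO2, among the candidates.
print(formula_generator(75.032, tolerance=0.005))
```

Real formula generators add chemical plausibility filters (valence rules, ring/double-bond equivalents) to prune the candidate list further; this sketch shows only the mass-matching core.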
Isotope ratio MS: isotope dating and tracking

Mass spectrometry is also used to determine the isotopic composition of elements within a sample. Differences in mass among isotopes of an element are very small, and the less abundant isotopes of an element are typically very rare, so a very sensitive instrument is required. These instruments, sometimes referred to as isotope ratio mass spectrometers (IR-MS), usually use a single magnet to bend a beam of ionized particles towards a series of Faraday cups, which convert particle impacts to electric current. A fast on-line analysis of the deuterium content of water can be done using flowing afterglow mass spectrometry (FA-MS). Probably the most sensitive and accurate mass spectrometer for this purpose is the accelerator mass spectrometer (AMS). Isotope ratios are important markers of a variety of processes. Some isotope ratios are used to determine the age of materials, for example in carbon dating. Labeling with stable isotopes is also used for protein quantification (see protein characterization below).

Trace gas analysis

Several techniques use ions created in a dedicated ion source and injected into a flow tube or a drift tube: selected ion flow tube mass spectrometry (SIFT-MS) and proton transfer reaction mass spectrometry (PTR-MS) are variants of chemical ionization dedicated to trace gas analysis of air, breath, or liquid headspace. Their well-defined reaction times allow calculation of analyte concentrations from the known reaction kinetics without the need for an internal standard or calibration.

An atom probe is an instrument that combines time-of-flight mass spectrometry and field ion microscopy (FIM) to map the location of individual atoms.

Pharmacokinetics is often studied using mass spectrometry because of the complex nature of the matrix (often blood or urine) and the need for high sensitivity to observe low dose and long time point data. The most common instrumentation used in this application is LC-MS with a triple quadrupole mass spectrometer.
Tandem mass spectrometry is usually employed for added specificity. Standard curves and internal standards are used for quantitation, usually of a single pharmaceutical in the samples. The samples represent different time points as a pharmaceutical is administered and then metabolized or cleared from the body. Blank or t=0 samples taken before administration are important in determining background and ensuring data integrity with such complex sample matrices. Much attention is paid to the linearity of the standard curve; however, it is not uncommon to use curve fitting with more complex functions, such as quadratics, since the response of most mass spectrometers is less than linear across large concentration ranges. There is currently considerable interest in the use of very high sensitivity mass spectrometry for microdosing studies, which are seen as a promising alternative to animal experimentation.

Mass spectrometry is an important emerging method for the characterization of proteins. The two primary methods for ionization of whole proteins are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In keeping with the performance and mass range of available mass spectrometers, two approaches are used for characterizing proteins. In the first, intact proteins are ionized by either of the two techniques described above and then introduced to a mass analyzer; this approach is referred to as the "top-down" strategy of protein analysis. In the second, proteins are enzymatically digested into smaller peptides using proteases such as trypsin or pepsin, either in solution or in gel after electrophoretic separation. Other proteolytic agents are also used. The collection of peptide products is then introduced to the mass analyzer.
When the characteristic pattern of peptides is used for the identification of the protein, the method is called peptide mass fingerprinting (PMF); if the identification is performed using sequence data determined in tandem MS analysis, it is called de novo sequencing. These procedures of protein analysis are also referred to as the "bottom-up" approach.

As a standard method for analysis, mass spectrometers have reached other planets and moons. Two were taken to Mars by the Viking program. In early 2005 the Cassini-Huygens mission delivered a specialized GC-MS instrument aboard the Huygens probe through the atmosphere of Titan, the largest moon of the planet Saturn. This instrument analyzed atmospheric samples along its descent trajectory and was able to vaporize and analyze samples of Titan's frozen, hydrocarbon-covered surface once the probe had landed. These measurements compare the abundance of each isotope to Earth's natural abundances. Also on board the Cassini-Huygens spacecraft was an ion and neutral mass spectrometer, which took measurements of Titan's atmospheric composition as well as the composition of Enceladus' plumes. A Thermal and Evolved Gas Analyzer mass spectrometer was carried by the Mars Phoenix Lander launched in 2007. Mass spectrometers are also widely used in space missions to measure the composition of plasmas. For example, the Cassini spacecraft carries the Cassini Plasma Spectrometer (CAPS), which measures the mass of ions in Saturn's magnetosphere.

Respired gas monitor

Mass spectrometers were used in hospitals for respiratory gas analysis beginning around 1975 through the end of the century. Some are probably still in use, but none are currently being manufactured.
Found mostly in the operating room, they were part of a complex system in which respired gas samples from patients undergoing anesthesia were drawn into the instrument through a valve mechanism designed to sequentially connect up to 32 rooms to the mass spectrometer. A computer directed all operations of the system, and the data collected from the mass spectrometer was delivered to the individual rooms for the anesthesiologist to use. The uniqueness of this magnetic sector mass spectrometer lay in its plane of detectors, each purposely positioned to collect one of the ion species expected in the samples, which allowed the instrument to simultaneously report all of the gases respired by the patient. Although the mass range was limited to slightly over 120 u, fragmentation of some of the heavier molecules negated the need for a higher detection limit.
Structural collapse during firefighting can be expected to increase. Three factors are the age of buildings, abandonment, and lightweight construction materials, which will increase the number of burning-building collapses. A building, like a person, has a life span of seventy-five or a hundred years. Over the past two decades, abandonment of buildings in the Northeast and Midwest has increased the collapse danger to firefighters. Very few people other than firefighters are killed by burning-building collapses; only firefighters are close to a burning building when it has been weakened by flames to the point of collapse danger. When a burning-building collapse kills or seriously injures a firefighter, a post-fire investigation and analysis should be conducted. - A curved masonry structure used as a support over an open space. - The removal or destruction of any part of an arch will cause the entire arch to collapse. Three basic methods of construction - 1. Balloon - 2. Brace Frame - 3. Platform - 1. Exterior walls have studs extending continuously from the structure's foundation sill to the top near the attic. - 2. Concealed spaces between these studs can spread fire, smoke, and heat from the cellar area or the intermediate floors to the attic space. - 3. If a non-bearing wall collapses during a fire, the continuous studs will cause the wall to fall straight outward, in one section, at a 90-degree angle. - 4. If the bearing wall collapses, it can cause a second collapse of the floors it supports. A horizontal structural member, subject to compression, tension, and shear, supported by one of three methods. Cantilever Beam Support - A beam supported or anchored at only one end, which is considered a collapse hazard during fire exposure. - Examples: an ornamental stone cornice, a marquee, a canopy, a fire escape, and an advertising sign attached perpendicularly to a wall.
- It has the least amount of structural stability during a fire. Continuous Beam Support - A beam supported at both ends and at the center. - During a fire, it has the greatest structural stability. Simple Support Beam - A beam supported at both ends. - If the deflection at the center of such a beam becomes excessive, a collapse may occur. - A simply supported beam is more stable under fire conditions than a cantilever beam but less stable than a continuous supported beam. Brace Frame Construction - Sometimes called "post and girt" construction. Vertical timbers called posts reinforce each of the four corners of the structure, and horizontal timbers called girts reinforce each floor level. - Posts and girts are connected by fastenings called mortise and tenon joints. - During a fire, the walls often fail in an inward/outward collapse. The walls break apart, with the top collapsing inward on top of the pancaked floors and the bottom part collapsing outward onto the street. - A wall reinforcement or brace built on the outside of a structure, sometimes called a "wall column". - On a masonry wall, a buttress is a column of bricks built into the wall. - When separated from the wall and connected by an arch at the top, it is called a flying buttress. - The presence of a buttress on an exterior wall can indicate the point where roof trusses or girders are supported by a bearing wall. - A buttress on the inside of a wall is called a pilaster. - The failure of any portion of a structure during a fire. - A section of falling plaster ceiling, a broken fire escape step, a falling coping stone, and the collapse of several tons of brick wall are all structural failures and should be classified as structural collapse. Curtain Fall Wall Collapse - Occurs when an exterior masonry wall drops like a falling curtain cut loose at the top. - The collapse of a brick veneer, brick cavity, or masonry-backed stone wall often occurs in a curtain fall manner.
- The impact of an aerial platform master stream striking a veneer wall at close range can cause a curtain fall collapse of bricks. - The collapse of an exterior wall that breaks apart horizontally. The top collapses inward, back on top of the structure; the bottom collapses outward onto the street. - Wood brace frame constructed buildings collapse in this manner, and a timber truss roof collapse can cause a secondary collapse of a front wall in this manner. Lean Over Collapse - A type of wood frame building collapse indicated by the burning structure slowly starting to tilt or lean over to one side. Lean-To Floor Collapse - A floor collapse in which one end of the floor beams remains partially supported by the bearing wall, and the other end of the floor beams collapses onto the floor below, or collapses but remains unsupported. - A lean-to collapse can be classified as supported or unsupported, depending upon the position of the collapsed beam ends. Ninety Degree Angle Wall Collapse - A type of burning building wall collapse. The wall falls straight out as a monolithic piece at a 90-degree angle, similar to a falling tree. - The top of the falling wall strikes the ground at a distance from the base of the wall that is equal to the height of the fallen section. - Bricks or steel lintels may bounce or roll out beyond this distance. Pancake Floor Collapse - The collapse of one floor section down upon the floor below in a flat, pancake-like configuration. - When floor beams pull loose or collapse at both ends, a pancake collapse occurs. The collapse of portions of a burning taller structure onto a smaller structure, causing the collapse of the smaller building. Tent Floor Collapse - A floor collapse in the shape of a tent. - When a floor collapses and an interior partition or wall holds up the center of the fallen floor, a tent floor collapse occurs. V Shape Floor Collapse - The collapse of a floor at the center of the floor beams.
- The broken center of the floor section collapses down upon the floor below, and both ends of the floor section remain partially supported or rest up against the outer bearing walls. A vertical structural member subject to compressive forces. Columns and bearing walls Girders and beams - The top masonry tile or stone of a parapet wall, designed to carry off rainwater. Sometimes called a "capstone", it weighs between five and fifty pounds. - A coping stone can be dislodged and fall from a parapet under the impact of a high-pressure master stream or when struck by a retracting aerial ladder or aerial platform. - A bracket or extension of masonry that projects from a masonry wall. - It can be a decorative ornament on the top of a parapet front wall, or it can be used on the inside of a brick wall as a support for a roof beam end. Corbel Ledge or Corbel Shelf - Used on the inside of a masonry wall to support a beam. - Under the weight of a firefighter, a roof beam end that is resting on a corbel ledge can rotate off its support if the center of the beam has been burned away. A horizontal surface covering supported by a floor or a roof beam. A bend, twist, or curve of a structural element under a load. The front or face of a building. Fire Cut Beam - A gravity-supported beam end designed to release itself from the masonry wall during a collapse. The measure of maximum heat release when all combustible material in a given area is burned. The cause of a motion, a change in motion, or a stoppage of motion. - Dead Load - Live Load - Wind Load - Impact Load - Also known as load stress: - Compressive stress - Tension stress - Shear stress A structural member that supports a floor or roof beam. Primary Structural Members - Bearing Walls A metal fastener in the form of a flat plate used to connect structural members. - A support used to reinforce an opening in the floor of a wood frame, ordinary, or heavy timber building.
- Placed between two trimmer beams, it supports the shorter cut-off beams called tail beams. Hierarchy of Building Elements - Horizontal and vertical structural elements of a building arranged in a collapse hierarchy. - Hierarchy of Collapse - Structural Framing Seriousness - Decks Less - Beams/Floors and roofs | - Girders | - Columns v - Bearing Walls More - A piece of lumber used as a floor or roof beam. - The terms joist, beam, and rafter are used interchangeably. - A joist supports a roof or floor deck and is often supported by a girder. - One KIP equals a thousand pounds. - KIP is used to simplify the figures. - K.S.I. = KIPS per square inch. - A horizontal piece of timber, stone, or steel placed over an opening in a wall. - A load-bearing structural element that supports and redistributes the load above the opening. Forces acting upon a structure: - Axial load: passes through the center of a structural support and is the most efficient manner by which a load can be transmitted through a structural support such as a column or a bearing wall. - Concentrated load: a load applied at one point or within a limited area of a structure. - Dead load: a static or fixed load created by the structure itself and all permanent equipment within. - Eccentric load: a load transmitted off center or unevenly through a structural member. - Impact load: a load applied to a structure suddenly, such as a shock wave or vibrating load. - Lateral load: any type of load applied to an upright structure from a direction parallel to the ground. - Live load: a transient or movable load, such as a building's contents, the occupants, the weight of firefighters, the weight of fire equipment, and the water discharged from hose streams. - Static load: a load that remains constant, applied slowly. - Torsional load: a load that creates a twisting stress on a structural member. - Wind load: a lateral load imposed on a structure by wind. Mortise - A structural connection often used in braced wood frame construction; it is a hole cut into a timber that receives a tenon. Open Web Steel Bar Joists - A lightweight steel truss used as a floor or roof beam.
It is made from a steel bar, bent at 90-degree angles, welded between angle irons at the top and bottom bar bends. - Used for floor and roof beams in non-combustible buildings. - A masonry column bonded to and built as an integral part of the inside of a masonry wall. - It can carry the load of a girder or timber, or it can be designed to provide lateral support to a wall. Platform Wood Frame Construction - A building of this construction has one complete level of two-by-four-inch wood enclosing walls raised and nailed together; the floor beams and deck for the next level are constructed on top of these walls. The next level of two-by-four-inch wood enclosing walls is then constructed on top of the first completed level. Primary Structural Member - A structure that supports another structural member in the same building, such as a bearing wall, a column, or a girder. Restrained Beam End - A welded, nailed, bolted, or cemented end of a floor or roof beam. - A horizontal timber that frames the highest point of a peaked roof. Roof rafters are fastened to the ridgepole. - The sheltering structure of a building that protects the interior spaces from natural elements. - The quotient of the load that will cause a structure to collapse divided by the load the structure is designed to support. - A force exerted upon a structural member that strains or deforms its shape. - Stress and load are often used interchangeably. A force pressing or squeezing a structure together. A stress causing a structure to collapse when contacting parts or layers of the structure slide past one another. Stress placed on a structural member by the pull of forces causing extension. A ceiling built several inches or feet below the supporting roof or floor beams above, sometimes called a "hanging" or "dropped" ceiling. The concealed space above the ceiling. A projecting, reduced portion of a timber designed to be inserted into the mortise hole of another timber.
- A polished floor covering made of small marble chips set in several inches of cement. - It adds weight to floor beams, conceals the heat of a serious fire below, and, because it is watertight, allows water to accumulate and build up to dangerous proportions. A wood beam constructed around the perimeter of a floor opening. A trimmer beam supports the header beam, which in turn supports the tail beams. - A braced arrangement of steel or wood framework made with triangular connecting members. - Trusses suffer early collapse during a fire because their exposed surface area is greater than the exposed surface area of a solid beam spanning the same distance. - Truss roof beams are spaced farther apart than solid beams, creating large areas of unsupported roof deck. Three Common Types of Trusses - Parallel Chord Types of Walls - Area Wall - Bearing Wall - Fire Wall - Free-Standing Wall - Parapet Wall - Party Wall - Spandrel Wall - Veneer Wall A free-standing masonry wall surrounding or partly surrounding an area. - An interior or exterior wall that supports a load in addition to its own weight. - Most often supports the floors and roof of a building. A non-bearing, self-supporting wall designed to prevent the passage of fire from one side to another. Free-Standing Wall - A wall exposed to the elements on both sides and the top, such as a parapet wall, a property-enclosing wall, an area wall, and a newly constructed exterior wall left standing without roof beams or floors. - The continuation of a party wall, an exterior wall, or a fire wall above the roof line. - Parapet walls are considered free-standing walls. A bearing wall that supports the floors and roofs of two buildings. That portion of an exterior wall between the top of one window opening and the bottom of another. A finished or facing brick or stone wall used on the outside of a building. Type of Wall (less to more stable) - Free-Standing Wall - Non-Bearing Wall - Bearing Wall Horizontal Collapse Zone - The horizontal measurement of the wall.
When establishing a collapse zone, estimate this measurement in addition to the outward area that the wall may cover if it falls. Vertical Collapse Zone - The expected ground area that a falling wall will cover when it collapses. - Generally the distance away from the wall equal to the height of the wall. - Heavy stones will fall farther than this distance. Five construction types to consider for collapse or fire resistance of buildings - 1. Fire-resistive construction - 2. Non-combustible/limited-combustible construction - 3. Ordinary brick-and-joist construction - 4. Heavy timber construction - 5. Wood-frame construction Major problem in a fire-resistive building, and why - The central air conditioning system. - In a structure with a central air conditioning system, every fire barrier in the building is penetrated. - The entire building may be interconnected by a network of holes, concealed spaces, and voids through which flame and smoke can spread. Problem associated with a non-combustible/limited-combustible building - The flat, steel roof deck covering that can ignite during a fire. Major fire problem in an ordinary constructed building - Fire and smoke spread throughout concealed spaces. - Unlike concealed spaces in fire-resistive or non-combustible buildings, those in ordinary constructed buildings contain large amounts of combustible material in the form of wood lath, wood furring strips, cross bridging, wood joists, and wood two-by-four wall studding. - In an ordinary constructed building, the most serious concealed space is the common cockloft.
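The vertical collapse zone rule in the cards above (a falling wall is expected to cover a ground distance equal to its height, with heavy debris traveling farther) can be expressed as a trivial calculation. The 1.5x safety margin here is an illustrative assumption, not a fireground standard:

```python
# Rough sketch of the collapse-zone rule of thumb: keep at least the
# wall's height away, plus a margin for bricks and lintels that can
# bounce or roll beyond that distance. The margin factor is assumed.
def collapse_zone(wall_height_ft, margin=1.5):
    """Minimum distance (ft) to keep from a wall of the given height."""
    return wall_height_ft * margin

print(collapse_zone(40))  # 40 ft wall -> 60.0 ft with a 1.5x margin
```

With `margin=1.0` the function reduces to the bare rule stated in the cards: distance equal to the wall height.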
For a building to qualify as heavy timber construction, a wood column cannot be less than eight inches thick in any dimension and a wood girder cannot be less than six inches thick. The major fire problem of this brick-enclosed timber structure is the large wooden interior timber framework. Wood frame construction is the only one of the five types with combustible exterior walls. Seven sides of the fire area in a wood frame constructed building - 1. Above the fire - 2. Below the fire - 3. The four sides of the fire - 4. The combustible exterior walls 2 basic types of fire-resistive construction - Reinforced concrete buildings - Structural steel buildings Of the five construction types, fire-resistive buildings are the most stable and have the best collapse record. A fire-resistive building does suffer structural failure during serious fires, and its collapse danger lies in the concrete: - In reinforced concrete buildings, heated concrete ceilings collapse on top of firefighters - In steel skeleton buildings, concrete floors explode upward The cause: the rapid expansion of heated moisture inside the concrete. Nation's most widely used construction type - Non-combustible/limited-combustible construction Three basic types of non-combustible construction - 1. Metal frame structure covered by metal exterior walls - 2. Metal frame structure enclosed by concrete block - 3. Non-bearing walls; the concrete block bearing walls supporting a metal roof structure The collapse danger to a firefighter from a non-combustible building is roof cave-in; the collapsing material is the unprotected steel open web bar joist. The main advantage of the lightweight steel roof support is its non-combustibility: the bar joist does not add fuel to the fire. A building with exterior bearing walls of masonry and wood floors and roof is an ordinary constructed building, called "brick and joist". The structural hazard of an ordinary constructed building is the parapet wall, the portion of the masonry wall that extends above the roof level.
The hazards of a heavy timber building - Falling masonry walls, which crash to the ground and spray chunks of bricks and mortar along the street or pavement. The structural hazard of a wood frame building is the combustible bearing wall constructed of two-by-four-inch wall studs. 3 ways a masonry exterior building wall may collapse - 1. 90-degree angle - 2. Curtain fall - 3. Inward/outward collapse Wood joist floor systems can collapse in 3 different ways when attacked by fire - 1. Only the wood deck burns through - 2. Several floor joists may fail, causing a localized failure of a section of a floor - 3. A large section or the entire floor level fails, sometimes triggering the subsequent collapse of floors below or enclosing walls One room in a residential building frequently subject to floor joist collapse is the bathroom. Several reasons why floor joists collapse in bathrooms - Bathroom fixtures - a dead load or static fixed load - Finished tile floor - Fire destruction and rotting 3 most common types of sloping roofs are - 1. Gable roof - 2. Hip roof - 3. Gambrel roof A gable roof has sides sloping up from two walls. A hip roof has sides sloping up from four walls. A gambrel roof has two slopes on each of two sides, with the lower slope steeper than the upper. Primary structural members of a flat roof - 2: two bearing walls Primary structural members of a gable roof - 3: two bearing walls and one ridge rafter Primary structural members of a hip roof - 7: two bearing walls, one ridge rafter, and four hip rafters 3 most common types of wood construction used for sloping roofs - 1. Timber truss - 2. Plank and beam - 3. Rafter construction What is a timber truss - Timber is wooden construction larger than two by four inches but not large enough to be classified as heavy timber or mill construction.
- A truss is a structural composition of large wooden members joined together in a group of triangles and arranged in a single plane so that loads applied at the points of intersecting members will cause only direct stress. Most common type of timber truss 3 common types of suspended ceilings - 1. Wood grid system with a permanently affixed ceiling - 2. Metal grid system with a permanently affixed ceiling - 3. Lightweight metal grid system with a removable panel ceiling Least hazardous type of suspended ceiling to firefighters - Lightweight metal grid system with removable ceiling panels Most hazardous type of suspended ceiling to firefighters - Wood grid system with a permanently affixed ceiling, or metal grid system with a permanently affixed ceiling 3 basic stair types - 1. Straight run - 2. U-return stair - 3. L-shaped stair 3 ways in which a wood frame building can collapse - 1. 90-degree angle - 2. Lean over and collapse - 3. All four wood enclosing walls may crack apart and fall in an inward/outward collapse
2. Decolonization
Decolonization began after WWII, when the European nations could no longer maintain control of their colonial empires. Decolonization began on Aug. 15, 1947, when India declared its independence from the British empire. This created a domino effect throughout the empire.

3. Palestine
1947: Britain announced it was withdrawing from Palestine, leaving its future in the hands of the UN. In response, the UN partitioned Palestine into Arab and Jewish homelands. May 14, 1948: Israel declared independence and was immediately attacked by the Arab nations. Israel won the war with American aid. (1st Arab-Israeli war)

4. Egypt
Although Egypt had been independent since 1922, Britain had economically maintained a degree of influence. Gamal Abdel Nasser (Egyptian president after WWII) wanted this to end, believing that Britain's significant influence was detrimental to the future development of Egypt.

5. Suez Crisis
1956: Egypt announced the nationalization of the Suez Canal. In response, Britain, France, and Israel planned a surprise attack on Egypt. The USSR announced it would back Egypt, and the US ordered the Western powers to withdraw. This event illustrated the fact that the western European powers had little ability to take action w/o American approval.

6. Sub-Saharan Africa
1957: Ghana (British) declared independence and was set free. Shortly thereafter, Nigeria, Sierra Leone, Uganda, and Kenya also declared independence and were freed from the British empire. The British let these places go without much of a fight, because there were few British settlers in any of the nations.

7. Rhodesia
Rhodesia had many British settlers. 1965: White British settlers formed their own white-supremacist government and declared independence from Britain. 1980: After much warfare, the Africans finally won control of their nation. It was renamed Zimbabwe.
8. The Dutch East Indies
France and the Netherlands wanted to maintain control of their colonies, as a matter of national honor, after WWII. The Dutch fought a costly and ultimately unwinnable war in the Dutch East Indies, finally losing in 1949. The Dutch East Indies became Indonesia.

9. French Indo-China
The Viet Minh (a nationalist group founded by Ho Chi Minh) was formed to fight for Vietnamese independence from the Japanese during WWII. After the war, the Viet Minh fought against the French, when the French attempted to restore their colonial authority. This was a bitter and costly war for the French, which they eventually lost. The US was funding the French war effort.

10. Vietnam
After the French were defeated in the battle of Dien Bien Phu, they agreed to divide Vietnam into two states. North Vietnam was a communist-led nation headed by Ho Chi Minh. South Vietnam was a "democratic" nation headed by President Diem and dominated by the United States (an anti-communist military dictatorship). 1975: The two nations were united following the Vietnam War.

11. Algeria
Algeria had been a French possession since 1830 and was the home of over one million native French persons. France almost erupted into civil war over the Algerian question (to let it go or to fight to hold on to it). 1962: due to the skillful work of Charles de Gaulle, Algeria received its independence and French stability was established.

12. The Cold War
The Cold War was a diplomatic crisis which occurred between the United States (and its Western bloc) and the USSR (and its Eastern bloc). The Cold War resulted from a variety of disagreements and problems which surfaced after the end of WWII.

13. The "Iron Curtain"
1946: Churchill called the Soviet domination of E. Europe the "Iron Curtain." Stalin held a series of unfair elections and coups to install communist puppets in most of the E.
European nations. Poland: 1947. Czechoslovakia: 1948. Hungary, Bulgaria, Romania, and Yugoslavia:

14. The West Takes a Stand
The USSR was supporting communist rebels in Greece & Turkey. Truman asked Congress for money to aid the governments to withstand the rebels' assaults. This became the Truman Doctrine, stating that the US would provide aid to any free nation fighting off communism. The Truman Doctrine became the basis of the US policy of "containment."

15. Military Alliances
The lines between the Western Bloc and the Eastern Bloc were formally drawn with the creation of two alliances. 1949: NATO (North Atlantic Treaty Organization), designed to protect W. Europe from Communist aggression. 1955: Warsaw Pact, designed to protect E. Europe from capitalist influence.

16. The Marshall Plan
The US provided $9.4 billion in economic assistance to Western Europe to help Europe rebuild after WWII. This aid was provided, in part, so that western European nations could resist the pull of communism.

17. The Division of Germany
The Big Three agreed at Potsdam on the division of Germany. Britain, France, the US, and the USSR each controlled one zone of occupation. The western powers wanted to see the economic and political restructuring of Germany, while the USSR wanted to maintain Germany as a communist buffer state.

18. Crisis in Germany
Spring 1948: The western powers introduced a new currency into their zones and requested the reunification of the zones. Stalin refused to allow a democratic Germany and withheld his zone from the German constitutional convention. The western powers decided to proceed without him and continued to help Germany construct a new constitution.

19. The Berlin Blockade
Stalin responded to western actions by blockading the city of West Berlin. The allies responded to the blockade with a massive airlift which supplied the city for 321 days. Stalin was forced to withdraw his blockade in a major defeat for the Soviets.
20. Two Germanies
In response to the Berlin blockade, the western powers joined their zones into a free nation: the Federal Republic of Germany. Stalin later made his zone into the German Democratic Republic, another Soviet puppet state.

21. West Germany
By the 1950s, West Germany had evolved into a stable two-party democracy [Christian Democratic Union (CDU) and Social Democratic Party (SPD)]. Konrad Adenauer (CDU) (Chancellor: ) led W. Germany towards closer ties with the US and the other W. European nations.

22. West Germany, continued
After the Adenauer era, Willy Brandt (SPD) took over and began a process called Ostpolitik, in which he tried to open diplomatic contacts with Eastern Europe. Brandt formally recognized E. Germany and accepted the post-war settlements in the east, thus easing tensions with the USSR, Poland and Czechoslovakia.

23. Post-war Italy
Following WWII, Italy adopted a new constitution which brought the Italian monarchy to an end and created a democratic republic (which still exists today). Two major parties dominated the new government: the communists (because they had been anti-fascist during the war) and the Christian Democratic Party. Italy remained in the W. European bloc.

24. Post-war France
The 4th French Republic was formed after WWII, but it was plagued by frequent changes in government ministries and by factionalism. France had many small parties, so they all had to rely on multi-party coalitions to implement their policies. Women in France voted in parliamentary elections for the first time in 1946.

25. Fifth French Republic
Using the Algerian crisis as a pretext, de Gaulle created the 5th French Republic in 1958, giving the French President much more power. De Gaulle used his power to build an independent France and to try to make France somewhat independent of America.

26. Economic Recovery in Western Europe
Marshall Plan aid was used to provide the financial underpinnings for the post-war economic recovery and expansion of W.
Europe. This growth lasted until the economic downturn of the early 1970s.

27. Economic Recovery
For approximately a decade after the war, workers' wages failed to keep up with economic growth. To offset the potential social problems this could have caused, most W. European governments provided "cradle-to-grave" social welfare protection programs for their citizens.

28. Post-war Great Britain
The British Labour Party tried to direct national policy toward solving many problems, such as inadequate housing for workers, poor safety standards and wages in industries, and lack of security in employment. The Labour Party concentrated on many issues that had been big problems since the industrial revolution.

29. Britain, continued
To avoid social unrest, the government enacted a variety of reforms. The British government nationalized the Bank of England, the railways, the airlines, and the coal & steel industries. The government also established old-age pensions, unemployment insurance, allowances for child-rearing, and the National Health Service.

30. Reforms in Europe
France and West Germany also faced many of the same social and economic problems that were found in Britain. The French communist party was somewhat powerful after WWII and forced many socialist reforms. West Germany also adopted many similar reforms to bring recovery and stability after the war.

31. The Cost of Reform
The economic cost of these social & economic reforms was long debated. Because the 1990s process of globalization often had a negative effect on the nations of W. Europe (with their high wages and very comprehensive social welfare programs), they often found it much harder to compete in the global marketplace. Under Margaret Thatcher, there was a significant rollback of the Br. welfare state.
32. Economic Trends in Europe
Two major economic trends have been important in Western Europe in the post-war period: economic integration and the European Union. France has taken a lead in these movements, partly because they believe that tying Germany to the rest of Europe is necessary for French national security.

33. Implementation of Economic Reforms
1951: Formation of the European Coal & Steel Community. Goal: to coordinate the production of coal & steel and to prevent some of the economic competition that had served as a cause for previous 20th-century wars.

34. Economic Reforms, cont.
1958: Formation of the European Common Market (now the European Economic Community, EEC). The EEC was established to eliminate customs duties among the participating nations and to establish a common tariff on imports from the rest of the world. The EEC is still in existence today.

35. More Reforms
1962: Creation of a European Parliament. Goal: to implement common social and economic programs in the various member states. Its duties were nearly non-existent until the passage of the Maastricht treaty in 1991.

36. European Union
1991: Member nations signed the Maastricht treaty in Maastricht, Netherlands. Goal: to establish a common European currency and a central banking structure by 1999. The Euro is currently in use in member nations.

37. The Eastern European Satellites
Following WWII, the USSR set as a priority the establishment of a system of satellite states in E. Europe. The USSR created the Warsaw Pact in 1955 to establish military control of its satellites and COMECON to link and control the E. European economies. Economic conditions remained poor in most E. European nations, due to a lack of capital for economic development.
38. East Germany
1953: East German workers demonstrated in the streets to protest the government's plan to increase productivity (at the cost of the workers' benefits). This economic protest soon turned into a call for greater political freedom and directly contradicted Soviet policies. Soviet-supported E. German troops put down the revolt, and economic life remained grim for E. Germans.

39. The Berlin Wall
Political and economic conditions in E. Germany and many other Eastern bloc nations remained so poor that millions were fleeing through West Berlin to freedom in western nations. The Berlin Wall was built in 1961 to stop the flow of refugees to the west. This was seen and publicized as a barbaric move and became a visible symbol of the cold war conflicts.

40. Poland
1956: Economic and political conditions similar to those found in E. Germany set off a series of strikes in Poland. The Polish government, working with the USSR, sent its troops into the streets to stop the strikers. This protest brought a slight raise in workers' wages and was viewed as a success by the people, despite the bloodshed.

41. Hungary
1956: Inspired by the Polish revolt of 1956, Imre Nagy of Hungary encouraged a variety of reforms. Reforms included the creation of a multi-party state with Nagy as premier, a call for respect of human rights, the ending of political ties with the USSR, the release of many political prisoners, the creation of Hungary as a neutral nation, and the removal of Hungary from the Warsaw Pact.

42. Hungary, continued
In response to Nagy's reforms, the Soviets decided to make an example of Hungary to prevent it from threatening their control of their whole system of satellite states. The Soviets invaded Hungary, killing thousands and setting up a police state. Reprisals were brutal, and more than 200,000 refugees fled from Hungary. Nagy was hanged.
43. Destalinization
Following a power struggle after Stalin's death in 1953, Nikita Khrushchev took control of the Soviet government. 1956: At the Communist Party's 20th National Congress, Khrushchev announced his program of destalinization, which attacked the "crimes" of Stalin and condemned him, claiming that Stalin had deviated from the intentions of Marxism-Leninism.

44. American-Soviet Tensions
Despite a visit to the US in 1959, tension was high between the superpowers.
- 1957: Sputnik
- 1960: U-2 incident
- 1961: Bay of Pigs invasion
- 1961: Berlin Wall
- 1962: Cuban Missile Crisis

45. Détente
Since the Cuban Missile Crisis had brought the superpowers so close to war, both sides decided to embrace a degree of détente, or peaceful coexistence: the hotline, the Nuclear Atmospheric Test Ban Treaty, and missile negotiations. Détente was seen as a sign of weakness in the USSR, and Khrushchev was ousted by 1964.

46. The Brezhnev Years
Brezhnev replaced Khrushchev in 1964 and ruled the USSR until his death in 1982. Although he did not reinstate the terror of the Stalin era, he did seek to once again strengthen the role of the Communist party bureaucracy and the KGB. Brezhnev also clamped down on reform movements in the E. European satellite states and called for a "new cold war."

47. Eastern Europe
1968: Prague Spring: led by Alexander Dubcek, this reform movement in Czechoslovakia attempted to bring about "socialism with a human face," while still remaining in the Soviet Bloc. Brezhnev saw this as a threat to the entire Warsaw Pact and initiated the Brezhnev Doctrine [the USSR would support with all means necessary (including military) any E. European communist state threatened by internal strife or external invasion]. This was used as justification for the invasion of Czechoslovakia, ending reform.
48. Poland
1978: Karol Wojtyla, a Polish Catholic cardinal, was elected Pope John Paul II. 1980: A massive strike occurred at the Lenin shipyard in Gdansk, where workers demanded the right to form an independent trade union. 1980: Solidarity formed by Lech Walesa. 1980+: Solidarity survived the declaration of martial law and being outlawed by going underground, in part with the aid of the Catholic Church.

49. Poland, continued
Solidarity operated during these years, attempting to get better pay and political rights for workers in Poland. Solidarity leaders were periodically harassed and arrested by communist authorities. By 1989: The Polish economy was in shambles, and this forced the government to negotiate with Lech Walesa and Solidarity.

50. Poland
1989: Polish government negotiations with Walesa and Solidarity resulted in the promise of multiparty elections. 1989: Multiparty elections resulted in the defeat of the Communist candidates, and Walesa was elected to the Presidency in December 1990. These elections ushered in an era of reform that continues to this day.

51. Revolution in E. Europe
Reform policies of Mikhail Gorbachev prevented the USSR from interfering in E. European internal affairs. This led to a series of revolutions in 1989 in Hungary, Czechoslovakia, Bulgaria, Albania, East Germany, and Romania. These nations started on the road to democracy and market economies and faced many political and economic struggles in the 1990s.

52. East Germany
A flood of refugees traveled from E. Germany to Hungary, where Hungary allowed their free passage to W. Germany. The fall of the Berlin Wall in November 1989 marked the end of the Communist regime that had oppressed many since 1945. 1990: Reunification of East and West Germany.

53. Romania
While the majority of revolutions in E.
Europe were relatively peaceful, the one in Romania was not. The violent dictator Nicolae Ceausescu refused to give in to the will of the people and used his own private police force to desperately cling to power. He and his equally repugnant wife, Elena, were executed on Christmas Day, 1989.

54. The USSR
Gorbachev's policies of glasnost and perestroika combined with the political transformation of the Soviet satellites to create a desire for change in the Soviet population. Disasters such as the Soviet invasion of Afghanistan and the Chernobyl nuclear accident revealed the deplorable state of affairs within the nation.

55. Problems in the USSR
Gorbachev saw the need for change but wanted the Communist party to lead and control the changes. His economic changes were very slow, and reformers, such as Boris Yeltsin, wanted him to speed up the process. 1990: The Soviet government was forced to allow the political participation of non-Communist parties.

56. More Problems
As the political and economic structure of the USSR began to collapse, nationalist movements throughout the USSR also popped up, beginning with the declaration of independence by Lithuania. Other republics, such as Estonia, Latvia, Ukraine, Belarus, Georgia, Kazakhstan, and Uzbekistan soon followed. By 1992, all 15 republics had broken away.
57. Revolution in Russia
December 1990: Gorbachev appointed a few hard-liners to government positions, hoping to stop the tide of rebellion. Hard-liners were very concerned about the breakaway republics and wanted to stop the secessionist movement. This move backfired and started a rivalry between Gorbachev and Yeltsin (a reformer and Chairman of the Russian Parliament).

58. The coup d'état
August 1991: While Gorbachev was on vacation, the hard-line communists staged a coup and placed him under house arrest in his summer home in the Crimea. This was done because the hard-liners feared that Gorbachev's policies were threatening the existence of the Communist party. Yeltsin bravely stood atop a tank outside the parliament building and led the resistance, thus becoming the popular hero of the revolution.

59. The Coup Fails
As a result of Yeltsin's leadership and the popular support for the reform movement, the coup failed, and the hard-liners were discredited. August to December 1991: More of the Soviet republics continued to break away, further weakening the USSR. December 1991: The USSR was dissolved and Gorbachev resigned.

60. Problems in Russia
The Commonwealth of Independent States was formed in 1992, but was ineffective and short-lived because breakaway republics feared that Russia had too much power in the confederacy. The new Russian Republic faced serious political, social, and economic challenges, many of which still continue today. The mob became very influential in Russia and many breakaway republics as well.

61. Yugoslavia
Following WWII, the nation of Yugoslavia was formed under the control of Josip Tito. Under his leadership, the nation was an independent communist country. He was able to control most of the ethnic and nationalistic rivalries within the nation. After his death, an ineffective government was formed that was unable to deal with the rivalries.
62. Yugoslavia, continued
By the early 1990s, ethnic problems got so bad that Slovenia and Croatia seceded from Yugoslavia. The Serbian government of Yugoslavia let Slovenia go peacefully because it had an extremely small Serbian population. The secession of Croatia caused the Serbs more concern because of the larger Serbian population that lived there. This led to a war that began in 1991.

63. The Bosnian Crisis
By 1992, the Bosnian Muslims and Croats feared the Serbs and seceded from Yugoslavia. This was an outrage to the Serbian/Yugoslavian government, since 1/3 of the Bosnian population were Serbs. War broke out between the Bosnian Serbs and the Bosnian Muslims and Croats. The Bosnian Serbs were supported by the Yugoslavian government.

64. The Crisis Continues
The Bosnian Serbs did not want to be part of a Bosnian government in which they would not be the majority ethnic group. With the help of Yugoslavian President Slobodan Milosevic, they carried out the policy of "ethnic cleansing." This involved the forced removal of non-Serb populations from Bosnia and included executions and concentration camps. Serbs bombed Red Cross relief caravans and shelled Sarajevo, particularly on market days.

65. The Bosnian Settlement
Due to the atrocities that were being committed by the Serbs, the US and other NATO nations got involved to stop the killing. This led to the US-brokered Dayton Accords of 1995, which ushered in an era of precarious peace in Bosnia. The US and UN sent peacekeepers to protect the Bosnian Muslims. War crimes trials were held to convict those responsible for the ethnic cleansing.
66. Yugoslavia
Besides Slovenia, Croatia, and Bosnia, Macedonia also seceded from Yugoslavia. Yugoslavia now consists mainly of what was once the state of Serbia. Many people refer to Yugoslavia as "Serbia." 1999: Kosovo crisis: The Serbs, using a scorched-earth policy, decided to run the ethnic Albanians out of Kosovo. Many Kosovars fled to neighboring Albania and Macedonia, where they went to refugee camps. NATO activity & bombings ended this crisis.

67. Philosophy and Religion
Existentialism, Roman Catholicism, Protestantism

68. Existentialism
Theistic: Søren Kierkegaard, Martin Buber, Paul Tillich, Gabriel Marcel, Karl Jaspers. Atheistic: Jean-Paul Sartre, Simone de Beauvoir, Friedrich Nietzsche, Martin Heidegger, Albert Camus.

69. Key Themes
Freedom: We are condemned to be free. Responsibility: Because we have freedom in our fundamental projects and attitudes, we are responsible for the people we become.

70. Key Themes
Angst/Dread/Anguish/Anxiety: When we reflect on our freedom, we experience anxiety. Bad Faith: Those who refuse to take responsibility for themselves are living an inauthentic existence in bad faith; they are self-deluded.

71. The Keyest Theme: Existence Precedes Essence
What is meant here by saying that existence precedes essence? It means that, first of all, man exists, turns up, appears on the scene, and only afterwards defines himself.

72. Kierkegaard
Part of the revolt against reason, mid-19th c. The leap in the dark: the leap of faith. The truths of Christianity are not revealed in organized religion or in doctrine, but in the experiences of individuals facing crises in their lives.

73. Jean-Paul Sartre
Atheist. Human existence has no transcendent significance and is fundamentally absurd, but humans are free to make choices. In those choices, humans can give life meaning and purpose.

74. RCC
John XXIII (r. 1958-1963): a new era began with his papacy. Mater et Magistra reaffirmed the Church's commitment to economic and social reform.
It called for increased assistance to developing nations, and led to Vatican II.

75. Vatican II
A movement for renewal and aggiornamento. Reformed the Church's liturgy: vernacular mass instead of Latin, increased lay participation, more open expression. Condemned anti-Semitism. Ecumenical movement. Paul VI (r ): Humanae Vitae reaffirmed the Church's opposition to artificial birth control.

76. Protestantism
Karl Barth: Neo-orthodoxy. Rejected religious modernism; reaffirmed Reformation theology: biblical authority, the revelation of God in Jesus, human dependence on God.

77. Another Protestant thinker
Paul Tillich: God = ultimate truth, the "Ground of Being." Original sin, atonement, immortality: symbolic. Ecumenical.
- Venter definition, the abdomen or belly. Venter (ˈvɛntə) —n (John) Craig. born 1946, US biologist: founder of the Institute for Genomic Research (1992) whose work contributed greatly to the mapping of the human genome. Collins English Dictionary - Complete & Unabridged 10th Edition. — "Venter | Define Venter at ", - Definition of VENTER. 1 : a wife or mother that is a source of offspring. 2 [New Latin, from Latin] : a protuberant and often hollow anatomical structure: as a : the undersurface of the abdomen of an arthropod b : the swollen basal portion of an archegonium in which an egg develops. — "Venter - Definition and More from the Free Merriam-Webster", merriam- - Definition of venter in the Medical Dictionary. venter explanation. Information about venter in Free online English dictionary. What is venter? Meaning of venter medical term. What does venter mean? — "venter - definition of venter in the Medical dictionary - by", medical- - Facts and figures about Craig Venter, taken from Freebase, the world's database. — "Craig Venter facts - Freebase", - Craig Venter [b. Salt Lake City, Utah, October 14, 1946] Venter developed a method of deciphering genomes known as whole-genome shotgun sequencing. — "Craig Venter: Biography from ", - I'm heading off to the Foresight Conference and then a pilgrimage to the Venter Institute. (This photo by Ronnie Antik is from TED earlier this year.) Full disclosure: in all of my prior writing and blogging about Craig Venter (from TED, our. — "Craig Venter | Flickr - Photo Sharing!", - Steve Kroft profiles famous microbiologist J. Craig Venter, whose scientists have already mapped the human genome and created what he calls "the first synthetic species." 60 Minutes Tonight Preview for 11/21: Viktor Bout, J. Craig Venter and Mark. — "Craig Venter pictures, news and more - famous hot people", - Dr. Craig Venter on CBS TV's 60 Minutes this Sunday, Nov. 21.
"The microbiologist whose scientists have already mapped the human genome and created what he calls 'the first synthetic species' says the next breakthrough could be a flu vaccine that takes hours rather than months to produce. — "Craig Venter | Regator", - *Update: We've included the Science interview with Venter after the break. Craig Venter wants to program life the way we program computers, and today he. — "Venter Creates First Synthetic Self-Replicating Bacteria from", - Definition of venter from Webster's New World College Dictionary. Meaning of venter. Pronunciation of venter. Definition of the word venter. Origin of the word venter Law the womb: used in designating maternal parentage, as in children of the first venter, meaning "children of the first wife". — "venter - Definition of venter at ", - Craig Venter's formal education was at the University of California at San Diego, where he earned his doctorate in physiology and pharmacology in Venter entered academia at the State University of New York at Buffalo. — "Craig Venter", - Craig Venter has been praised and criticized for his creation of the first artificial organism. But what could his work mean for real businesses? — "How Venter's creation could change business - May. 27, 2010", - Research in structural, functional and comparative analysis of genomes and gene products. Features databases, gene indices, education, and software at Rockville and San Diego, USA. Multinational Research Team Led by J. Craig Venter Institute's Ewen Kirkness Sequence. — "JCVI: Home", - is dedicated to genealogy of Venter's and also offers personalized email address. — " | Welcome", - J. CRAIG VENTER is regarded as one of the leading scientists of the 21st century for his invaluable contributions in genomic research, most notably for the first Dr. Venter is also the key leader in the field of synthetic genomics. — "Edge: J.
Legal Service India - Right of Abortion v. Child in Mother's Womb

How can a mother be the bête noire of her own child, especially of her unborn child? Only a woman can give birth, and this distinct capacity places her in a position that is at once advantageous and disadvantageous. She has the right to motherhood, but the bone of contention here is whether she also has the right to undergo an abortion. The contest is between the woman, who claims a right over her own body, and the child, who has not yet been born.

Abortion means the deliberate ending of a pregnancy at an early stage. It is the subject of strong public debate, especially in the US. Those in favour of abortion, the pro-choice supporters, defend a woman's right to choose whether to have a baby. On the opposite side are the pro-life campaigners, who believe in the right to life of the unborn child and hold that abortion is wrong. US law permits abortion during the first stages of pregnancy. The US government has tried to change the laws on abortion, but such changes have been disputed in the federal courts, which have upheld a woman's right to abortion. In Roe v Wade (1973), the US Supreme Court decided that abortion is protected by the Constitution: a state must allow any woman, if she wishes, to have an abortion within the first three months of her pregnancy. The decision divided US society and caused a lot of discussion all over the country.
The issue remains a bone of contention there, as is evident from the activity still going on in various places in the US. As far as India is concerned, there are several statutes which deal with this point. We now examine Indian law to ascertain the stance from both sides, i.e. the mother and the unborn child.

Section 312 of the Indian Penal Code, 1860 makes causing miscarriage punishable. It says:

312. Causing miscarriage. - Whoever voluntarily causes a woman with child to miscarry, shall, if such miscarriage be not caused in good faith for the purpose of saving the life of the woman, be punished with imprisonment of either description for a term which may extend to three years, or with fine, or with both; and, if the woman be quick with child, shall be punished with imprisonment of either description for a term which may extend to seven years, and shall also be liable to fine.

Explanation. - A woman who causes herself to miscarry is within the meaning of this section.

Section 312 punishes the person who causes the miscarriage. The Explanation appended to the provision clarifies that a woman has no right to cause her own miscarriage. The word "miscarriage" is used synonymously with the word "abortion". Section 312 thus gives the right of motherhood to a woman and provides ample protection to that right, but simultaneously takes away her right of abortion; in other words, she has no absolute right over her own body. Nor is it only a question of the woman's right over her body: the right to life of the child in the womb also arises. There is a clash between the right to life of the unborn child and the woman's right over her body, i.e. the right of abortion. This in turn raises the question of when life begins - it could be immediately after the egg is fertilised; when the foetus gets a soul; when the foetus can live independently outside the mother; or when the mother delivers the baby. But according to Jeffrey M.
Drazen, Editor-in-Chief of The New England Journal of Medicine, when life begins is a philosophical question. As stated above, in the US abortion at the beginning of pregnancy is not punishable, but Indian law draws no such distinction except in the quantum of punishment: Section 312 prescribes up to three years' imprisonment, or fine, or both, for causing miscarriage to a woman "with child", and up to seven years' imprisonment together with liability to fine for causing miscarriage to a woman "quick with child". The words "woman with child" simply mean a pregnant woman: the moment a woman conceives and the gestation period begins, she is said to be with child. The term "quick with child" refers to a more advanced stage of pregnancy. Quickening is the perception by the mother that the foetus has moved, or that the embryo has taken a foetal form. But what of the right of the unborn child when the life of the woman is in peril because of the pregnancy? The law can be a little cruel, but not absolutely so: the section allows abortion in good faith for the purpose of saving the life of the woman. The right of abortion is further extended by the Medical Termination of Pregnancy Act, 1971, whose Statement of Objects and Reasons reads:

The provisions regarding the termination of pregnancy in the Indian Penal Code which were enacted about a century ago were drawn up in keeping with the then British law on the subject. Abortion was made a crime for which the mother as well as the abortionist could be punished except where it had to be induced in order to save the life of the mother. It has been stated that this very strict law has been observed in the breach in a very large number of cases all over the country. Furthermore, most of these mothers are married women, and are under no particular necessity to conceal their pregnancy.

2.
In recent years, when health services have expanded and hospitals are availed of to the fullest extent by all classes of society, doctors have often been confronted with gravely ill or dying pregnant women whose pregnant uterus had been tampered with, with a view to causing an abortion, and who consequently suffered very severely.

3. There is thus avoidable wastage of the mother's health, strength and, sometimes, life. The proposed measure, which seeks to liberalise certain existing provisions relating to termination of pregnancy, has been conceived (1) as a health measure, when there is danger to the life or risk to the physical or mental health of the woman; (2) on humanitarian grounds, such as when pregnancy arises from a sex crime like rape or intercourse with a lunatic woman, etc.; and (3) on eugenic grounds, where there is substantial risk that the child, if born, would suffer from deformities and diseases.

It is thus obvious that the Medical Termination of Pregnancy Act, 1971 is made in favour of the mother as well as of the unborn child. It puts forward the principle that death is better than suffering: the Act allows the killing of a child in the mother's womb where there is substantial risk that the child, if born, would suffer from deformities and diseases. The Act permits termination, on fulfilment of certain conditions at the initial stage of pregnancy, if continuance of the pregnancy would involve a risk to the life of the pregnant woman or of grave injury to her physical or mental health, or if there is a substantial risk that the child, if born, would suffer from such physical or mental abnormalities as to be seriously handicapped.

Section 3 of the Medical Termination of Pregnancy Act, 1971 reads:

3.
When pregnancies may be terminated by registered medical practitioners. - (1) Notwithstanding anything contained in the Indian Penal Code (45 of 1860), a registered medical practitioner shall not be guilty of any offence under that Code or under any other law for the time being in force, if any pregnancy is terminated by him in accordance with the provisions of this Act.

(2) Subject to the provisions of sub-section (4), a pregnancy may be terminated by a registered medical practitioner, -

(a) where the length of the pregnancy does not exceed twelve weeks, if such medical practitioner is, or

(b) where the length of the pregnancy exceeds twelve weeks but does not exceed twenty weeks, if not less than two registered medical practitioners are,

of opinion, formed in good faith, that -

(i) the continuance of the pregnancy would involve a risk to the life of the pregnant woman or of grave injury to her physical or mental health; or

(ii) there is a substantial risk that if the child were born, it would suffer from such physical or mental abnormalities as to be seriously handicapped.

Explanation 1. - Where any pregnancy is alleged by the pregnant woman to have been caused by rape, the anguish caused by such pregnancy shall be presumed to constitute a grave injury to the mental health of the pregnant woman.

Explanation 2. - Where any pregnancy occurs as a result of failure of any device or method used by any married woman or her husband for the purpose of limiting the number of children, the anguish caused by such unwanted pregnancy may be presumed to constitute a grave injury to the mental health of the pregnant woman.

(3) In determining whether the continuance of a pregnancy would involve such risk of injury to the health as is mentioned in sub-section (2), account may be taken of the pregnant woman's actual or reasonably foreseeable environment.
(4) (a) No pregnancy of a woman who has not attained the age of eighteen years, or who, having attained the age of eighteen years, is a mentally ill person, shall be terminated except with the consent in writing of her guardian.

(b) Save as otherwise provided in clause (a), no pregnancy shall be terminated except with the consent of the pregnant woman.

Rape victims are thus empowered under the Act to undergo abortion, to prevent further worsening of their trauma. Explanation 2 to Section 3(2) shows that a married couple is vested with the right to get rid of an unwanted pregnancy where it results from the failure of a contraceptive. But these exemptions can be availed of only on fulfilment of the conditions and procedure laid down in Sections 3 and 4 and the rules and regulations made under the Act. Section 4 enumerates the places where a pregnancy may be terminated. It says:

4. Place where pregnancy may be terminated. - No termination of pregnancy shall be made in accordance with this Act at any place other than -

(a) a hospital established or maintained by Government, or

(b) a place for the time being approved for the purpose of this Act by Government or a District Level Committee constituted by that Government with the Chief Medical Officer or District Health Officer as the Chairperson of the said Committee:

Provided that the District Level Committee shall consist of not less than three and not more than five members including the Chairperson, as the Government may specify from time to time.

Section 312 of the IPC allows abortion in good faith for saving the life of the woman. The Indian Penal Code does not prescribe any qualification for the abortionist, but Section 3 of the Medical Termination of Pregnancy Act, 1971 requires that the abortion be performed by a registered medical practitioner only.
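The eligibility conditions of Section 3 quoted above form a small decision procedure, and can be sketched as such. The following is an illustrative, simplified model only, not legal advice: the function and field names are the author's hypothetical choices, and procedural requirements (approved place under Section 4, practitioner qualifications under Rule 4, the Section 5 emergency exception) are deliberately omitted.

```python
# Simplified sketch of the conditions in s.3, MTP Act, 1971 (as quoted above).
# All names are hypothetical; this omits the Act's procedural requirements.

from dataclasses import dataclass

@dataclass
class Case:
    gestation_weeks: int          # length of the pregnancy
    good_faith_opinions: int      # registered practitioners who formed the opinion
    risk_to_life_or_health: bool  # s.3(2)(i): risk to life / grave injury to health
    risk_of_abnormality: bool     # s.3(2)(ii): substantial risk of serious handicap
    woman_consents: bool          # s.3(4)(b)
    minor_or_mentally_ill: bool   # triggers s.3(4)(a)
    guardian_consents: bool       # written consent of guardian, s.3(4)(a)

def termination_permitted(c: Case) -> bool:
    """Return True if the simplified s.3 conditions are all met."""
    # Consent: the woman's own consent, or the guardian's written consent
    # where she is a minor or a mentally ill person (s.3(4)).
    if c.minor_or_mentally_ill:
        if not c.guardian_consents:
            return False
    elif not c.woman_consents:
        return False

    # At least one substantive ground under s.3(2)(i)-(ii) must apply.
    if not (c.risk_to_life_or_health or c.risk_of_abnormality):
        return False

    # Required number of good-faith opinions depends on gestation length.
    if c.gestation_weeks <= 12:
        return c.good_faith_opinions >= 1   # s.3(2)(a): one practitioner
    if c.gestation_weeks <= 20:
        return c.good_faith_opinions >= 2   # s.3(2)(b): two practitioners
    return False  # beyond 20 weeks s.3 does not apply (see s.5 for emergencies)
```

For example, a consenting adult woman at 10 weeks with one good-faith opinion on a Section 3(2)(i) ground satisfies the sketch, while the same facts at 16 weeks fail for want of a second opinion.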
"Registered medical practitioner" is defined in Section 2(d) of the Act, which says:

2(d) "registered medical practitioner" means a medical practitioner who possesses any recognised medical qualification as defined in clause (h) of section 2 of the Indian Medical Council Act, 1956 (102 of 1956), whose name has been entered in a State Medical Register and who has such experience or training in gynaecology and obstetrics as may be prescribed by rules made under this Act.

The abortionist must therefore also possess the experience or training in gynaecology and obstetrics prescribed by the rules made under the Act. Rule 4 of the Medical Termination of Pregnancy Rules, 2003 lays this down as follows:

4. Experience and training under clause (d) of section 2. - For the purpose of clause (d) of section 2, a registered medical practitioner shall have one or more of the following experience or training in gynaecology and obstetrics, namely: -

(a) in the case of a medical practitioner who was registered in a State Medical Register immediately before the commencement of the Act, experience in the practice of gynaecology and obstetrics for a period of not less than three years;

(b) in the case of a medical practitioner who is registered in a State Medical Register: -

(i) if he has completed six months of house surgeoncy in gynaecology and obstetrics; or

(ii) unless the following facilities are provided therein, if he had experience at any hospital for a period of not less than one year in the practice of obstetrics and gynaecology; or

(c) if he has assisted a registered medical practitioner in the performance of twenty-five cases of medical termination of pregnancy, of which at least five have been performed independently, in a hospital established or maintained, or a training institute approved for this purpose, by Government.

(i) This training would enable the registered medical practitioner (RMP) to do only first-trimester terminations (up to 12 weeks of gestation).
(ii) For terminations up to twenty weeks, the experience or training as prescribed under sub-rules (a), (b) and (d) shall apply. (d) in the case of a medical practitioner who has been registered in a State Medical Register and who holds a post-graduate degree or diploma in gynaecology and obstetrics, the experience or training gained during the course of such degree or diploma.

The Medical Termination of Pregnancy Act, 1971 does not prescribe any punishment for the woman, but it does not spare the person who performs an abortion in violation of the provisions of the Act. Section 5 says:

5. Sections 3 and 4 when not to apply.- (1) The provisions of section 4, and so much of the provisions of sub-section (2) of section 3 as relate to the length of the pregnancy and the opinion of not less than two registered medical practitioners, shall not apply to the termination of a pregnancy by a registered medical practitioner in a case where he is of opinion, formed in good faith, that the termination of such pregnancy is immediately necessary to save the life of the pregnant woman. (2) Notwithstanding anything contained in the Indian Penal Code (45 of 1860), the termination of pregnancy by a person who is not a registered medical practitioner shall be an offence punishable with rigorous imprisonment for a term which shall not be less than two years but which may extend to seven years under that Code, and that Code shall, to this extent, stand modified. (3) Whoever terminates any pregnancy in a place other than that mentioned in section 4 shall be punishable with rigorous imprisonment for a term which shall not be less than two years but which may extend to seven years. (4) Any person being owner of a place which is not approved under clause (b) of section 4 shall be punishable with rigorous imprisonment for a term which shall not be less than two years but which may extend to seven years.
Explanation 1.- For the purposes of this section, the expression "owner" in relation to a place means any person who is the administrative head or is otherwise responsible for the working or maintenance of a hospital or place, by whatever name called, where a pregnancy may be terminated under this Act.

Explanation 2.- For the purposes of this section, so much of the provisions of clause (d) of section 2 as relate to the possession, by a registered medical practitioner, of experience or training in gynaecology and obstetrics shall not apply.

Section 5 punishes the abortionist, but it also puts the woman in peril, because nobody wants to risk prosecution, and in most cases the practitioner will simply refuse to perform the procedure. Some relief is, however, given to RMPs, as Section 8 of the Act says:

8. Protection of action taken in good faith. - No suit or other legal proceedings shall lie against any registered medical practitioner for any damage caused or likely to be caused by anything which is in good faith done or intended to be done under this Act.

The Medical Termination of Pregnancy Regulations, 2003, framed by virtue of Section 7 of the Act, require doctors to fulfil some further conditions.

In India, abortions often take place not as an exercise of the right over one's body but to get rid of the girl child. Hatred for the girl child in India needs no special explanation here, as everybody is acquainted with the situation. The national figure of 933 women per 1,000 men (2001 census) further substantiates this. In some states like Punjab, which is also considered one of the most prosperous states, conditions are worse: there are only 874 women per 1,000 men. The prevailing practice in India is that, after determining the sex of the foetus through modern techniques, the child is killed in the mother's womb if found to be a girl. That is why the unborn child, especially the girl child, is given special protection through the Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994.
The Act prohibits sex determination, through which the unborn girl child can be protected. Female foeticide is so rampant in India that a woman is not only restricted in her right to abortion but is also prevented from knowing the sex of the child in her womb, as the Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994 prohibits pre-natal sex-determination tests. Abortion is restricted in almost all countries because nobody has the right to take anyone else's life, not even the mother.

As far as the question of the relevance of anti-abortion law in today's India is concerned, it is not difficult to say that anti-abortion laws are the need of the hour, because they check female foeticide and the declining sex ratio. No doubt arguments can be made in favour of a woman's right over her body, but there is little room for advancing such arguments in the conditions prevailing in India. The evil of female foeticide is not a creation of yesterday; it lies at the root of Indian society and worsens day by day, which is why the Indian judiciary has also had to come forward to control the practice. In Vinod Soni & Anr. v. Union of India, the Bombay High Court rejected the argument that the Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994 is unconstitutional on the basis of the right to privacy. It is noteworthy that in Roe v. Wade the apex court of the U.S. declared an anti-abortion law unconstitutional on the basis of the right to privacy; conversely, the Indian apex court has mounted pressure on the government for strict application of anti-abortion laws. Owing to the hatred for the girl child, the right of a woman over her body does not hold much weight in India. So the scale of justice in India tips in favour of the unborn child.
In adults the metabolic syndrome imposes a substantial risk for type 2 diabetes mellitus and premature coronary heart disease. Even so, no national estimate is currently available of the prevalence of this syndrome in adolescents.

Objective: To estimate the prevalence and distribution of a metabolic syndrome among adolescents in the United States.

Design and Setting: Analyses of cross-sectional data obtained from the Third National Health and Nutrition Examination Survey (1988-1994), which was administered to a representative sample of the noninstitutionalized civilian population of the United States.

Participants: Male and female respondents aged 12 to 19 years (n = 2430).

Main Outcome Measures: The prevalence and distribution of a metabolic syndrome among US adolescents, using the National Cholesterol Education Program (Adult Treatment Panel III) definition modified for age.

Results: The overall prevalence of the metabolic syndrome among adolescents aged 12 to 19 years was 4.2%; 6.1% of males and 2.1% of females were affected (P = .01). The syndrome was present in 28.7% of overweight adolescents (body mass index [BMI], ≥95th percentile) compared with 6.8% of at-risk adolescents (BMI, 85th to <95th percentile) and 0.1% of those with a BMI below the 85th percentile (P<.001). Based on population-weighted estimates, approximately 910 000 US adolescents have the metabolic syndrome.

Conclusions: Perhaps 4% of adolescents and nearly 30% of overweight adolescents in the United States meet these criteria for a metabolic syndrome, a constellation of metabolic derangements associated with obesity. These findings may have significant implications for both public health and clinical interventions directed at this high-risk group of mostly overweight young people.
THE PREVALENCE of obesity and diabetes mellitus among adults in the United States has increased during the past decade.1 Recent data indicate that 65% of the US adult population is either overweight, defined as a body mass index (BMI, calculated as the weight in kilograms divided by the height in meters squared) of 25 or more, or obese (BMI ≥ 30).2 In children and adolescents, the term overweight is used in place of obese and is defined as a BMI at or above the 95th percentile on age- and sex-specific growth charts from the Centers for Disease Control and Prevention.3 Overweight tripled among US children between 1970 and 2000, and 15% of 6- to 19-year-olds are overweight according to the most recent estimates.4 Obesity is estimated to cause approximately 300 000 deaths annually, and its 1-year direct and indirect costs are estimated to be $117 billion.5 Leaders in the emerging field of preventive cardiology have increasingly recognized obesity's role in adult cardiovascular disease. Correspondingly, the guidelines for adult cholesterol and the primary prevention of cardiovascular disease reflect this increased recognition of obesity's role.6,7 The guidelines for cholesterol also target the metabolic syndrome, a constellation of metabolic derangements that predict both type 2 diabetes mellitus and premature coronary artery disease, as a newly recognized entity that warrants clinical intervention. According to the National Cholesterol Education Program (NCEP, or Adult Treatment Panel III [ATP III]), persons meeting at least 3 of the following 5 criteria qualify as having the metabolic syndrome: elevated blood pressure, a low high-density lipoprotein (HDL) cholesterol level, a high triglyceride level, a high fasting glucose level, and abdominal obesity. 
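The BMI definition and adult cutoffs cited above (weight in kilograms divided by height in meters squared; overweight at BMI ≥ 25, obese at BMI ≥ 30) can be sketched as small helpers. The function names are illustrative, not from the paper:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def adult_weight_category(bmi_value: float) -> str:
    """Classify an adult using the cutoffs cited above (BMI >= 25 overweight, >= 30 obese)."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

print(round(bmi(100.0, 1.8), 1))               # 30.9
print(adult_weight_category(bmi(100.0, 1.8)))  # obese
```

Note that for children and adolescents these fixed cutoffs do not apply; as the text explains, overweight is instead defined by the age- and sex-specific 95th percentile on CDC growth charts.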
Because of the increasing rates of adult obesity and obesity's association with insulin resistance and type 2 diabetes, the NCEP panel stated that the metabolic syndrome will soon have a greater impact on premature coronary artery disease than does tobacco.8 According to recent estimates, the metabolic syndrome affects 22% of the US adult population overall, including 7% of men and 6% of women in the 20- to 29-year age group.9 As childhood overweight increases,10,11 its medical complications are becoming more common and more frequently recognized.12-14 For example, the prevalence of type 2 diabetes has risen dramatically among adolescents in the past 20 years.13 Studies suggest that a substantial percentage of overweight children and adolescents may be afflicted with the metabolic syndrome because many have 1 or more of the following: an elevated triglyceride level, a low HDL cholesterol level, and high blood pressure.15,16 Many overweight children also have elevated insulin levels, indicating an increase in insulin resistance.16 When one considers that autopsy studies have revealed that overweight in adolescence is associated with accelerated coronary atherosclerosis,17 recent trends become even more troubling. The purpose of the current study is to estimate the prevalence and distribution of a metabolic syndrome in adolescents using a nationally representative sample of the US population. Data from the Third National Health and Nutrition Examination Survey (NHANES III, 1988-1994) were examined. The NHANES III used a complex, multistage design to provide a representative sample of the noninstitutionalized civilian population of the United States. Approximately 40 000 persons aged 2 months to 65 years or older were studied.
Young and old persons and ethnic minorities such as African Americans and Mexican Americans were oversampled.18 After being evaluated in a home interview to determine family medical history, current medical conditions, and medication use, participants were randomly assigned to undergo a morning, afternoon, or evening examination at the mobile examination centers. Morning participants were asked to fast for 8 hours; afternoon and evening participants were asked to fast for 6 hours. The details of the determination and analysis of triglyceride levels, HDL cholesterol levels, and glucose values have previously been described.9,19 For adolescents aged 17 years and older, 6 seated blood pressure readings were taken in 2 separate settings. The household interviewer took 3 measurements at the participant's home, and the study physician took 3 during the evaluation in the center. The first and fifth Korotkoff sounds were used to represent the systolic and diastolic values.18 We used the mean of these 6 measurements in these analyses. Adolescents aged 12 to 16 years did not have their blood pressure taken at home, and thus this age group had only the 3 measurements taken by the physician. Again, the mean was used. Height was measured in an upright position with a stadiometer, and weight was measured at a standing position using a self-zeroing scale (Mettler-Toledo, Inc, Columbus, Ohio). The waist circumference measurement was made at the midpoint between the bottom of the rib cage and above the top of the iliac crest. Measurements of waist circumference were made for each subject at minimal respiration to the nearest 0.1 cm.18 The Tanner stage of pubic hair development was used as an indicator of sexual maturity because it was obtained for both sexes.20 There was standardized training for physicians performing these examinations, and photographs and written descriptions were available for reference. 
Pubic hair was staged from 1, representing immaturity, to 5, for full maturity.20 The initial sample consisted of 3211 subjects aged 12 to 19 years, to whom the following exclusion criteria were applied: (1) had not fasted for 6 hours, (2) was currently pregnant, or (3) was taking medication classified as a blood glucose regulator, such as insulin, androgens or anabolic steroids, or adrenal corticosteroids. The final sample numbered 2430, including some individuals with 1 or more excluding factors. No children younger than 12 years were instructed to fast as part of NHANES III. The criteria for the metabolic syndrome in adults specified by NCEP's ATP III and the adapted definition used in this analysis for adolescents aged 12 to 19 years are shown in Table 1.7 Because these criteria have never been formally defined or applied in children or adolescents, we modified the adult criteria to the closest representative values obtainable from pediatric reference data. In developing a definition for metabolic syndrome in adolescents,21 we considered reference values from the NCEP Pediatric Panel report,22 the American Diabetes Association statement on type 2 diabetes in children and adolescents,23 and the updated Task Force report on the diagnosis and management of hypertension in childhood as well as ATP III.8 Because no reference values for waist circumference exist for adolescents or children, we analyzed all adolescents in the data set who had a waist circumference recorded. We classified participants with a waist circumference at or above the 90th percentile value for age and sex from this sample population as having abdominal obesity. Elevated systolic or diastolic blood pressure was defined as a value at or above the 90th percentile for age, sex, and height.21 If subjects reported current use of any antihypertensive drugs, they were labeled as participants with elevated blood pressure. 
This approach of counting participants taking medications was also used for examining the prevalence of the metabolic syndrome in adults in the same national data set.9 The NCEP Report of the Expert Panel on Blood Cholesterol Levels in Children and Adolescents22 and a table summarizing these values in a review by Styne14 were used to establish the criteria for cholesterol level abnormalities. The range of 35 to 45 mg/dL (0.91-1.16 mmol/L) is given for borderline low HDL cholesterol levels for all sexes and ages. In children aged 10 to 19 years, a borderline high range for triglyceride levels is given as 90 to 129 mg/dL (1.02-1.46 mmol/L). Therefore the midpoint value for HDL cholesterol (≤40 mg/dL [≤1.03 mmol/L]) was used as a 10th percentile value, and the midpoint value for triglycerides (≥110 mg/dL [≥1.24 mmol/L]) was taken as the 90th percentile value for age. The reference value for elevated fasting glucose was taken from the American Diabetes Association guideline of 110 mg/dL or higher (≥6.1 mmol/L).23 Prevalence values were compared using the χ2 test for proportions for those children with and without the metabolic syndrome. Comparisons of means of continuous variables were done with the t test. Children identified in the racial/ethnic category "other" were included in the overall sample analyzed, but this subsample was too small for meaningful analysis separately. To account for the complex sampling design, SAS24 and SUDAAN25 statistical software were used in the analysis, and SUDAAN was used to apply sampling weights to produce national estimates. Demographic characteristics associated with the metabolic syndrome in bivariate analyses are shown in Table 2. The overall prevalence of the metabolic syndrome in adolescents was 4.2%. It was more common in males (6.1%) than in females (2.1%) and was more frequent in Mexican Americans (5.6%) and whites (4.8%) than black subjects (2.0%). 
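The modified adolescent definition described above (at least 3 of 5 criteria: triglycerides ≥110 mg/dL, HDL ≤40 mg/dL, fasting glucose ≥110 mg/dL, waist circumference at or above the 90th percentile, and blood pressure at or above the 90th percentile) can be sketched as follows. This is an illustrative reading of the published cutoffs, not the study's actual analysis code; the two percentile-based criteria depend on age- and sex-specific reference tables, so they are passed in as precomputed flags:

```python
def meets_adolescent_metsyn_criteria(
    triglycerides_mg_dl: float,
    hdl_mg_dl: float,
    fasting_glucose_mg_dl: float,
    waist_at_or_above_90th_pct: bool,
    bp_at_or_above_90th_pct: bool,
) -> bool:
    """Return True when at least 3 of the 5 modified ATP III criteria are met."""
    criteria = [
        triglycerides_mg_dl >= 110,    # high triglycerides (>= 90th percentile proxy)
        hdl_mg_dl <= 40,               # low HDL cholesterol (<= 10th percentile proxy)
        fasting_glucose_mg_dl >= 110,  # high fasting glucose (ADA cutoff)
        waist_at_or_above_90th_pct,    # abdominal obesity, from reference tables
        bp_at_or_above_90th_pct,       # elevated blood pressure, from reference tables
    ]
    return sum(criteria) >= 3

# High triglycerides, low HDL, and abdominal obesity together qualify:
print(meets_adolescent_metsyn_criteria(130, 38, 95, True, False))  # True
```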
By region of the country, the rate was highest in the West and Midwest and lowest in the Northeast. Findings for age (12-14 years vs 15-19 years), Tanner stage by pubic hair, poverty level, and parental history of diabetes and myocardial infarction were not significant. When stratified by BMI, 28.7% of overweight adolescents (BMI ≥95th percentile for age and sex) met criteria for the metabolic syndrome. A comparison of the final sample with those subjects who were excluded revealed only 1 difference by demographic characteristics (BMI also did not differ): The percentage of African Americans was slightly higher in the excluded group (18.9% vs 14.6% in the overall sample; P = .05). The proportion of subjects with 1 or more abnormalities of the metabolic syndrome is presented in Table 3. In this sample, 41% of subjects had 1 or more of these risk factors, whereas 14% had 2 or more. There were no subjects who had all 5 of these risk factors. The prevalence of the metabolic syndrome by sex and race/ethnicity is shown in Figure 1. The prevalence among white (7.1%) and Mexican American males (7%) was nearly the same, whereas black males had the lowest rate at 2.6% (P = .003). Among females, Mexican Americans (4.1%) had the highest rate, whereas black females (1.4%) had the lowest rate (P<.001). Prevalence of the metabolic syndrome by sex and race/ethnicity. The distribution of each element of the metabolic syndrome is shown in Table 4. Overall, high triglyceride levels and low HDL cholesterol levels were most common, whereas high fasting glucose levels were the least common. White adolescents had the highest rates of high triglyceride levels (25.5%) and low HDL cholesterol levels (26.1%). Mexican American subjects had the highest rate of abdominal obesity by waist circumference (13.0%). Black adolescents had the highest proportion of elevated blood pressure (6.2%). 
Adolescents with the metabolic syndrome had a mean BMI of 30.1 and, on average, were at the 95.5th percentile for BMI by age and sex (data not shown). Of those adolescents who fulfilled these criteria for the metabolic syndrome, 25.2% were at risk for overweight, by BMI, and 73.9% were overweight. The metabolic syndrome has been called several other names, including syndrome X, insulin resistance syndrome, dysmetabolic syndrome X, Reaven syndrome, and the metabolic cardiovascular syndrome.15,26 Obesity, insulin resistance, dyslipidemia, and hypertension are common to all. The World Health Organization used "metabolic syndrome" in their 1998 report on diagnosis and classification of diabetes mellitus.27 Both the World Health Organization and ATP III chose this title for their consensus definitions.7,27 We believe that this is the first study to examine the prevalence and distribution of a metabolic syndrome in a nationally representative sample of US adolescents. Perhaps 4% of adolescents overall and nearly 30% of overweight adolescents meet the criteria for this syndrome, suggesting that almost 1 million adolescents in the United States are affected. The metabolic syndrome affects an estimated 47 million American adults.9 The syndrome emerges when a person's predisposition for insulin resistance is worsened by increasing adiposity; dyslipidemia, elevated blood pressure, and proinflammatory and prothrombotic properties result.28 Adults with this syndrome frequently progress to type 2 diabetes and demonstrate markedly increased risk for morbidity and mortality from cardiovascular disease.29- 31 The metabolic syndrome in adults is largely confined to the overweight population32 and represents a subgroup of obese persons who bear a level of risk for cardiovascular disease that exceeds that of the obese in general. 
An estimated 7% of men and 6% of women aged 20 to 29 years are affected with the metabolic syndrome,9 so our finding that 4% of those aged 12 to 19 years may have this syndrome should not be surprising. Four previous regional studies of children that relied on US and international samples demonstrated the clustering of the risk factors for the metabolic syndrome and reported rates from 2% to 9%.33- 36 Overweight has important implications for the future health of our young people, especially in terms of coronary heart disease and diabetes. The Pathobiological Determinants of Atherosclerosis in Youth research group, for example, found that overweight (by BMI) in young men was associated with fatty streaks, raised lesions, and low-grade stenosis of the coronary arteries.17 In addition, studies have established that child and adolescent obesity tracks into adulthood and also predicts the metabolic syndrome in adults.37- 39 Results of one of the many reports from the Bogalusa Heart Study40 show that when insulin concentrations are increased in childhood they tend to remain elevated in adulthood, and those adults with consistently elevated insulin levels tend also to have increased rates of obesity, hypertension, and dyslipidemia. In the present study, adolescents with the metabolic syndrome had a mean BMI just above the 95th percentile; thus, they represent a fairly common clinical problem, one likely to be encountered routinely by general pediatricians. 
Abdominal or centrally distributed fat is associated with type 2 diabetes and a poor cardiovascular profile in adults.41-45 In children, an increased waist circumference has been shown to correlate with abnormal systolic and diastolic blood pressures and elevated serum levels of total cholesterol, low-density lipoprotein, triglyceride, lipoprotein, and insulin, as well as lower concentrations of HDL.36,46,47 The association between the clustering of cardiovascular risk factors and waist circumference is not only a reflection of the degree of obesity but is also dependent on the regional distribution of the excess body fat.48,49 Thus, because a more central distribution of fat correlates with worse cardiovascular risk and waist circumference has been shown to be the strongest correlate of central fat distribution in children,50 it seems appropriate to use waist circumference in a pediatric definition of metabolic syndrome. In fact, BMI is a less sensitive indicator of fatness in children and fails to account for fat distribution.51 Perhaps for these reasons, an American Heart Association statement has recommended the inclusion of waist circumference measurements in evaluating children for insulin resistance or those who manifest features resulting from insulin resistance that constitute much of the metabolic syndrome.49 Given the growing concern about metabolic syndrome, coupled with the alarming increase in the prevalence of overweight in children and adults, it is not surprising that the American Heart Association set forth a series of guidelines for promoting cardiovascular health as part of comprehensive pediatric care.49,52,53 Evidence shows that obesity and insulin resistance have already started "the clock of coronary heart disease" in some adults, even before the onset of diabetes.29 We cannot definitely state that this would be the case for overweight adolescents with the metabolic syndrome according to our definition, but this seems likely for many because
the syndrome is a constellation of cardiovascular risk factors. Cluster-tracking studies have shown that multiple cardiovascular risk factors persist from childhood into adulthood in 25% to 60% of cases.54,55 One study showed that subjects who either developed or lost their risk factor clustering over time had significant changes in their adiposity and lifestyle behaviors related to nutrition and physical activity.55 The first limitation of the data we present concerns how to define the metabolic syndrome for pediatric patients. The intent was to create a definition for metabolic syndrome in adolescents for initial epidemiologic investigation and for possible future clinical consideration. The concept was to identify borderline high (or borderline low, in the case of HDL) values for each criterion from established guidelines for children and adolescents. In some instances, as with BMI, age- and sex-specific criteria are recommended to identify abnormal patients.56,57 In contrast, in the case of glucose and serum cholesterol, screening guidelines give single specific cutoff values for identifying abnormal subjects.23,58,59 Although the rates of abnormal cholesterol values in adolescent subjects may seem higher than expected, 30% of adults from the same data set had hypertriglyceridemia and 37% had a low HDL cholesterol value according to the ATP III criteria for the metabolic syndrome.9 There might also be concern that the cholesterol cutoff values used might lead to some overestimation or underestimation, but there was no difference in the prevalence of metabolic syndrome between 12- to 14-year-olds vs 15- to 19-year-olds (P = .92). When teenagers were stratified by Tanner stage, there was also no statistical difference in the rates of this syndrome phenotype, but rates increased among Tanner 2 and Tanner 3 individuals and decreased among Tanner 4 and 5 subjects.
Although no national definition of the metabolic syndrome in adolescents currently exists, obesity treatment guidelines recommend identifying youth with medical complications of their obesity.56,57 Even recent scientific statements on cardiovascular disease prevention or obesity and insulin resistance in children have not presented a definition of metabolic syndrome for research or clinical application.49,52 Some other limitations to consider include the cross-sectional nature of these data, which do not allow causal inferences and limit any assumptions about the duration of the existence of any of the criteria, such as blood pressure or cholesterol level. Also, since NHANES III was conducted, both obesity and type 2 diabetes have become more common among adolescents,13,60 which may mean that this clustering of risk factors may have a higher prevalence now than it did during data collection. Despite subjects 12 years and older being instructed to fast, just more than 700 subjects from the original sample had to be eliminated for not fasting for at least 6 hours. Although 6 hours of fasting may not be ideal, it allowed a larger sample size to be analyzed by having subjects from afternoon and evening examinations included. Finally, it should be noted that owing to the low prevalence of the metabolic syndrome, some cell sizes were small when stratified by demographic characteristics. Multiple prospective reports confirm that the clustering of risk factors for the metabolic syndrome are developing during childhood,16,33,40,46,61 and studies of the metabolic syndrome in adults show that its prevalence increases with age.9 Our findings highlight a high percentage of overweight adolescents who may bear a heightened risk for future metabolic syndrome in adulthood with subsequent increased risks for premature cardiovascular disease and type 2 diabetes. 
Consistent with recent commentaries that have called for better ways to define overweight in children,62 use of a consensus definition for the metabolic syndrome to assess overweight adolescents might be a useful strategy to target a group at increased risk. Targeting adults with glucose intolerance and other markers of the metabolic syndrome has been employed in trials to prevent type 2 diabetes in adults.63- 65 The high prevalence of metabolic syndrome in overweight adolescents, however, emphasizes the need for effective preventive and therapeutic strategies that rely on diet, exercise, and lifestyle modification rather than medications. Otherwise, the financial burden imposed by obesity may be matched by the costs of treatment. This study demonstrates that a metabolic syndrome phenotype may exist in perhaps 4% of the US adolescent population and almost 30% of overweight adolescents. Of those adolescents with metabolic syndrome, the great majority were overweight. This syndrome may affect almost 1 million adolescents in the United States. The impact of the metabolic syndrome in adolescents on subsequent morbidity and mortality has not, however, been explored, nor has the potential to reduce these risks by weight loss, increased activity, or pharmacological alteration of associated metabolic derangements. Nonetheless, these data indicate that a substantial percentage of US adolescents may be at significantly heightened risk for the metabolic syndrome in adulthood and the subsequent risks for type 2 diabetes and premature coronary artery disease. Perhaps they should be considered candidates for aggressive therapeutic interventions to maintain healthy lifestyle into and throughout adulthood. Corresponding author and reprints: Stephen Cook, MD, University of Rochester School of Medicine and Dentistry, Department of Pediatrics, 601 Elmwood Ave, Box 278881, Rochester, NY 14642 (e-mail: [email protected]). Accepted for publication April 14, 2003. 
This study was supported by Faculty Development in Primary Care grant T32PE12002 from the Health Resources and Services Administration, Rockville, Md (Dr Cook). Childhood overweight currently affects 15% of children, and more than 60% of adults are overweight. Recently, the metabolic syndrome has been shown to affect more than 20% of the age-adjusted adult population and is closely related to the obesity epidemic. It is a clustering of metabolic derangements that reflect or portend insulin resistance, type 2 diabetes, and premature cardiovascular disease. To date, there has been no estimate of the potential disease burden for children or adolescents. This study suggests that the phenotype of the metabolic syndrome may affect 4% of adolescents in the United States, with nearly 80% of adolescents who meet the criteria employed for the metabolic syndrome being overweight. Almost 30% of the overweight youth in this sample have 3 or more of the risk factors for the metabolic syndrome, thus qualifying under the criteria employed. Because metabolic syndrome significantly increases the risk of type 2 diabetes and premature coronary artery disease in adults, adolescent subjects who continue to manifest this risk factor profile may constitute a subgroup of overweight teenagers to target for lifestyle behavior changes. Cook S, Weitzman M, Auinger P, Nguyen M, Dietz WH. Prevalence of a Metabolic Syndrome Phenotype in Adolescents: Findings From the Third National Health and Nutrition Examination Survey, 1988-1994. Arch Pediatr Adolesc Med. 2003;157(8):821-827. doi:10.1001/archpedi.157.8.821
Osteopathic medical education is similar to allopathic medical education in many ways, but it is uniquely different in others. State licensing agencies and many hospitals recognize the degrees as equivalent. Admission processes and requirements for prospective students are similar. Curricula are consistently four years long, divided into two years of basic sciences and two years of clinical rotations. Curricular presentation may vary from traditional lecture-based formats to integrated models stressing case-based and problem-based learning. Medical education in osteopathic and allopathic schools, however, differs in mission, curricular emphasis, and types of faculty.1 The culture of osteopathic medical schools supports students entering primary care careers more than most allopathic medical schools do. Dissimilarities in tradition, philosophy, and health care delivery2,3 define differences in the teaching and practice of clinical medicine. Osteopathic medical education preserves a link between human body structure and function to promote a more holistic approach to patient care than allopathic physician education normally does. Osteopathic medicine’s emphasis on primary care, family and preventive medicine, musculoskeletal health, and wellness influences curricula as well as diagnostic approaches. While allopathic clinical training is traditionally done in academic teaching hospitals, osteopathic clinical education places a greater reliance on voluntary faculty at community-based hospitals. 
Utilizing community-based institutions allows for more extensive exposure to the practice of primary care medicine.4 These differences are dynamically shown at Michigan State University, where osteopathic and allopathic students share faculty resources and classroom experiences during the first two years of medical school but complete their clinical training in different hospital systems.5,6 The clinical training of osteopathic medical students, therefore, differs significantly from allopathic clinical training. Differences emerge from the historical roots of the osteopathic profession, the links to community-based hospital settings, and the professional emphasis on primary care. In this paper, we explore the historical roots of osteopathic clinical training, describe the typical osteopathic clinical preparation, and outline the significantly different methods of delivering clinical training in three osteopathic medical schools: University of Medicine and Dentistry of New Jersey–School of Osteopathic Medicine, Kirksville College of Osteopathic Medicine, and Ohio University College of Osteopathic Medicine. History of Osteopathic Medicine: Impact on Clinical Training In 1892, osteopathic medicine began as a reformation movement when Andrew Taylor Still, a country physician, opened the American School of Osteopathy in a small house in Kirksville, Missouri, bought with his personal savings. In contrast, Johns Hopkins University opened its medical school a year later with a $7 million endowment. Osteopathic medical schools initially grew rapidly, but then they experienced a post-Flexner decline similar to that of allopathic schools. 
By 1970, osteopathic physicians were licensed in almost every state, although there were only six osteopathic medical schools: the original five schools—Philadelphia College of Osteopathic Medicine, College of Osteopathic Medicine and Surgery in Des Moines, Chicago College of Osteopathic Medicine, Kansas City College of Osteopathic Medicine, and Kirksville College of Osteopathic Medicine—and Michigan State University College of Osteopathic Medicine, founded in 1969. One significant limitation on the growth of osteopathic schools was their exclusion from federal research dollars until 1956.7 The past four decades have shown increasing growth in osteopathic medical education, from the five original schools in the 1960s to 25 colleges of osteopathic medicine and three branch campuses today. Osteopathic physicians (DOs) were traditionally excluded from the medical staff of allopathic hospitals, particularly in large institutions. This led to the growth of smaller, community-based hospitals that still play a critical role in the current training of osteopathic students. DOs were not recognized as physicians by the armed forces during World War II. However, many MDs were drafted, and many of their patients went to osteopathic physicians for care. Because allopathic hospitals kept their ban on allowing DOs on staff, the increased numbers of patients led to further growth of the community-centered osteopathic hospitals. These hospitals recognized the need for well-trained physicians, not only in primary care but also in specialties. Residency programs were rapidly developed; these programs relied on the volunteerism of attending physicians. The spirit of osteopathic volunteerism not only continues today but also remains an integral part of osteopathic clinical training as a whole. 
The vast majority of clinical educators within the osteopathic profession express their strong commitment to the ideal of “giving back” by devoting their professional time and energy to the education of their successors without remuneration. For example, in the three colleges of osteopathic medicine we discuss in this paper, 87% of the faculty members have voluntary status; these are supplemented by many other DOs at their affiliate hospitals who do not have faculty appointments. Historically, osteopathic hospitals were the cornerstone of osteopathic clinical training. These community-based hospitals fit well into the primary care mission of the osteopathic colleges. Although students of osteopathic medicine may have limited exposure to such tertiary services as transplants and advanced neurosurgery in community-based hospitals, they gain extensive experience with common illnesses and procedures. The Challenges of Rapid Expansion The rapid growth of osteopathic medical schools, including new schools, new branch campuses, and expansion of existing campuses, has created an additional challenge for osteopathic clinical training. This challenge is shared by allopathic schools, which are also expanding, and is compounded by offshore medical schools that pay hospitals large sums for clinical training sites. The challenge is minimized by the standards of the Commission on Osteopathic College Accreditation (COCA), which require adequacy of clerkship sites before new colleges can begin and before expansion is approved. A few osteopathic colleges pay hospitals for medical student training; the great majority do not. Some colleges of osteopathic medicine contract with others to offer clinical training opportunities. COCA also requires detailed plans before allowing osteopathic colleges to expand class size. Hospitals with osteopathic graduate medical education programs affiliate with schools to ensure that their programs fill. 
The “Typical” Osteopathic Clinical Experience The American Association of Colleges of Osteopathic Medicine’s 2005–2006 Annual Report on Osteopathic Medical Education, the most current data available, derived data from the annual survey of the 20 colleges of osteopathic medicine operating in those years. The data show the major clinical clerkships required in the third and fourth years are family/community medicine, internal medicine, general surgery, pediatrics, obstetrics–gynecology, and psychiatry. Nineteen of the schools required emergency medicine. Osteopathic manipulative medicine (OMM) is integrated throughout these experiences. An additional rotation in OMM is offered by 33% of those surveyed. The leading rotations are internal medicine (11.1 weeks, or 17.2%), family/community medicine (10.9 weeks, or 16.9%), and pediatrics (9.3 weeks, or 14.4%). The typical osteopathic student spent 7.8 weeks (12.1%) on OMM clinical experiences. Thus, the “typical” osteopathic medical student does indeed complete a required curriculum that strongly emphasizes primary care. The student averages 50.4 weeks (over 50%) of his or her required rotations in family/community medicine, general internal medicine, pediatrics, and geriatrics. Other leading selections include obstetrics–gynecology, emergency medicine, general surgery, cardiology, critical care, gastroenterology, pulmonary, hematology/oncology, infectious diseases, psychiatry, nephrology, neurology, orthopedics/orthopedic surgery, rehabilitation medicine, rheumatology, and urology/urological surgery8 (Table 1). 
Three Osteopathic Medical Schools University of Medicine and Dentistry of New Jersey–School of Osteopathic Medicine: An academic health center In 1978, an act of the New Jersey legislature founded the University of Medicine and Dentistry of New Jersey–School of Osteopathic Medicine (UMDNJ-SOM), one of several state-supported osteopathic schools established in the 1970s (starting with the Michigan State University College of Osteopathic Medicine, created in 1969). Like its UMDNJ allopathic sister schools, the Robert Wood Johnson Medical School and the New Jersey Medical School, UMDNJ-SOM functions as a traditional academic medical center with a centralized hospital system. UMDNJ-SOM’s principal clinical affiliate is the three-hospital, 600-bed Kennedy Health System (KHS), which began when three separate osteopathic hospitals merged in 1981. UMDNJ-SOM is also affiliated with Our Lady of Lourdes Medical Center, a tertiary, 437-bed hospital in Camden, New Jersey. All four hospitals are within a 15-minute drive of the school’s main campus in Stratford, New Jersey. The KHS Stratford Division adjoins the school’s campus. Most of the school’s full-time faculty have hospital privileges at one or both institutions. The system allows all core clinical rotations to have full-time faculty members as clerkship directors. These clerkship directors meet regularly so that learning issues, administrative issues, and learning outcomes can be monitored and continuously improved. UMDNJ-SOM faces several of the same challenges as other osteopathic medical schools. Despite its large, 211-member, full-time faculty, the school relies on volunteer faculty for training, particularly in selected specialties and in its unique first-year family medicine preceptor program. The spirit of volunteerism is challenged by increasing clinical demands on the physicians, yet the tradition persists. 
Volunteer faculty members are not paid; the school finds other ways to value their contributions, such as letters, public recognition, and invitations to school events. Many of these volunteers have faculty appointments and titles. They must complete the same credentialing as full-time faculty members. The American Osteopathic Association (AOA) requires 120 hours of continuing medical education (CME) every three years for all members, and acting as a preceptor for students and residents can account for 60 of these required hours. The school allows volunteer faculty to attend many CME activities at little or no cost. Maintaining the osteopathic uniqueness in the third and fourth years is also a challenge for UMDNJ-SOM. KHS is now a mixed-staff hospital (53% DO and 47% MD). The staff at Our Lady of Lourdes Medical Center is predominately allopathic. The school has responded in several ways, including adding a required OMM clerkship in the third year and introducing an in-hospital consultative service in OMM at the KHS. The school is implementing osteopathic learning scenarios, case-based sessions that integrate OMM in each required clinical rotation. Another important response to the challenge is the development of the family medicine preceptor program. This program begins with early clinical experiences in the first year and continues with an eight-week preceptor experience in the third year during the family medicine clerkship. These preceptors are all osteopathic family practitioners, and all use OMM. They receive annual faculty development, including faculty development in OMM, during the annual meeting of the state osteopathic society. Ohio University College of Osteopathic Medicine: The statewide CORE system In 1995, the Ohio University College of Osteopathic Medicine (OU-COM) formalized its affiliation with 11 teaching hospitals throughout the state by forming the Centers for Osteopathic Research and Education (CORE). 
The resulting educational consortium, which has expanded to include 12 teaching hospitals, became the vehicle for delivering the college’s third- and fourth-year curriculum by providing clinical training opportunities for 200 third- and fourth-year predoctoral students from OU-COM and an additional 120–160 third- and fourth-year students from three colleges of osteopathic medicine located in other midwestern states. Third-year students, after ranking their top five choices, are assigned by a proprietary computer program to 1 of 12 base training sites. A board of directors, composed of representatives from the college and member hospitals (which, as of this date, number 24), governs the consortium. To unite such a dispersed academic administration, the college established the CORE office network. Each member hospital has a dedicated three-person CORE office staff, all employees of the OU-COM. The CORE assistant dean, a DO who holds faculty status with Ohio University through the OU-COM, is responsible for overall supervision of the CORE office and for mentoring and monitoring the professional progress of students assigned to that CORE site. Each CORE site also has an administrator (a master’s-level educator primarily responsible for student scheduling and day-to-day coordination and implementation of the site’s academic program) and an administrative assistant/associate (who provides secretarial support). Each CORE site unit reports to the central Office of Predoctoral Education, located on the Athens campus and headed by the associate dean and run by the director of predoctoral education. Videoconferencing technology allows the associate dean to meet with the entire group of assistant deans monthly; likewise, the director meets with the CORE administrators once per month. Five times a year, representatives from all CORE offices gather at a central location for a combined meeting. 
The CORE Academic Steering Committee consists of educational representatives from each CORE member hospital, including the CORE assistant deans, directors of medical education, the director of CORE research, OU-COM clinical and biomedical department chairs, faculty development directors, and various members of the OU-COM Offices of Predoctoral Education and Graduate Medical Education. This body meets monthly at a central location in Ohio to discuss issues germane to medical education locally, statewide, and nationally. Student learning outcomes for the clinical training years are monitored in a variety of ways, including computerized pre- and postrotation exams, preceptors’ evaluations of students’ performances using the seven AOA core competencies, and triannual individual progress reports by the CORE assistant deans. The CORE staff consistently attempt to identify students’ professional training ambitions and to guide them to appropriate training opportunities with the CORE consortium. After each rotation, students evaluate the program and preceptor; composite summaries of these evaluations are shared with teaching faculty and appropriate clinical departments on an annual basis. Twice a year, the associate dean for predoctoral education conducts individual site visits to meet with students, CORE office personnel, and teaching faculty. The information and data gathered and exchanged are used to further refine the academic program. To implement its clinical curriculum, OU-COM relies heavily on the osteopathic tradition of volunteerism. As part of its commitment to these generous supporters, OU-COM provides faculty development and complementary CME opportunities, both centrally coordinated from the Athens campus. In addition, many members of the teaching faculty value their involvement in predoctoral training as part of a recruitment effort for graduate medical programs at their institution. 
The multicampus configuration of the consortium challenges the consistency and flow of learning through both the third and fourth years. The learning objectives for the 79 weeks of required and elective clinical clerkships are centrally coordinated and locally implemented, using the organizational structure that includes the Athens-based Office of Predoctoral Education and the CORE offices at each training hospital. With the help of a generous grant from the Ohio Osteopathic Association, OU-COM held a series of retreats that resulted in the creation of a curriculum explicitly devoted to enhancing OMM skills in the third and fourth years. OMM “champions” at each CORE site took the lead in implementing this curriculum and, using such teaching and learning resources as student manuals and instructor PowerPoint presentations, introduced specific training in OMM skills into the formal didactic portion of the clinical years. The Kirksville College of Osteopathic Medicine: The regional campus system The Kirksville College of Osteopathic Medicine (KCOM), the descendant of Dr. Still’s original American School of Osteopathy, is a private, community-based medical school and part of A.T. Still University. It provides a classic 2–2 split in medical education, where the first two years are predominately didactic, heavily loaded with the basic sciences and osteopathic manipulative medicine, and the third and fourth years consist of clinical rotations. KCOM’s clinical rotations occur in regions, arranged predominately by state. Currently, KCOM has regions in Missouri, Michigan, Minnesota, Wisconsin, Indiana, Ohio, New Jersey/Pennsylvania, Florida, Arizona, Utah, and Colorado. Regional deans organize and supervise the student rotations. Learning objectives for these rotations are determined at one of the two annual regional deans’ meetings. 
KCOM selects a regional dean either through association with existing structures, such as Ohio University’s CORE, or through an application process directed by the associate dean for clinical educational affairs. The regional dean is paid for part-time service by KCOM. The system of regional deans allows the associate dean for clinical educational affairs to maintain vigilance over these geographically diverse sites. Support staff at each site, employed by KCOM, report to the regional dean, handle scheduling and student affairs issues and student nonacademic administration, and enter academic and nonacademic data into a KCOM-wide database. Outcomes from these data, including differences among sites, are tracked by the associate dean. The majority of these regional sites are hospital based. Some regional sites (Colorado and Utah) are preceptor-based rotations. These rotations are designed to offer extensive experiences in ambulatory medicine. Students are assigned to sites using a lottery method very similar to OU-COM’s. The method is described to students in detail during the admissions process. Hospital rotations are limited to a few months of core clerkships in internal medicine, pediatrics, surgery, and obstetrics–gynecology. The remainder of the time, students follow patients in a hospital setting from the preceptor’s practice. All students must pass a clinical skills exam at the end of their second year before they are permitted on rotations. They are tested on interviewing skills, physical examination skills, the interpretation of basic diagnostic tools, and their performance in standardized and simulated experiences. To assess the teaching of OMM, KCOM has developed modules to be presented by OMM fellows and local faculty at each regional campus. These modules are presented on “education days” held regularly, usually monthly, in each of the regions. Currently, KCOM is developing rotations in OMM in each of the regions. 
Because KCOM students are at regional campuses throughout the nation, reliable and valid student assessment is a challenge. Each rotation has a predetermined set of explicit learning objectives that are assessed at the end. Students’ progress is monitored using logs, exit objectives lists, postrotation exams, National Board of Medical Examiners (NBME) “shelf exams,” end-of-third-year exams, and NBME end-of-rotation exams. The capstone experience encompasses bringing the entire class to Kirksville for a performance examination that includes testing with standardized patients, high-fidelity human patient simulation, and objective structured clinical examinations. Students are rated on compassion, integrity, communication skills, professionalism, documentation skills, performance of skills, gathering and interpreting histories, and implementation of OMM skills. On rotations, students are required to make two formal case presentations, attend education days, present at two journal clubs, and write one paper of acceptable quality. Students are also required to take an end-of-rotation examination. The preceptor and the regional assistant dean, the director of student medical education, or their designee, assess each student’s professionalism, compassion, and integrity. Students are also required to pass a performance examination at the end of their third year, structured similarly to the prerotation examination but administered at a higher level of difficulty. Beginning this year, a third part of the assessment will include the use of student portfolios for formative assessment. The one measure of learning outcomes common to all three schools is the National Board of Osteopathic Medical Examiner’s (NBOME) Comprehensive Osteopathic Medical Licensing Examination (COMLEX). The examinations are “designed to assess the osteopathic medical knowledge and clinical skills considered essential for osteopathic generalist physicians to practice medicine without supervision. 
COMLEX-USA is constructed around medical problem solving, which involves clinical presentations and physician tasks. Candidates are expected to utilize the philosophy and principles of osteopathic medicine to solve medical problems.” The Level 2 examinations are designed to measure clinical skills. The Level 2-CE computer-based examination is a multiple-choice/matching assessment “integrating the clinical disciplines of emergency medicine, family medicine, internal medicine, obstetrics/gynecology, osteopathic principles, pediatrics, psychiatry, surgery, and other areas necessary to solve medical problems.”9 The Level 2-PE examination tests clinical skills in a standardized patient setting. A description of each examination is available at the NBOME Web site (http://nbome.org). The NBOME notes that statistics about student performance in medical disciplines—surgery, obstetrics–gynecology, psychiatry, family medicine, pediatrics, internal medicine, emergency medicine, and osteopathic principles and practice—may not be valid and are, therefore, not included in this paper. The exams are not a perfect outcome measure, because many variables, including admissions criteria, class diversity, and the curricula of the first two years, can have an impact on the scores. Thus, the scores among schools cannot be directly compared, but they do give insight into the clinical training programs at each school. The results for the last two years are shown in Tables 2–5. First-time takers at all three schools performed above the national mean on the vast majority (90%) of all outcome measures on the two examinations. In some situations, particularly COMLEX Level 2-PE, the difference between pass rates was sometimes two or three students. The overall success of all three schools suggests that all three methods of clinical training can be successful. 
In the clinical training of osteopathic medical students, colleges rely on many of the profession’s historical strengths, including a great tradition of volunteerism and a group of strong, community-based hospital affiliates. Osteopathic medical schools overcome the many challenges of clinical training through varying models (academic medical center, statewide CORE, and regional campuses) and innovative programs (early osteopathic primary care preceptors, 79-week clinical curricula, and preceptor-based affiliates). Despite the pronounced differences between allopathic and osteopathic training, we believe these models can be adapted by our allopathic counterparts as they meet the Association of American Medical Colleges’ call to expand their own class sizes. 1Peters AS, Clark-Chiarelli N, Block S. Comparison of osteopathic and allopathic medical schools’ support for primary care. J Gen Intern Med. 1999;14:730–739. 2Ward RC, ed. Foundations for Osteopathic Medicine. Baltimore, Md: Williams & Wilkins; 1997. 3Sun C, Pucci GJ, Jew S. Musculoskeletal disorders: Does the osteopathic medical profession demonstrate its unique and distinctive characteristics? J Am Osteopath Assoc. 2004;104:149–155. 4Shlapentokh V, O’Donnell N, Grey MB. Osteopathic interns’ attitudes toward their education and training. Med Educ. 1991;91:786–802. 5Jacobs A. Osteopathic and allopathic collaboration. Paper presented at: 16th Annual Berkshire Medical Conference: Collaborations in Medicine; July 12, 2000; Hancock, Mass. 6Tulgan H, DeMarco WJ, Pugnaire MP, Buser BR. Joint clinical clerkships for osteopathic and allopathic medical students: New England experience. J Am Osteopath Assoc. 2004;104:212–214. 7Gallagher RM, Humphrey FJ. Osteopathic Medicine: A Reformation in Progress. New York, NY: Churchill Livingstone; 2001. 8American Association of Colleges of Osteopathic Medicine. 2006 Annual Report on Osteopathic Medical Education. Chevy Chase, Md: American Association of Colleges of Osteopathic Medicine. 
© 2009 Association of American Medical Colleges 9National Board of Osteopathic Medical Examiners Web site. Available at: http://www.nbome.org. Accessed February 18, 2009.
Science in Christian Perspective The Impact of Psychology's Philosophy of Continual Change on Evangelical Christianity EVERETT L. WORTHINGTON, JR. Department of Psychology Virginia Commonwealth University Richmond, Virginia 23284 From: JASA 36 (March 1984): 3-8 Hegel proposed that the content of truth was always changing. Only the dialectic process of change was permanent. This philosophy has been adopted, often uncritically, by scientists, including psychologists. Psychologists emphasize "processes" in their research, practice and public communication. Because psychologists use "objective" methods (even though their methods are directly influenced by Hegelian philosophy) and because they address subjects of great concern, psychologists risk undermining the Christian faith by assuming that nothing is unchanging except processes of change. I suggest that scientists who are Christians take the lead in restoring a balanced scientific methodology that supports the eternal truths of Scripture. Modern psychology has become a study of the processes of change in living animals, including humans. The lexicon of psychology is replete with explicated "processes" and with concern over "development," yet thirty-five years ago these words were rarely used. This current state represents large-scale acceptance of a philosophy that could have detrimental consequences for evangelical Christianity. This paper examines the historical development of this emphasis on change, especially within psychotherapy. Some of the dangers of this emphasis are explored, and Christians who are scientists are urged to exert leadership in restoring a balanced view of human existence. Historical Conflict Parmenides, the champion of being, believed that Truth was eternal and unchanging. Stability was reality. The appearance of change, he believed, was an illusion based on our faulty senses. Because the senses were suspect, Truth had to be apprehended through reason and logical argument. 
Parmenides, thus, founded rationalism. Heraclitus, to the contrary, suggested that ever-changing fire made up the elementary nature of the universe. Stability was thus an illusion. A river looks the same today as yesterday, but in fact the river is continually composed of different water molecules. Heraclitus proclaimed that no person ever steps into the same river twice. This conflict between being and becoming is at the center of western thought, and the ebb and flow of emphasis on one pole or the other has molded science into what it is today. Aristotle (384-322 B.C.), Plato's student, did not distinguish between the physical universe and the realm of Forms as widely as Plato did. For Aristotle, universals existed to be discovered. Although he believed that universals needed to be apprehended through the mind, he did not believe that they were created through the mind. Aristotle linked universals with particulars by creating intermediate steps: classes or species. He identified four types of causality and made possible a natural science (of sorts) by emphasizing the physical universe, physical causes, and a science of being that allowed people to investigate what existed. The influence of Plato and Aristotle carried the day for a science of being. If Truth existed and was unchanging, then people could investigate and understand Truth. (Of course, opponents of a philosophy of being always existed, though they had little impact given its widespread acceptance.) In Europe the philosophy of being was solidified by the influx of the Israelites after the dispersion of 70 A.D. and by the spread of Christianity into Europe. Both Judaism and Christianity assumed a philosophy of stability and eternality to be at the core of existence. 
Thus, by the beginning of the sixteenth century, a rational-deductive science of being, which trusted mainly in logic and deduction from presuppositions rather than in observation, had been forged. Still lacking was a formal, empirically-based framework on which the flesh of science could be draped, thereby building a materialistic philosophy of being. In 1687, Sir Isaac Newton (1642-1727) published Principia Mathematica. Newton's laws of motion legitimized the philosophy of being because they proposed that the physical universe was in essence unchanging: bodies at rest tended to stay at rest, and bodies in motion tended to remain in constant motion. As philosophies of being had always assumed, deviation from (normal) stability required the intervention of an external agent (force, in Newton's system). Forces introduced instability. They needed explaining. Constancy of motion was assumed normal and consequently did not need explaining. Newton's laws, of course, formalized a philosophy of being and legitimized it as a materialistic explanation for the nature of the world. Almost immediately, however, anomalies (findings inconsistent with Newton's theories) were discovered. These were ushered in on the arm of the "methodological revolution" (Rossi, 1975, p. 249). Inventions of the microscope and telescope, experiments on the vacuum, and the discovery of the circulation of blood suggested a science in which motion and change were normative and bodies at rest needed explanation. Yet, for years Newton's ideas continued to nurture growth of physical knowledge. This changed in the late eighteenth and early nineteenth centuries with the articulation of a philosophy of becoming by Kant (1724-1804) and Hegel (1770-1831). Kant asserted what Koch (1981) calls four "antinomies of pure reason" (p. 262): that human existence is concerned with questions that are meaningful but rationally undecidable in principle. 
This undermined rationalism, though it was not until years later that the structure collapsed. Hegel replaced the idea of absolute truth with the concept of the dialectic.

Because the idea of change and development was so central to his thought, Hegel was forced to conclude that the traditional, formal and (as he called it in derogation) static logic of Aristotle was hopelessly inadequate, and that it had to be replaced by what he called a dialectical logic more adequate to deal with the Absolute. Aristotle had said that a thing must either have an attribute or its opposite at a given time but Hegel disagreed, usually calling attention to intermediate or twilight zones when, he said, a thing appears to possess neither. (White, 1955, p. 41)

Hegel's philosophy did not do away with all absolutes (as Schaeffer, 1968, has argued); rather, it proposed that content was ephemeral and always "becoming" while the process of change was universal or absolute. The process always involved the dialectic: thesis, antithesis, synthesis. Hegel thus delivered an apparently mortal wound to Plato (though it takes time to die). The philosophy of becoming slipped like a razor-sharp stiletto between the ribs of science. In political science, Karl Marx (1818-1883) applied Hegel's dialectical reasoning process to a materialistic conception of nature. This dialectical materialism was the first truly acceptable application of "becoming" philosophy to a naturalistic science. Then, "becoming" entered biology. Although theories of evolution had been extant for years (Lamarck, 1744-1829), evolutionary theory was only widely accepted after Hegel and Marx. In 1859, Darwin (1809-1882) published Origin of Species. Strangely, he had written down his ideas in 1842 but did not publish them for 17 years because the ideas were philosophically repugnant to him (Leahey, 1980).
By the 1900s even physics was reconceptualized by Einstein (1879-1955) and others (see Barnett, 1948, for a review of the remaking of physics). The nature of the universe was seen as relativistic and ever-changing. By 1927, with the influence of Heisenberg, the universe was conceptualized as probabilistic at base. In general, the main assumption of science today is that the universe is constantly changing and relativistic. Scientific laws are concerned either with probability or with describing the process of an assumed change. For example, in a recent article in The Chronicle of Higher Education, Roark (1981) summarized the major questions in biology as "How fast do plants and animals evolve? By what means do they change? Through what processes do new species emerge?" (p. 3). No longer are assumptions of change debated. The basic questions involve describing the nature of the assumed change process. The most widely accepted philosophy of science (Kuhn, 1970, 1977) reflects the view of science as ever-changing. Kuhn (1970) presents science as a collective cognitive map, or paradigm, of the phenomenal world. This paradigm is subject to periodic extensive reorganizations (collective perceptual shifts) called scientific revolutions. When a scientific revolution is imminent, proponents of the extant paradigm are unable to solve significant problems (anomalies) within the paradigm, which, because of focused attention on the paradigm's failure, induces a crisis. Proponents of different paradigms propose solutions. When scientists must choose between a new paradigm, which is supported by little research but which solves the anomalies, and the established paradigm, which is in crisis, then what Kuhn (1977) calls the "essential tension" occurs. Kuhn has been criticized by other philosophers of science, notably Lakatos (1970), as proposing an irrational view of science in which "progress" is meaningless except from within a paradigm.
Lakatos argues that scientific revolutions are not progress, but merely set science on new pathways. Toulmin (1972) has proposed a more rational philosophy of science based on evolution rather than revolution. Concepts are thought to survive or perish through natural selection. Concepts that make sense survive; those that don't make sense perish. Both Kuhn's and Toulmin's philosophies embody the assumption of continual change.

The Evolution of Experimental Psychology

Within this philosophical climate, experimental psychology was born. Contrary to the emerging emphasis on change, at its inception psychology investigated content rather than change processes. Wundt investigated the content of consciousness; Freud, the content of the unconscious mind. Within the United States, however, a school of psychology arose from James's (1842-1910) and Dewey's (1859-1952) philosophizing. To these pragmatists, truth was a process of adaptation. James and Dewey lauded the "stream of consciousness," or the "functions" of consciousness. Their psychology was called functionalism. They abandoned the study of the content of consciousness to study the process by which consciousness operates. That influence is still prevalent in the United States today. For example, Zick Rubin (1981) described the state of modern psychology as follows:

The rallying cry of the 1970's has been people's virtually limitless capacity for change, not only in childhood but through the span of life.... The view that personality keeps changing throughout life has picked up so many adherents recently that it has practically become the new dogma. (p. 18)

The study of morals and values provides an example of modern experimental (social) psychology. In contrast to a traditional Christian approach to morals, which emphasizes the content of God's laws and the demands those moral laws make on humans, psychologists have researched the process of moral development, irrespective of the content of morals.
Two of the leading researchers in this area are Lawrence Kohlberg (1973) and William Perry (1970). Kohlberg has identified six stages of moral development. The notion of developmental stages presupposes that reasoning processes change. Likewise, Perry has identified nine stages of intellectual and ethical development during the college years. Both of these scientists use longitudinal research to support their theories. Such research looks for, and finds, changes with time. Thus, their methods emphasize change. In a sense this creates the conception among consumers of research that change is the essence of human existence. Not all researchers on values employ methods that focus attention on value change. Milton Rokeach (1968) assesses the value structures of individuals. He has people rank order values in each of two lists: terminal values (desired end states) and instrumental values (desired ways of behaving). Rokeach, through this methodology, treats the content of value structures as important in predicting human behavior. Yet, among researchers in values, Rokeach is in the minority. The majority of researchers investigate the process of value clarification, the process of value development, or the process of influencing people to change values.

Modern Counseling and Clinical Psychology

Counseling and clinical psychology were spawned by Freud and are only recently (in their Oedipal stage) seeking to "kill" the father. Freud was largely a pre-Hegelian thinker. He investigated universals: universal structures of the mind and universal developmental stages. Prior to World War II the dominant theories of psychotherapy focused on stability of personality and on universal truths about human nature. Clinical psychology was formed and nurtured through personality and intelligence assessment, which assumed that individuals maintained stable traits. Counseling psychology also was originated through assessing traits and factors in vocational counseling.
There were advocates of becoming, to be sure (e.g., behaviorism), but counseling and clinical psychology were largely based on assumptions of being rather than of becoming. In the early 1950s, Carl Rogers (1951) proposed client-centered therapy. He not only propounded a counseling theory that focused on the process of counseling, but he also introduced a philosophy of continual growth and change and "becoming." Rogers' 1951 model of personality remained largely content-oriented, paralleling Piaget's cognitive theory by using constructs like the real self (experience) and the ideal self (a cognitive map of one's experience). He also borrowed heavily from Freud, by using such concepts as introjection of values and psychic defenses (denial and distortion), and by emphasizing the emotions. By 1957, Rogers had deemphasized these remnants from the age of being. He had begun to concentrate on the "necessary and sufficient conditions of change in psychotherapy," and thus on the process of counseling. At about the same time, Harry Stack Sullivan (1954) proposed an interpersonal approach to psychodynamic counseling. He attended to the interpersonal process of counseling and deemphasized the content of the patient's problem. Sullivan was a harbinger of modern interpersonal process theories of counseling, including more recent theorizing by Kiesler (1979). In general, these theories view the content of conversations during therapy as merely the veneer over an interpersonal fencing match between therapist and client. The thrust, parry, and riposte of interpersonal influence is termed the "process" of counseling. With Rogers and Sullivan, therapies of being were swept relentlessly aside and replaced by a horde of therapies of becoming.
Notable among these were the existentialists (May, 1958), gestalt therapy (Perls, 1969), and behavior therapy (Bandura, 1969). The theories of Rogers and Sullivan introduced "counseling process" into the vocabularies of clinicians, though few consider the philosophical underpinnings of attending primarily to "counseling process." In recent years, theories of psychotherapy have touted therapy processes. The degree to which theorists attend to content of thoughts and to "universals" varies from somewhat to not-at-all. For example, three major approaches to psychotherapy currently dominate the field: cognitive-behavior modification, psychodynamic therapy, and family therapy. Each of these shows enormous concern for processes and little concern for content. First, let us consider cognitive-behavior modification. One might expect that cognitive therapies would examine contents of consciousness. This is rarely the case. Albert Ellis (1962) proposed rational-emotive therapy (RET). At the core of RET is uncovering people's "universal" irrational ideas. Ellis views these "universal" ideas as culture-specific but, nonetheless, he shows some concern with content of cognitions. Ellis certainly does not espouse a modern psychology of being, however, for A New Guide to Rational Living (Ellis & Harper, 1975) is written in a language called E-prime. The primary characteristic of E-prime is that it uses no form of the verb to be. Ellis is not concerned with being, but with action (i.e., with becoming). This attention to the possibility of universal thoughts contrasts with other cognitive theorists. Aaron Beck (1975) also modifies clients' dysfunctional automatic thinking and faulty cognitive processes, regardless of their content. Donald Meichenbaum (1977) modifies self-instructions and faulty cognitive processes. Behavior therapy in the form of its founders (Eysenck, 1959; Wolpe, 1958) has all but been abandoned. Freudian psychodynamic therapies have also become process-oriented.
Currently, there are two major thrusts of psychodynamic psychology. Some therapists analyze ego processes and ego development. Others analyze interpersonal processes. Having begun with Sullivan (1954), this approach advocates an almost content-free analysis of what happens between therapist and client in the counseling session. Individual psychotherapy is rapidly declining in popularity (though it will probably never die) and more therapists are becoming attracted to family systems approaches (Bowen, 1978; Haley, 1976; Minuchin, 1974). Generally, family approaches are process-oriented rather than content-oriented. They do not assume linear causality. They are epitomes of relativistic theories, and thus they embrace the zeitgeist of secular psychology in the 1980s.

Consequences of Philosophies of Becoming and Being in Light of Scripture

Thus far we have traced the historical roots of a philosophy of becoming and suggested its prevalence in modern experimental as well as counseling and clinical psychology. This paper is based on the assumption that science in general and psychology in particular are among the primary molds through which modern thought is shaped. Psychologists, whether they are theorists, researchers or both, influence many people. Psychologists influence researchers and theorists in training. Most psychology trainees enter graduate school with little knowledge of the theories of psychology (though all have "implicit" theories). Throughout graduate school, trainees are exposed to (usually) an eclectic sampling of psychological approaches. The implicit norm is that students will learn what is taught; thus, a social pressure is applied for students to adopt or adapt the current secular theories of psychology. This is done through training students practically in research methods and/or in methods of counseling. Values inherent in research or counseling methods are often not specifically addressed because they are assumed.
Psychologists-in-training are inculturated. Values are caught more than taught, and because of the small prior information base of most students, their graduate school experiences are very influential. It is not uncommon for a Christian to enter graduate training and adopt methods that are philosophically inconsistent with Christianity. The values of secular psychology may, consequently, be transmitted unwittingly by the trainee (and even by the professor, too). Psychological researchers and theoreticians only occasionally influence other practicing therapists and researchers. According to Kuhn (1970), established scientists are less susceptible to influence than are trainees. Perhaps they rarely read current research or books. Or perhaps they have a psychological commitment or resource commitment to an established treatment or research program. Or perhaps their information base is large enough to require a great "shock" to dislodge established beliefs. For whatever reason, established scientists or practitioners are not very susceptible to influence. Yet, through repeated exposure to philosophic assumptions or through personal crises, which open individuals' eyes to previously unconsidered beliefs, some established psychologists are influenced. As C.S. Lewis (1970) observed:

Our faith is not very likely to be shaken by any book on Hinduism. But if whenever we read an elementary book on Biology, Botany, Politics, or Astronomy, we found that its implications were Hindu, that would shake us. It is not the books written in direct defense of Materialism that make the modern man a materialist; it is the materialistic assumption in all other books. (p. 93)

In the same way it is implicit assumptions inherent in the methodology of science that are later adopted by the lay public.
Furthermore, psychology is assumed to be even more influential than institutionalized Christianity at transmitting values and beliefs to the public, for almost everyone is exposed to psychology through schools and through the media, whereas only a minority is exposed to institutional Christianity. For the evangelical Christian, this means that science should promote values consistent with Scripture. Polanyi (1946) has argued forcefully that science is by nature value-laden. Recently this idea has been cogently applied to psychology by Sigmund Koch (1981) in the American Psychologist, psychology's most prestigious journal. He criticizes psychology, and science in general, for being slavishly wed to thought that "regards knowledge as the result of 'processing' rather than discovery" (Koch, 1981, p. 259). Koch clearly explicated the certain link between the methods of psychology and the assumptions of philosophy:

Psychology is necessarily the most philosophy-sensitive discipline in the entire gamut of disciplines that claim empirical status. We cannot discriminate a so-called variable, pose a research question, choose or invent a method, project a theory, stipulate a psychotechnology, without making strong presumptions of philosophical cast about the value of our human subject matter, presumptions that can be ordered to age-old contexts of philosophical discussion. (p. 267)

Given that Christians who are scientists want to behave consistently with Scripture, what does Scripture teach in this area? God is unchanging (Mal. 3:6). Jesus is unchanging (Heb. 13:8), God's attitudes are unchanging (Ps. 118:2), his promises are unchanging (Gen. 17:7), his kingdom is unchanging (Ps. 145:13), his way is everlasting (Ps. 139:24), and in our lifetimes the law is eternal (Matt. 5:18). There is clear evidence from Scripture that God is an Absolute Being, with unchangeable attributes. Acts by humans that are contrary to those attributes were, are, and always will be wrong.
There is a strong case for assuming a permanence of divine and human attributes. On the other hand, there is a small amount of evidence of dialectical logic within Scripture, and there are some universal processes of human existence identified within Scripture (e.g., sanctification is a universal process by which Christians learn to rely more closely on God). It appears that stability, eternality, and "being" are extremely important to the traditional Christian world-view. Furthermore, God saw fit to create and canonize the Bible within the Judaic culture, which clearly reflected a philosophy of being rather than of becoming. Since philosophies of becoming existed at least as early as 500 B.C. (Heraclitus of Ephesus), one would assume that God could have established Judaism and Christianity as embodying a philosophy of becoming if He had so desired. Yet, God called His people apart from the standards and philosophies which were "popular" or "accepted" within the larger culture. Thus, within psychology, a self-strengthening loop has been established. Most psychologists have adopted a philosophy that deemphasized the content of thoughts and behaviors and emphasized psychological processes. The philosophy of scientists determines scientific methods, which influence scientific findings, which confirm the philosophy. This "evidence" contributes to cultural acceptance of the philosophy, because scientists have objectively discovered the nature of "reality." Therefore, although Scripture does not directly prescribe what the nature of scientific research is to be, we conclude that a Scripture-consistent position for psychologists to take would include general adherence to a philosophy of being. This suggests that psychologists concern themselves with universals: both universal contents of thoughts, motives, emotions, and behaviors and universal processes of interaction and development.
Because of the prevailing emphasis on processes and on change, taking a position that affirms examining content in addition to processes can seem foolish. In the face of the modern "reality" of empirical science that "proves" (by presuming) the all-pervasive nature of change, it takes either foolishness or courage (depending on one's assumptions) to adopt a contrary position, especially with the possibility of rejection by peers, not to mention rejection by promotion and tenure committees. Unless Christian psychologists develop powerful new methods to assess content, we risk being seen as intellectually stagnant, as remnants of an epoch past. Clearly, I believe that psychologists overemphasize assumptions of continual change in their research and practice and that this overemphasis undermines the foundation of the Christian faith by assuming that nothing is unchanging except the principle of eternal change. As scientists we need to be aware of the effect that our research and practice have on beliefs in our culture. We need to question our scientific assumptions and examine our consciences concerning whether we believe the current emphasis on change reflects the heavenly and earthly reality. I believe that as Christians who are scientists, we must attempt to refocus the attention of the scientific community through developing new methods that will lead to new findings and theories that will in turn restore a balanced view of nature. These new methods must be consistent with a philosophy of science that is checked against Christian standards (the Scripture, the witness of the Holy Spirit, and the wisdom of the ages).

References

Bandura, A. Principles of Behavior Modification. New York: Holt, Rinehart and Winston, 1969.
Beck, A.T. Cognitive Therapy and the Emotional Disorders. New York: International Universities Press, 1975.
Bowen, M. Family Therapy in Clinical Practice. New York: Jason Aronson, 1978.
Ellis, A. Reason and Emotion in Psychotherapy. New York: Lyle Stuart, 1962.
Ellis, A., & Harper, R.A. A New Guide to Rational Living. Englewood Cliffs, NJ: Prentice-Hall, 1975.
Eysenck, H.J. Learning theory and behavior therapy. Journal of Mental Science, 1959, 105, 61-75.
Haley, J. Problem Solving Therapy. San Francisco: Jossey-Bass, 1976.
Kiesler, D.J. An interpersonal communication analysis of relationship in psychotherapy. Psychiatry, 1979, 42, 299-311.
King, R.R., Jr. Evangelical Christians and professional counseling: A conflict of values? Journal of Psychology and Theology, 1978, 6, 276-281.
Koch, S. The nature and limits of psychological knowledge: Lessons from a century qua "science". American Psychologist, 1981, 36, 257-269.
Kohlberg, L. Development of children's orientation towards moral order: Sequence in the development of moral thought. Vita Humana, 1973, 6, 11-36.
Kuhn, T.S. The Structure of Scientific Revolutions (enlarged ed.). Chicago: University of Chicago Press, 1970.
Kuhn, T.S. The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: The University of Chicago Press, 1977.
Lakatos, I. Criticism and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the Growth of Knowledge. Cambridge, England: Cambridge University Press, 1970.
Leahey, T.H. A History of Psychology: Main Currents in Psychological Thought. Englewood Cliffs, NJ: Prentice-Hall, 1980.
Lewis, C.S. Christian apologetics. In W. Hooper (Ed.), C.S. Lewis, God in the Dock: Essays on Theology and Ethics. Grand Rapids, Mich.: William B. Eerdmans Publishing Co., 1970 (originally presented by C.S. Lewis, 1945).
May, R. Contributions of existential psychology. In R. May (Ed.), Existence. New York: Simon & Schuster, 1958.
Meichenbaum, D. Cognitive-behavior Modification: An Integrative Approach. New York: Plenum, 1977.
Minuchin, S. Families and Family Therapy. Cambridge, Mass.: Harvard University Press, 1974.
Perls, F.S. Gestalt Therapy Verbatim. Toronto: Bantam Books, 1969.
Perry, W.
Forms of Intellectual and Ethical Development in the College Years. New York: Holt, Rinehart and Winston, 1970.
Polanyi, M. Science, Faith and Society. Chicago: The University of Chicago Press, 1946.
Roark, A.C. A "new synthesis" in evolution leads scientists to ask when and how life began. The Chronicle of Higher Education, March 23, 1981, pp. 3-4.
Rogers, C.R. Client-centered Therapy. Boston: Houghton Mifflin, 1951.
Rogers, C.R. The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 1957, 21, 95-103.
Rokeach, M. Beliefs, Attitudes, and Values. San Francisco: Jossey-Bass, 1968.
Rossi, P. Hermeticism, rationality and the scientific revolution. In M. Bonnelli & W. Shea (Eds.), Reason, Experiment, and Mysticism in the Scientific Revolution. New York: Science History Publications, 1975.
Rubin, Z. Does personality really change after 20? Psychology Today, 1981, 15(5), 18-27.
Schaeffer, F.A. Escape from Reason. Downers Grove, Ill.: InterVarsity Press, 1968.
Sullivan, H.S. The Psychiatric Interview. New York: Norton, 1954.
Toulmin, S. Human Understanding, Volume 1. Princeton, NJ: Princeton University Press, 1972.
Welkowitz, J., Cohen, J., & Ortmeyer, D. Value system similarity: Investigation of patient-therapist dyads. Journal of Consulting Psychology, 1967, 31, 48-55.
White, M. The Age of Analysis. London: New American Library, 1955.
Chapter 22. On the problem of water supply in the Hai-Luan plain

Commission for Integrated Survey of Natural Resources, Academia Sinica, Beijing

Is the plain of the Hai He and Luan He short of water? Do we need to transfer Chang Jiang water into the plain? How can we optimize the use of local water resources? These are important and controversial questions which we are all concerned about. The following views are based on my own field work and an investigation of relevant documentary materials.

WATER SHORTAGE ON THE HAI-LUAN PLAIN

Water Sources Cannot Satisfy Agricultural Requirements

The 126,000 km² Hai-Luan Plain lies between the Huang He in the south, the Taihang Shan in the west, the Yan Shan in the north and the Bo Hai in the east, at an altitude of 0 to 100 m above sea level. Though this region has a long history of cultivation, the level of agricultural production has consistently been very low. Before 1949 the grain yield was little more than 0.75 t/ha. Since then there has been a certain amount of improvement, but yields still fluctuate between 1.5 and 2.25 t/ha. The main reason for these low yields is water deficiency. Light and heat conditions are adequate for the growth of many crop varieties and in most areas allow two crops to be reaped in a year. The land is flat and contiguous in large tracts, highly suited for mechanized cultivation. The only deficiency is in moisture. First of all, precipitation is scant. The average annual precipitation varies from 400 mm in the vicinity of Hengshui to 600 mm in the piedmont and coastal areas. Moreover, only a small proportion of the catchment areas of the plain's rivers lies in the mountainous areas, so the inflow from this source is not very significant. The mean annual precipitation on the plain is 73.5 km³ and the runoff only 7.4 km³.
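These water-balance figures can be checked with a quick computation. The sketch below uses only the volumes quoted in this chapter; the variable names are mine:

```python
# Water balance of the Hai-Luan Plain, using the chapter's figures.
# All volumes are mean annual values in km^3.

plain_precip = 73.5    # precipitation falling on the plain itself
plain_runoff = 7.4     # runoff generated on the plain itself

# Runoff coefficient: the fraction of precipitation that becomes runoff.
plain_coeff = plain_runoff / plain_precip
print(f"Plain runoff coefficient: {plain_coeff:.3f}")   # ~0.101, vs. 0.452 nationally

# The surrounding mountains contribute additional inflow.
mountain_precip = 104.0
mountain_runoff = 20.95
print(f"Mountain runoff coefficient: {mountain_runoff / mountain_precip:.3f}")

total_runoff = plain_runoff + mountain_runoff
print(f"Total runoff reaching the plain: {total_runoff:.2f} km^3")
```

The plain's coefficient works out to 7.4 / 73.5 ≈ 0.101, matching the figure given in the text and underscoring how little of the scant rainfall is recoverable as runoff.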
The mountainous areas provide an additional runoff of 20.95 km³ (from a precipitation of 104 km³). Runoff per cultivated hectare is only 2,775 m³ in the basin, 10 per cent of the national average. Second, precipitation is uneven, both seasonally and from year to year. (For specifics, see Wei Zhongyi and Zhao Chunian, Chapter 7.) Third, potential evaporation is high (see Cheng Weixin, Chapter 19). Runoff depth is 50 mm in most of the plain and only 10 to 25 mm in the central part. The runoff coefficient is 0.101, far less than the national average of 0.452 for exterior drainage basins. The insufficiency and variability of available moisture combine with the flat topography (gradients of 1:10,000 to 1:20,000) to produce alternating drought, flooding and salinization, which are the basic causes of low yields. At present, virtually every locality on the plain which has good water control also has high and stable agricultural yields. For example, Quzhonzhuang Village was a saline area in the past where grain yields were only a few hundred kg per hectare. After land shaping and soil improvement centering on water control, especially well irrigation and drainage, the salts were leached out of the soil and drained off, causing output to increase steadily until now grain yields have reached 7.5 t/ha. There are numerous examples like this.

Water Sources Cannot Satisfy Industrial and Municipal Requirements

Mineral resources are abundant in the Hai-Luan plain, including the Fengfeng, Jingxing, Jingxi and Tangshan coalfields, as well as the Datong, Shuoxian and Yangquan fields in eastern Shanxi whose water use is related to that on the plain; the oilfields of Shengli, Dagang, Renqiu and Bo Hai; and the iron mines of Qian'an, Luanxian, Xingtai and Handan.
The exploitation of these resources furnishes favourable conditions for the development of thermal power generation, coal chemical industry, oil refining, petrochemicals and iron and steel smelting, but these industries are relatively large consumers of water. The required investment in water sources is therefore quite large for factories on the Hai-Luan Plain such as the newly built petrochemical base in Tianjin or the Dongfanghong petrochemical plant in Beijing. The lack of water sources often makes it difficult to come to a decision even on industrial siting. For example, the construction of a power station has been considered at the pit entrance of the Datong mine to use its coal as a means of solving the problem of power shortages in the Beijing-Tianjin-Tangshan system. The project has been postponed time and again because of the inability to fix the water sources and because of conflicts with agricultural water use.

Water Sources Cannot Satisfy Economic Requirements

The area under irrigation in the Hai-Luan Plain has reached 4.4 × 10⁶ ha, 52 per cent of the arable land. Water supply is insufficient in most of the irrigation districts, and some can only be irrigated once or twice a year. Great changes have occurred in surface runoff and groundwater conditions in many areas because of large-scale diversions of surface water and excessive extractions of groundwater. For example, in 1975 Hebei Province extracted nearly 9 km³ of groundwater. Although an increasing number of wells have been dug since, the total yield has not increased and the water tables of the aquifers in many places are being lowered year by year because the rate of extraction has exceeded the recharge. Lowering of the water tables was registered in 22 places in the province in 1975, occupying an area of 11,600 km². In some cases, this has developed very quickly. The overexploitation of the deep aquifers has brought with it a number of problems.
First, because of the lowering of the water table, per-well discharge has declined. Not only has the efficiency of irrigation fallen but irrigation costs have increased due to the increase in pumping lift. Second, the original pumping implements must be replaced because they do not meet the new requirements. In some cases, pumps have had to be replaced twice. Third, with the drop in the water table a number of old wells become dry each year and new replacement wells have to be drilled. In some regions and some years the number of abandoned wells exceeds that of new ones. Fourth, increasing mineralization has led to deterioration in water quality. In littoral areas, salt water intrusion can take place if the water table is too low. Fifth, there is land subsidence. Tianjin is located in the lower reaches of the Hai He, and has serious water supply problems. In the past, the municipality's inland navigation was highly developed. The Wei Canal, Ziya He and Daqing He provided important links between Tianjin and Baoding, Handan and other municipalities in Hebei Province. The coal produced at the Fengfeng and Jingxing mines could be transported directly by river to Tianjin. At present, however, the situation is quite different. Water is scant or absent in some reaches of those rivers. In other reaches locks have been built to store water, forming reservoirs. Not only is navigation impossible, but even the supply of several hundred million cubic metres of municipal water is hard to assure. One after another the Guanting, Wangkuai, Xidayang, Gangnan, Huangbizhuang and Yucheng reservoirs, which supplied water to Tianjin in the early 1960s, have stopped doing so as the result of increases in water use in the upper and middle reaches of the Hai He. Water supply from the remaining reservoir at Miyun is unreliable. Moreover, the flow loss is very serious, and water actually delivered to Tianjin is often less than one-third that released from the reservoir.
At present, per capita domestic water use averages only some 60 litres per day, one-third that of Shanghai. As a result of the severe 1980 north China drought, Tianjin's water shortage during the following winter was even more serious. Industrial water use was affected for months and the situation in agriculture was even more strained. In the 1950s Tianjin used the water of the Hai He to grow about 60,000 ha of paddy rice. At that time 82 per cent of grain output was rice and Tianjin's Xiaozhan variety was well-known throughout the country. Beginning in the early 1960s, however, the insecurity of water sources forced a changeover to upland crops and the area sown to paddy rice was reduced repeatedly. By the beginning of the 1970s, paddy rice could only be assured on about 700 ha. Because of the absence of water resources, Tianjin has been forced to continue to overexploit the deep aquifers within the municipality, and land subsidence has continued to develop as a result.

Conditions in Beijing are quite similar. Following the rapid growth in industry and agriculture and the expansion of urban construction, water use has increased daily. The supply of running water in the municipality and that of self-provided water sources in 1978 were 45 and 100 times the respective figures for 1949. At present industrial, agricultural, municipal and domestic water use in the municipality totals nearly 4.8 km³/annum. The excessive exploitation of groundwater that has occurred in Hebei and Tianjin has also appeared to varying degrees in Beijing. In the case of surface water, large reservoirs have been built at Guanting and Miyun over the past three decades and the runoff of the mountainous areas has been basically controlled. At present the reservoirs supply a very large portion of Beijing's industrial water, but the flow into some of them has decreased due to increases in water use in the upper reaches.
Although the inflow of the Miyun reservoir is relatively stable, it supplies Tianjin and Hebei as well as Beijing, and the resultant conflicts are extremely acute. In a word, the Hai-Luan Plain lacks water. The conflicts between supply and requirements are becoming ever more serious. The solution of this problem deserves our serious consideration.

NORTHWARD WATER TRANSFER IS IMPERATIVE, BUT DISTANT WATER CANNOT QUENCH PRESENT THIRST

This conclusion is drawn from the following two aspects:

The Necessity of Chang Jiang Water Transfer

From the viewpoint of economic development in the Hai-Luan Plain, the shortage of water is beyond any doubt. It would be difficult to satisfy requirements even with the best use of local water sources, including the Huang He. During dry years, the Plain's water resources have already reached a high degree of exploitation. The average flow into the sea of the Hai He during 1950-1972 was 9.24 km³/annum; within this period, however, the average for 1960-1972 was only 6.65 km³/annum. The water flowing into the Hai He from the mountainous areas was similar, about 9 km³, in both 1952 and 1968, but the flow into the sea in 1968 was only 0.347 km³, less than 7 per cent of the 1952 figure of 5.07 km³. This is due to the higher level of utilization of the water of the Hai in the later year. The degree of utilization is much higher in dry years such as 1968, when the flow into the sea was only 3.86 per cent of the amount produced in the mountains, leaving very little potential. Thus future water source development will mainly be a matter of storing and utilizing the runoff of average and wet years. This is not a simple matter.
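The utilization figures quoted here can be checked with a few lines of arithmetic. This is an illustrative sketch only; the inflow and discharge values are taken directly from the passage.

```python
# Degree of utilization: share of mountain-produced runoff consumed
# before it reaches the sea. Figures (km^3) are those quoted in the text.

def utilization(mountain_inflow: float, sea_discharge: float) -> float:
    """Fraction of runoff produced in the mountains that never reaches the sea."""
    return 1.0 - sea_discharge / mountain_inflow

# 1952 and 1968 had similar mountain inflow (~9 km^3) but very different
# discharge into the sea (5.07 vs 0.347 km^3).
print(f"1952: {utilization(9.0, 5.07):.1%} utilized")
print(f"1968: {utilization(9.0, 0.347):.1%} utilized")  # flow to sea only 3.86% of production
```

Running this reproduces the text's figure: in 1968 only 3.86 per cent of the mountain runoff reached the sea, i.e. utilization exceeded 96 per cent.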
Since the maximum discharge into the sea from the Hai basin was 33.7 km³ in 1963, nearly a hundred times that of 1968, and since wet years occur only once every several years and sometimes at intervals exceeding a decade, extensive evaporation losses may preclude using one year's water over a number of years even if there were sufficient storage capacity. In addition, it is necessary to maintain a certain amount of river runoff to carry the salts in the soil of the plain into the sea and to prevent silt accumulation in the river channel and estuary, which would affect flood paths. Nutrients required by littoral aquatic life could vanish if the runoff is exhausted. Consequently, both the comprehensive control of flood, drought and salinization on the plain and ecological equilibrium require a certain portion of water to be discharged into the sea. This means that it will not be long before the limits are reached in the exploitation of local water resources. The northward transfer of Chang Jiang water must be carried out as soon as possible.

The Formidable Nature of Chang Jiang Transfer

Although the development of water use on the Hai-Luan Plain and the natural conditions for diversion make it both necessary and feasible to transfer the water of the Chang Jiang, the formidable nature of the project should never be underestimated, because the conditions of the Hai-Luan Plain, sited on the final half of either the Middle Route or the East Route, are considerably more complex than those south of the Huang He. First, the project would be expensive. The East Route would divert water 660 to 1,150 km and require a pump lift of 65 to 70 m, making unit project investment costs far greater than for diversion into northern Jiangsu. At present, conveyance system investment costs on the Hai-Luan Plain are about 7 yuan per irrigated hectare. Combating salinity mainly requires constructing a drainage system corresponding to that of irrigation.
On the Plain, project investment for drainage is not much less than for irrigation. Calculating on the basis of the project magnitude as roughly estimated by the relevant departments, the total investment for the East Route, including the main canal, pumping stations, Huang He crossing and conveyance systems in the irrigation districts, will be 10 to 12 × 10⁹ yuan for water transfer alone, of which two-thirds would be for the Hai-Luan Plain. Investment on the Middle Route would also be quite high, even though the water would flow by gravity and pumping equipment would not be used, because the diversion works would be more complex than on the East Route. A new channel would have to be dug virtually the entire route from the Danjiangkou Reservoir to Beijing and would have to cross 167 rivers with catchment basins over 200 km² each, including the main rivers and tributaries of the Han Shui, Huai He, Huang He and Hai He. Investment on the main channel would be greater than on the East Route as would be the amount of land occupied. If surface irrigation canal systems and salinity control projects are included, total investment would be at least as high as on the East Route. In terms of the present economic strength of China, this project is an enormous investment item, requiring close to the country's total capital investment in water control over the past five years. Total agricultural income is still quite low, so it is impossible in the short term to rely on the accumulated funds of the rural collectives (people's communes, production brigades and production teams) to build the surface conveyance systems for the northward transfer project. Likewise, it is impossible to rely on state investment in the near future.
On the one hand, a certain ratio must be maintained between capital investment in water control and in other sectors; in addition, a certain ratio must be maintained between capital investments in water control between the Hai-Luan Plain and other parts of the country. Clearly, northward water transfer cannot occupy too much of the state's investment. Second, the returns to investment are not striking. The northward transfer of Chang Jiang water is mainly to serve agriculture, which is at present chiefly under collective ownership. The operating expenses of the transfer project must be recompensed in the form of water fees paid to higher levels. These payments would be obtained from their agricultural and subsidiary produce by beneficiary units such as communes and brigades. Even though agricultural output may require irrigation by the water of the Chang Jiang, if the water fees are too high, higher yields will not elicit higher income and the collective units will not be able to use the water. The operating costs of diverting the water of the Chang Jiang to the northern bank of the Huang He will be 0.020 to 0.025 yuan/m³. North of the Huang He, the average additional cost of transporting water to the consumer, including 30 per cent loss due to evaporation and seepage from the reservoirs and canals, will be an additional 0.020 to 0.025 yuan/m³, making a total of 0.04 to 0.05 yuan/m³. The cost of water transfer increases if we also include deductions from profits of enterprises built with bank loans. This estimate yields a rather high water fee, over 10 times greater than the price of water paid at present in the surface water irrigation districts of the Hai-Luan Plain. The existing state-set prices for agricultural produce and by-products are relatively low. This would make it very difficult to have a net increase in income from irrigating with interbasin water. 
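The fee arithmetic in this passage can be laid out as a short worked example. The per-cubic-metre figures are the passage's own; the implied ceiling on the current price is my inference from the text's "over 10 times" comparison.

```python
# Water-fee arithmetic from the passage, in yuan per m^3. The "additional"
# cost north of the Huang He already folds in the ~30% conveyance loss.
cost_to_huanghe = (0.020, 0.025)   # Chang Jiang water to the Huang He north bank
onward_cost     = (0.020, 0.025)   # onward delivery north of the river, losses included

total = tuple(round(a + b, 3) for a, b in zip(cost_to_huanghe, onward_cost))
print(total)  # (0.04, 0.05), the range given in the text

# The text calls this "over 10 times" the present fee in surface-water
# irrigation districts, implying a current price below roughly 0.004 yuan/m^3.
implied_current_ceiling = total[0] / 10
print(implied_current_ceiling)
```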
Certainly water transfer is necessary to alleviate the shortage on the Hai-Luan Plain, but if the project is too large and the costs too high, the water so diverted will not necessarily find a market. Third, there is a significant danger of secondary salinization. The success or failure of the proposed transfers depends on whether they aggravate the danger of salinization. For a time in 1958, drought was fought by damming rivers, building large numbers of plains reservoirs and diverting the water of the Huang He. This resulted in widespread secondary salinization. Since then, as the result of many years of treatment, the saline area has been greatly reduced. To date, however, the situation remains quite unstable and in a state of fragile equilibrium. The threat of salinization would become stronger with a future mass transfer of Chang Jiang water and the utilization of lowlands for storage. Of course, secondary salinization is not an inevitable consequence of water diversion for irrigation. Theoretically, it can be averted by combining drainage with irrigation and using both wells and canals so as to keep the water table below the critical level. But at present there are a number of problems with this in practice, such as the lack of complete conveyance systems to the fields, inadequate management and the non-implementation of water control policies. Some of these problems involve the economic capabilities of the state and some await a reform of the management institutions of society. The transfer of Chang Jiang water would involve some of the most complex management problems of any irrigation project in the world. In addition to requiring a series of measures to prevent salinization on the project itself and a set of effective management systems for water sources, the mass diversion has no assurance of being realized until the comprehensive control of flooding, drought and salinization on the plain in general has reached a certain level.
Therefore, since the northward transfer of Chang Jiang water is both necessary and feasible, scientific research, surveying and designing should continue to be carried out actively so that the project can be realized smoothly as soon as China's economic conditions have matured sufficiently. On the other hand, because of the enormous magnitude of the project and the difficulty of carrying it out in the near future, the problem of water shortage on the Hai-Luan Plain must be solved by tapping local resources and making the best use of them.

THOROUGHLY AND RATIONALLY UTILIZE LOCAL WATER SOURCES

Although the potential of the water resources of the Hai-Luan Plain, including the Hai, Luan and Huang He, is not great in a dry year, in normal years several km³ are still untapped. While the conditions for their exploitation are not favourable, per-unit capital investment and operating costs are not likely to be greater than for interbasin water transfer. Moreover, the scope of the projects is smaller and their implementation easier than diversion from the Chang Jiang. In particular, some projects can yield additional benefits in preventing flood, excess surface water and salinization. Priority should be given to exploiting the water resources of normal years, along the following lines.

Fully Intercept Mountain Runoff

Although reservoirs control 60 per cent of the mountain inflow area, this is not enough. For example, in Hebei Province the runoff from the mountains averages 20.95 km³/annum, yet the total capacity of existing reservoirs is only 9.37 km³. Because precipitation is concentrated and varies greatly from year to year, and meteorological forecasts are not accurate, there is a trade-off between flood control, the primary purpose of the reservoirs, and water storage.

Actively Promote Underground Storage

Water produced on the Hai-Luan Plain constitutes about one-third of the drainage into the sea in the Hai-Luan basin. This water source has not been effectively used so far.
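The interception gap cited for Hebei can be expressed as a simple ratio. This is illustrative only; both figures come from the passage.

```python
# Hebei: average mountain runoff vs. total existing reservoir capacity (km^3).
runoff_km3 = 20.95   # average annual runoff from the mountains (from the text)
capacity_km3 = 9.37  # total capacity of existing reservoirs (from the text)

coverage = capacity_km3 / runoff_km3
print(f"Reservoir capacity covers about {coverage:.0%} of average annual runoff")  # about 45%
```

Less than half of the average runoff can be held back even before the flood-control trade-off is considered, which is the sense in which interception "is not enough".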
Utilizing the aquifers of the plain for storage is one means of regulating the water. In actuality, pumping water from shallow wells for irrigation may lower the phreatic water table, vacating the pores of the soil and sand layers so that they may absorb rain and river water during the flood season. This method of storage can not only increase the utilizable water sources but also help prevent flooding and salinization. Originally the plain in front of the mountains in Hebei was subject to frequent flooding due to rainfall, but in the past dozen or so years the development of well irrigation has brought about a major transformation in the runoff discharge situation and greatly reduced the area of flooding due to rain. In general, the phreatic water table has fallen from 1.5-2.5 m to about 8 m and the drainage standard of ditches and canals has increased from a 5-year event to a 10 to 20 year event. In addition, underground water storage can also cut losses through evaporation. It is easy to see the desirability of this form of storage (see Wu Chen and Wu Jinxiang, Chapter 21).

Build Plains Reservoirs Suited to Local Conditions

In central Hebei there are over 40 major lowlands each exceeding 10,000 mou (667 ha) in area. In most of the counties of the Tianjin-Cangzhou-Baoding plain, 20 to 30 per cent of the cultivated area is made up of lowlands prone to flooding. In recent years an average of more than 70,000 ha (1 × 10⁶ mou) of arable land in Hebei have been inundated each year. Most of this area consists of lowlands in the middle and lower reaches of the various rivers. Grain yields are extremely low in some lowlands seriously affected by flooding and salinization. Because of the difficulties of increasing yields in the lowlands, it would be more straightforward to use them to store water.
For example, the Baiyang Dian collects the inflow of the Daqing He drainage system and is closely linked with the Dong Dian and Wen'an Wa, controlling a large portion of the various drainage basins and, through its storage, very effectively reducing the discharge of those river systems into the sea. If the necessary projects are built in all the lowlands of the Hai-Luan Plain, taking into account their differing conditions, and reservoir construction in the mountainous upper reaches is combined with groundwater storage in the plains, thereby intercepting the runoff at each level, it may be possible to utilize the water sources of the plain to their fullest extent. Problems of salinization and swampiness can be overcome when lowlands are used as plains reservoirs for storage, provided that appropriate engineering measures are adopted (see Yu Fenglan and Wang Wenkai, Chapter 20).

Appropriately Divert the Huang He

In recent years the average annual discharge of the Huang He into the sea has been approximately 48.6 km³. One might expect this amount to decline gradually as a result of increases in water use in the upper and middle reaches, but even in the extraordinarily dry period of 1970-1977 the annual discharge still reached 31.0 km³; 14.3 km³ flowed into the sea during the non-irrigation period (October to March); and 6.1 km³ was discharged in the driest year. It is rather difficult to divert large amounts of water from the Huang He in the upper and middle reaches because most of the cultivated land there has relatively poor conditions for diversion. Following the completion of the Longyang Gorge and other reservoirs in the upper reaches, the low-water flow will increase in the lower reaches. The construction of the Xiaolangdi Reservoir would be even more favourable to Huang He diversion.
It is estimated that in the coming decade the annual discharge into the sea of the Huang He will still be 30 km³, indicating that it will be possible to increase diversions by several km³, but the diversion must be done appropriately. Silt deposition occurs in the lower reaches mostly during the non-flood season, when the rate of flow is low and the silt-carrying capacity small. Bed scouring occurs primarily during the flood season, when the flow is high and the silt-carrying capacity great. This has positive implications for diversion during the non-flood season, when water use is high. If all the water of the Huang He is diverted then, the small flow which produces silt deposition would not pass through, and diversion would be beneficial to river control. When the water is not consumed entirely, the remaining portion can be adjusted by making concentrated discharges into the sea from artificial flood peaks released by the Sanmen Gorge Reservoir, precluding bed silt deposition in the lower reaches. Of course, there would still be a considerable amount of silt to take care of, but this northward conveyance project would be much simpler than diverting Chang Jiang water across the Huang He. Diversion expenses, including silt control, would not necessarily be higher than for the transfer of Chang Jiang water, and the period of construction would be much shorter.

Use Surface and Underground Reservoirs Jointly and Manage All Water Sources in a Unified Manner

If we are to solve the problem of water shortage on the Hai-Luan Plain, all of the above measures, which focus on opening up new sources, must be supplemented by the joint operation of surface and underground reservoir storage in accordance with the principle of mutual benefit. We must carry out unified management of all water sources, beginning by requiring that projects be constructed with sound storage, irrigation and drainage systems.
In the plain, if there is no storage there can be no irrigation, and storage is impossible without drainage. With a sound water system, imbalances in water sources between areas and between surface and subsurface can be artificially regulated, controlled and supplemented through management carried out in accordance with the requirements of ecological balance and the development of production. This would turn harmful water into helpful water and make optimal use of water sources. This type of unified management requires the establishment and perfection of institutions governing water source management. It is first of all necessary to utilize the shallow groundwater and effectively control the water table. Then irrigation must be done from the plains and mountain reservoirs according to the level of the water table and the severity of the drought. Only in an extraordinary drought should the deep groundwater reserves be utilized, and only for short periods. Water should be stored in major flood periods in the following order: first the mountain and plains reservoirs, and then the underground reservoirs. In addition to the water they absorb directly from rainfall, shallow aquifer reservoirs may, under proper conditions, be recharged from the mountain and plains reservoirs by providing water to the canals in excess of the water duty. This would allow the water table to rise continuously until it reaches the critical level at the end of the flood period.
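The dispatch order prescribed here can be sketched as a simple priority rule. This is my own schematic construction, not an algorithm from the source; the function name and conditions merely restate the order given in the text.

```python
# Drawing order for irrigation: shallow groundwater first, then surface
# reservoirs, deep aquifers only in extraordinary drought and only briefly.

def irrigation_source(shallow_available: bool, surface_available: bool,
                      extraordinary_drought: bool) -> str:
    if shallow_available:
        return "shallow groundwater"
    if surface_available:
        return "mountain and plains reservoirs"
    if extraordinary_drought:
        return "deep groundwater (short period only)"
    return "defer irrigation"

# Storage in major floods runs the other way: surface reservoirs first,
# then recharge of the underground reservoirs up to the critical level.
flood_storage_order = ["mountain and plains reservoirs", "underground reservoirs"]

print(irrigation_source(False, True, False))  # mountain and plains reservoirs
```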
National Socialism, or Nazism, was the ideology of the National Socialist German Workers' Party (NSDAP), which was founded in 1919 and ruled Germany from 1933 to 1945 under its leader Adolf Hitler. It was based on the following pillars:
- leadership principle: power is concentrated in the hands of an individual who de facto controls the functioning of the entire state;
- totalitarian state: the interests of individuals are subordinate to the interests of society, and the state controls everything, including the private sphere of the individual;
- command economy: strategic businesses are controlled by the state, business activities are allowed only to a limited degree, and there are close connections between the economic elites and the political/party elites and the leader. The economy is entirely subordinate to the interests of the state;
- nationalism: the highest value is membership of the nation, which is represented mainly by individuals of the same race, culture and language, and which strives to seize living space for that nation at the expense of other groups of inhabitants;
- biological racism and anti-Semitism: the conviction that people are not equal, expressed in the justification of various forms of discrimination and escalating to take the form of planned pogroms and genocide. A feature of historical (Hitler's) National Socialism is hatred of the Jews, escalating into a plan to eliminate all the Jews of Europe.

Neo-Nazism is a modern ideology that draws on the ideas and traditions of National Socialism. It aims to revive it and to reintroduce it as a desirable political system. Efforts to promote this ideology are made by political parties and unofficial political movements and groups, some of which pose a major security threat to individuals, groups of people, and the state itself.
To achieve their goals Neo-Nazi groups use methods of targeted intimidation of their opponents and tools of political terrorism (the attack in Oklahoma City, the attacks by a group called the Order). The modern Neo-Nazi movement is an international movement that overcomes some historical discrepancies and ambivalences by attempting to assert white supremacism and the defence of the Euro-American space against the influence of non-indigenous cultures and inhabitants. A part of the ideology is racism, which in the Czech Republic is directed mainly against Roma, foreigners, and Jews. Neo-Nazism also tries to deny the Holocaust and believes in a Jewish conspiracy, sometimes referred to as Z.O.G. (Zionist Occupation Government). The movement uses various forms of propaganda to try to appeal to young people: music, the Internet, the creation of a unique image. Some Neo-Nazi groups try to get into politics through political parties. Neo-Nazis are at times equated with skinheads, a youth subculture which emerged in the 1960s in the UK as a racially diverse street movement united around the unique musical style of ska, a way of spending free time, and generational revolt. In the 1970s part of the movement adopted a racist, anti-Semitic, and Neo-Nazi profile. In the Czech Republic, skinheads became synonymous with the racist branch of the movement, mainly owing to the influence of the music group Orlík. This band was one of the first to introduce the concept of skinheads, which rapidly became racist. At present the Czech Neo-Nazi movement is moving away from its skinhead image (bomber jackets, heavy boots, shaved heads), and the prevailing image draws on Neo-Nazi clothing brands (Thor Steinar, Praetorian, Grassel) or the Black Bloc style that is typical mainly of supporters of Autonomous Nationalism. Autonomous Nationalism is a new concept on the European Neo-Nazi scene.
It began forming in 2002 in Germany, from where it found its way to the Czech Republic, the Netherlands, and Belgium. It does not differ from Neo-Nazism in terms of its ideological principles; what is different is its manner of appealing to the public, its image, and the informal structure of the movement. Groups of Autonomous Nationalists work independently of each other, thus preventing security forces from uncovering them and enhancing their own operating capacity. Their rhetoric in some respects resembles that of left-wing movements: coming out against capitalism, emphasising social issues, protesting against globalisation. The image also resembles that of left-wing autonomists: black jackets, hoods and trousers, masked faces. This Black Bloc style provides supporters of Autonomous Nationalism with anonymity at public demonstrations, prevents police and political opponents from identifying them, and makes it harder to take action against these marching groups. Autonomous Nationalists are also currently changing their musical style and the image of their posters and websites. In the Czech Republic the boundary between Autonomous Nationalists and other Neo-Nazi groups is not clear, and it is likely that among new supporters of the Neo-Nazi movement the popularity of Autonomous Nationalism, as a young and progressive movement, is going to grow. Fascism was a synthesis of organic nationalism (biological determinism, nationalism as a political ideology) and anti-Marxist (syndicalist) socialism, a revolutionary movement resting on the rejection of liberalism, democracy, and Marxism. In the concept of Fascism, man is a creature of society; he only really exists if he is shaped by society. Fascism is a totalitarian ideology in the sense that all man's actions are subordinate to the state, and nothing outside the state can have any value. Neo-Fascism draws on the tradition of Italian Fascism in particular, or that of similar regimes in other countries.
Neo- and Post-Fascists try to distance themselves from Nazism, condemning exclusive nationalism, racism, and anti-Semitism. Thanks to their emphasis on conservative values they find common ground with radical religious groups (Clerical Fascists).

The development of the Neo-Nazi movement

The Neo-Nazi branch definitively formed itself out of the skinhead movement at the start of the 1980s. The singer of the band Skrewdriver, Ian Stuart, together with other Neo-Nazi bands, founded the organisation Blood and Honour (the battle motto of the Hitler Youth), which served as a platform for organising Neo-Nazi concerts and publishing white-power musical recordings. After Stuart's death, an organisation was founded called Combat 18 (18 standing for "A.H.", from the positions of the letters in the alphabet, a code for the initials of Adolf Hitler). These organisations did much to popularise Neo-Nazism and contributed to its spread from Britain to Europe and America. In many countries of the world countless large and small organisations have formed, and hundreds of racist bands and militant Neo-Nazi movements have emerged. Music and the Internet have together been the most important channels of communication, influencing and shaping young Neo-Nazis and hooligans, and serving to transmit propaganda, mediate meetings, and procure financial resources. Football stadiums are also a place where young people encounter older Neo-Nazis and hooligans and imitate their behaviour and style of dress. In most countries concerts given by racist bands take place in secret, concealed as private events. The lyrics of these bands' songs directly encourage racially and ideologically motivated violence, express hatred, celebrate representatives of the Third Reich, and so on. Norse mythology also occupies an important place in their lyrics.
Musical recordings, videos, and print materials are distributed through Internet servers in countries with more liberal legislation governing freedom of speech, such as Denmark and the United States. Neo-Nazis continuously commit racially and ideologically motivated violence. It is the strength and power derived from aggression that appeals to many young people. They strengthen their self-confidence even further by carrying out night-time assaults on heavily outnumbered and usually randomly selected targets that fit their category of undesirables (this can include dark-skinned citizens, political opponents, or representatives of various subcultures like punk, non-racist skinheads, hip hop, hardcore and skateboarding, and also the homeless, homosexuals, Jews, and anyone else who objects to their words or their physical violence). The Neo-Nazi branch of the skinhead movement established itself in the Czech Republic not long after the Velvet Revolution. Racism was popularised among young people by Orlík and Bráník, two bands that published their recordings officially and thus prepared the ground for real Neo-Nazi organisations and opinions. The development of a militant Neo-Nazi movement was also significantly influenced by the presence of latent or sometimes even overt racism in Czech society. At the start of the 1990s racist skinheads were often viewed as 'decent boys, trying to maintain order' and justifiably attacking 'thieves and noncompliant Gypsies'. In the 1990s two Neo-Nazi organisations operated in the Czech Republic: first Bohemia Hammer Skins, and later a Czech branch of Blood and Honour, the international Neo-Nazi organisation founded in the UK. Amidst little interest from the state authorities or the media, these organisations created distribution networks for the spread of racist and Neo-Nazi music, Nazi symbols, souvenir objects and clothing.
Disguised as private events, Neo-Nazi meetings and concerts were organised at which Nazism was openly propagated, Neo-Nazi materials were distributed, and international contacts were made. According to the civic association Tolerance and Civil Society, more than two dozen people lost their lives to racist violence during the 1990s, and to date (09/2009) the figure is more than 30 people killed for reasons of race, political conviction, nationality, or related reasons. The inertia shown by the police and the courts temporarily changed after an international scandal in 1997, when Neo-Nazis murdered a Sudanese student named Abdelradi. The police carried out a number of raids and arrested top activists of the Neo-Nazi movement, which in practical terms shut down the Czech branch of Blood and Honour, which had carried on the earlier activities of Bohemia Hammer Skins. Out of the remnants of Prague's Blood and Honour there gradually emerged a new, unregistered group called National Resistance [Národní odpor], whose members, alongside organising concerts and demonstrations, have altered their tactics and are striving to get into the mainstream political scene. Their non-violent public image is intended to win enough support for various disparate groups (as well as the aforementioned National Resistance, the National Alliance and the Patriotic Republican Party) and to establish a political party which, through populist and moderate rhetoric, should enable Neo-Nazis to enter communal and parliamentary politics. The gradual formation of coalitions and merging of smaller groups gave rise to the National-Social Bloc Party, which attempted to take part in the 2002 parliamentary elections. The Ministry of the Interior did not permit them this name and the party registered instead as the Right Alternative.
Their attempt to establish themselves on the political scene ended in failure: the Neo-Nazis failed to significantly change the style and rhetoric of their activities, and were unable to articulate serious social issues in an intelligible way that would appeal to the disgruntled strata of the population. The only participants at public events organised by the Right Alternative were racist skinheads. After some shake-ups in the leadership, the party was ultimately unable to obtain the financial resources to pay the election fee, and in the end did not take part in the 2002 elections. Another blow that paralysed the Neo-Nazi movement for several years was a police raid of Patriots directed against the organisers of Neo-Nazi concerts. The Czech Neo-Nazi movement thus went into a temporary slump in 2002-2004. Activities renewed in late 2004 and early 2005. National Resistance works on the basis of 'leaderless resistance'. This concept involves decentralising the movement into local groups which are linked in a cooperative network but have no central leadership. Through this organisational structure the Neo-Nazi movement prevents police infiltration and achieves greater mobilising power, and until around 2007 it was one of the driving forces in the Neo-Nazi scene. It is directed at recruiting new members, organising white-power music concerts, and distributing propaganda through the Internet (www.odpor.org). At the end of 2004, National Resistance was followed onto the political scene by a newly founded group, National Corporativism, which introduced some new themes that border on nationalism, Neo-Fascism and Neo-Nazism, and which makes no secret of its political ambitions. The membership base and the base of supporters of National Corporativism overlap with other extreme right-wing groups, so it forms a kind of bridge between a legal political party (the Workers' Party) and an openly Neo-Nazi group (National Resistance), from which it draws the base ranks of its personnel.
National Corporativism was gradually consumed by internal conflicts, and the dominant tendency was towards forming links between National Resistance and the newly founded Autonomous Nationalists. In April 2008 National Corporativism shut down, and its leaders encouraged supporters to join the Workers’ Party.

The Autonomous Nationalists, who in 2007 took over the activities of the Kladno Nationalists (founded in 2005), are for the time being the youngest Neo-Nazi group active in the Czech Republic. They represent a generation of new Neo-Nazi activists focused primarily on image and political propaganda. Their ideology, and their concept of free, independent cells that reject central leadership, were both adopted from National Socialism. In 2008 the Autonomous Nationalists had at least ten active regional cells, and were directly responsible (together with National Resistance and the Workers’ Party) for the escalation of racially motivated verbal and physical aggression targeting, in particular, the Roma population in the Czech Republic.

A typical feature of the new generation of Czech Neo-Nazis is their effort to establish international contacts. Cooperation with Slovak Neo-Fascists has traditionally been maintained ever since Czechs and Slovaks were members of the same federal state. However, in the past two years efforts to establish regular contacts with German Neo-Nazis – the unregistered Autonomous Nationalists, Free/National Resistance, and the NPD (Nationaldemokratische Partei Deutschlands), a legal political party – have been much more successful. Neo-Nazis from both countries regularly support each other at their public gatherings, and Czech activists have adopted a number of new methods from their German counterparts. The Czech Neo-Nazi scene notably copies German examples, whether this involves the concept of Autonomous Nationalism or the involvement of unregistered and, not unusually, militant Neo-Nazi groups and political parties.
Neo-Nazis try to overcome the historical conflict of the Nazi occupation of the Czech lands by advancing the concepts of a white Europe and a Europe of nations, according to which the European geographic space should be opened up for the original white population, and non-indigenous minorities should be assimilated or removed.

National Resistance and the Autonomous Nationalists succeeded within two years in making a victorious return to the political scene, thanks to important reciprocal cooperation with a legal political entity, the Workers’ Party, which was founded in 2003 by members of the Republican Youth (a ‘training ground’ for Sládek’s Republican Party, the SPR-RSČ), including people in contact with National Resistance. This political party was publicly unremarkable until 2007, when it formed coalitions with other extreme right-wing entities. By consciously linking itself with National Resistance and the Autonomous Nationalists the Workers’ Party gained numerous supporters, voters, and media attention, which paid off in the regional elections in 2008: for the first time ever, it surpassed the magic threshold of 1% in some regions. At this time the party was already acting outside the boundaries of legality, and its verbal attacks on Roma, foreigners, Jews, and homosexuals were taking the form of physical violence occurring after public gatherings of the party. Under pressure from public opinion, the government submitted a proposal to dissolve the Workers’ Party. In March 2009 the Supreme Administrative Court stated that the government’s four-page proposal did not adequately substantiate or prove that this political party was a threat to the basic values of the democratic rule of law.
The Workers’ Party used the favourable verdict to increase its visibility, which it tried to use to its advantage in the elections to the European Parliament, where surpassing the 1% limit would mean that a state financial subsidy of 750,000 would go into the coffers of a party that describes itself as a party of National Socialists.

The increase in public activity by the extreme right in 2007-2009 is the result of the long-term underestimation of the situation by the state administration and the police. Their lack of interest in uncovering violent and organised crime gave Neo-Nazis a feeling of impunity and the conviction that their struggle on behalf of the white race, directed primarily against the Roma minority, foreigners, the Jewish community, and anti-racist activists, is generally legitimate. The growth in confidence of Czech Neo-Nazis culminated in organised pogroms against Roma living in socially excluded localities, including an arson attack against a Roma family in Northern Moravia. The Roma reacted to this security situation, and to the fact that the state administration was unable to ensure their safety, with a new wave of emigration, headed mainly to Canada, where they applied for asylum. In the summer of 2009, owing to the growth in the number of applications for asylum, Canada reinstated visa requirements for all Czech citizens.

The ideology of Neo-Nazis

Members of Neo-Nazi groups regularly read Nazi print materials and attend ideological meetings, racist concerts, and political demonstrations. Neo-Nazis believe in the ‘biological supremacy’ of the white or ‘Aryan’ race. They divide people into a hierarchy of races, with Germans at the top as the ‘creators of culture’, and with Jews at the bottom end as the alleged destroyers of culture.
Czech Neo-Nazis are not bothered by the fact that, according to the racist theories adopted from Nazi Germany, Slavonic peoples are also considered to be an ‘inferior and subservient race’. While frequently attacking Roma, they overlook the fact that according to their own theories the Roma belong to the Indo-European – Aryan – race. In the view of Neo-Nazis the mixing of races is a major offence. Because every society is to a large degree multicultural – made up of members of different ethnic or otherwise defined groups – even the current state of society is unacceptable to Neo-Nazis. For this reason, in the extreme case they want to incite a ‘holy racial war’ or a ‘white revolution’ and by means of violence ensure the rule of the ‘white race’. In more moderate form they present society with numerous more palatable solutions, which nonetheless have a common basis in biological Darwinism, racism, and xenophobia.

Racial hatred is usually linked to xenophobia, targeting anyone who is different in any way. Xenophobia tends to be manifested as aggression against a specific ethnic group – in the Czech Republic this is mainly the Roma. Following the model of the German Nazis, supporters of the extreme right direct their irrational hate against the Jews. There is a very widespread, paranoid belief in a worldwide Jewish conspiracy whose objective is to control the world and destroy Aryan culture. For this reason Neo-Nazis are obsessed with ‘uncovering Jewish plots’ and they explain most world events in terms of this ‘theory of a Jewish-Masonic conspiracy’. One manifestation of this conspiracy, according to Neo-Nazis, is the so-called Auschwitz Lie – Jews allegedly made up the entire Holocaust, the gas chambers, and the mass exterminations, in order to hold Germany and the world public to ransom.
Denials of the Holocaust are always motivated by irrational anti-Semitism and latent or overt racism, but Neo-Nazis try to present their view as though it were ‘serious historical research’. Extreme right organisations maintain lists of ‘Jews and their minions’, and it is significant that these lists tend to contain the names of people and organisations actively engaged in combating racism, Nazism, xenophobia, and intolerance. ‘Jewish origin’ is not essential. Neo-Nazis subject ‘undesirables’ to verbal assaults, usually abounding in vulgarisms.

Fully in the spirit of racism and intolerance, Neo-Nazis are opposed to migration, which they blame for the deterioration of the economic situation. Their hateful outbursts against asylum-seekers and foreigners unfortunately resonate with the wider strata of the xenophobically inclined public. Another target of hatred, verbal assaults, or worse is the so-called ‘trash’ of society, which Neo-Nazis define as drug addicts, homosexuals, and the mentally and physically disabled.

Neo-Nazis summarise their ideology in short, easy-to-remember slogans, which they encode to keep them covert. The most typical slogan is ‘14/88’: ‘14’ signifies 14 words – ‘We must secure the existence of our race and a future for white children’ (in the Czech translation – ‘My musíme chránit existenci naší rasy a budoucnost bílých dětí’ – the reference is sometimes to 10 words); ‘88’ is the code for the Nazi greeting ‘Heil Hitler’. It is significant that Neo-Nazis – at least in public – do not call things by their proper names: they call racism ‘patriotism’, xenophobia ‘the longing for the wellbeing of the nation’, irrational aggression against randomly selected individuals ‘keeping order’, and stoking racial hatred ‘spreading national awareness’. This allows them to appeal to a portion of society.

A typical Neo-Nazi ...

There is no such thing as a typical Neo-Nazi.
The Neo-Nazi subculture comprises a group of people who differ significantly among themselves, but are united by their effort to assert the ideas of National Socialism, inclinations towards racism and anti-Semitism, not unusually an inclination for violence, but also a heightened interest in history, politics, and public affairs. The pathway to a local extreme right group usually leads through personal or family contacts, friends, or contacts made at school. Recently, likely owing to the influence of the new media, the age at which young people first come into contact with the Neo-Nazi movement is falling towards the first or second grade of high school.

It would be a mistake to underestimate the Neo-Nazi subculture by regarding all Neo-Nazis as socially immature, emotionally deprived, primitive individuals. For several years the movement’s members have been successful at finding work in state administration and in the police force; the movement is skilled at manipulating every form of mass media to its advantage; it is gaining political support from the non-Neo-Nazi public; it is financially independent; and it is succeeding at appealing again and again to emerging generations by providing clear responses to questions and solutions to everyday problems. Among the supporters of the Czech Neo-Nazi movement we find side by side people with vocational education, university students, law school graduates, and graduates of universities abroad. Some Neo-Nazis are unemployed, but many of them are successful businesspeople, and they manage to combine their business lives with work for the extreme right. Some Neo-Nazis make no secret of their political ambitions and are trying to emerge as a real political opposition.

Women are playing an increasingly important role in the Neo-Nazi movement. In the past two years they have gone from being the inconspicuous partners of their male counterparts to become genuine political activists.
They thus represent another significant qualitative change within the Neo-Nazi movement. With the gradual generational turnover, an increasing number of children are being born into families which do not conceal their sympathies for the Neo-Nazi movement, thus influencing their future perception of intercultural coexistence in the border regions of the Czech Republic.

In the course of approximately two decades of public activity by racist and Neo-Nazi groups in the Czech Republic, several new generations of political activists have emerged. The founders of the movement are now almost 40 years old; some of them continue to be active, and some support their younger colleagues in other ways. The new generation gives the extreme right a new boost with their language and IT skills, and they form an essential manpower base, a driving force, and to some extent even a group prepared to commit physical violence on behalf of the ideas of National Socialism. It cannot be said that Neo-Nazis ‘grow out of it’. It is possible that some supporters will leave a given extreme right group relatively soon, but there are dozens of activists who remain in the Neo-Nazi movement for more than five years, and there is an entire generation of people united in their adult lives by the idea of a racially pure and white Europe.

Sources

EU Agency for Fundamental Rights (Fundamental Rights Agency) – anthologies, publications, and comparative studies on racism, anti-Semitism, and intolerance. http://fra.europa.eu/fraWebsite/home/home_en.htm
Archive of the civic association Tolerance and Civil Society [Tolerance a občanská společnost]
Archive of the civic association In IUSTITIA
Černý, P. (2005). Politický extremismus a právo. Praha: Eurolex Bohemia.
Charvát, J. (2007). Současný politický extremismus. Praha: Portál.
Lee, M. (2004). Bestie se probouzí. Praha: BB art.
Mareš, M. (2006). Symboly používané extremisty na území ČR v současnosti. Praha: Ministerstvo vnitra ČR.
Mareš, M. (2003). Pravicový extremismus a radikalismus v ČR. Brno: Barrister & Principal.
International Network Against Cyber Hate. http://www.inach.net
Kulturbüro Sachsen e. V. (2009). Nebezpečné známosti. Pravicový extremismus v malém příhraničním styku. Drážďany: Kulturbüro Sachsen e. V. Available at: http://www.nebezpecne-znamosti.info/herausgeber.html
Southern Poverty Law Center (an organisation dealing with racism, hate violence, and the groups producing such violence in the United States). http://www.splcenter.org
Vejvodová, P. (2008). Autonomní nacionalismus. Rexter. Available at: http://www.rexter.cz/autonomni-nacionalismus/2008/11/01/
OF THE MAYFLOWER AND ITS PLACE IN THE LIFE OF TO-DAY

A. C. ADDISON
AUTHOR OF "OLD BOSTON: ITS PURITAN SONS AND PILGRIM SHRINES," ETC.
WITH NUMEROUS ORIGINAL ILLUSTRATIONS

BOSTON
L. C. PAGE & COMPANY

THE PILGRIM ROLL CALL — FATE AND FORTUNES OF THE FATHERS

On Fame's eternall beadroll worthie to be fyled.

There were men with hoary hair
Amidst that pilgrim band:
Why had they come to wither there,
Away from their childhood's land?
There was woman's fearless eye,
Lit by her deep love's truth;
There was manhood's brow serenely high,
And the fiery heart of youth.

So sings Mrs. Hemans in her famous poem "The Landing of the Pilgrim Fathers in New England." That devoted little Pilgrim band comprised, indeed, the Fathers and their families together, members of both sexes and of all ages. When the compact was signed in the Mayflower's cabin on November 21, 1620, while the vessel lay off Cape Cod, each man subscribing to it indicated those who accompanied him. There were forty-one signatories, and the total number of passengers was shown to be one hundred and two.

What became of them? What was their individual lot and fate subsequent to the landing on Plymouth Rock on December 26? For long, long years the record as regards the majority of them was lost to the world. Now, after much painstaking search, it has been found, bit by bit, and pieced together. And we have it here. It is a document full of human interest.

John Alden, the youngest man of the party, was hired as a cooper at Southampton, with the right to return to England or stay in New Plymouth. He preferred to stay, and married, in 1623, Priscilla Mullins, the "May-flower of Plymouth," the maiden who, as the legend goes, when he first went to plead Miles Standish's suit, witchingly asked, "Prithee, why don't you speak for yourself, John?" Alden was chosen as assistant in 1633, and served from 1634 to 1639 and from 1650 to 1686.
He was treasurer of the Colony from 1656 to 1659; was Deputy from Duxbury in 1641-42, and from 1645 to 1649; a member of the Council of War from 1653 to 1660 and 1675-76; and a soldier in Captain Miles Standish's company in 1643. He was the last survivor of the signers of the compact of November, 1620, dying September 12, 1687, aged eighty-four years.

Bartholomew Allerton, born in Holland in 1612, was in Plymouth in 1627, when he returned to England. He was a son of Isaac Allerton.

Isaac Allerton, a tailor of London, married at Leyden, November 4, 1611, Mary Norris from Newbury, Berkshire, England. He was a freeman of Leyden. His wife died February 25, 1621, at Plymouth. Allerton married Fear Brewster (his second wife), who died at Plymouth, December 12, 1634. By 1644 he had married Joanna (his third wife). He was an assistant in 1621 and 1634, and Deputy Governor. He was living in New Haven in 1642, later in New York, then returned to New Haven. He died in 1659.

John Allerton, a sailor, died before the Mayflower made her return voyage.

Mary Allerton, a daughter of Isaac, was born in 1616. She married Elder Thomas Cushman. She died in 1699, the last survivor of the Mayflower passengers. Remember Allerton was another daughter, living in Plymouth in 1627. Sarah Allerton, yet another daughter, married Moses Maverick of Salem.

Francis Billington, son of John and Eleanor, went out in 1620 with his parents. In 1634 he married the widow Christian (Penn) Eaton, by whom he had children. He removed before 1648 to Yarmouth. He was a member of the Plymouth military company in 1643. He died in Yarmouth after 1650.

John Billington was hanged in 1630 for the murder of John Newcomen.¹ His widow, Eleanor, who went over with him, married in 1638 Gregory Armstrong, who died in 1650, leaving no children by her. John Billington, a son of John and Eleanor, born in England, died at Plymouth soon after 1627.

¹ The murderer Billington, sad to relate, was one of those who signed the historic compact on board the Mayflower. He was tried, condemned to death, and executed by his brethren in accordance with their primitive criminal procedure. At first, trials in the little colony were conducted by the whole body of the townsmen, the Governor presiding. In 1623 trial by jury was established, and subsequently a regular code of laws was adopted. The capital offences were treason, murder, diabolical conversation, arson, rape, and unnatural crimes. Plymouth had only six sorts of capital crime, against thirty-one in England at the accession of James I, and of these six it actually punished only two, Billington's belonging to one of them. The Pilgrims used no barbarous punishments. Like all their contemporaries they used the stocks and the whipping-post, without perceiving that those punishments in public were barbarizing. They inflicted fines and forfeitures freely, without regard to the station or quality of the offenders. They never punished, or even committed, any person as a witch. Restrictive laws were early adopted as to spirituous drinks, and in 1667 cider was included. In 1638 the smoking of tobacco was forbidden out-of-doors within a mile of a dwelling-house or while at work in the fields; but unlike England and Massachusetts, Plymouth never had a law regulating apparel.

William Bradford, baptised in 1589 at Austerfield, Yorkshire, was a leading spirit in the Pilgrim movement from its inception to its absorption in the Union of the New England Colonies. We have seen how, on the death of John Carver, he became the second Governor of Plymouth Colony, and he five times filled that office, in 1621-33, 1635, 1637, 1639-44, and 1645-47, as well as serving several times as Deputy Governor and assistant. A patent was granted to him in 1629 by the Council of New England vesting the Colony in trust to him, his heirs, associates and assigns, confirming their title to a tract of land and conferring the power to frame a constitution and laws; but eleven years later he transferred this patent to the General Court, reserving only to himself the allotment conceded to him in the original division of land.

Bradford's rule as chief magistrate was marked by honesty and fair dealing, alike in his relations with the Indian tribes and his treatment of recalcitrant colonists. His word was respected and caused him to be trusted; his will was resolute in every emergency, and yet all knew that his clemency and charity might be counted on whenever it could be safely exercised. The Church was always dear to him: he enjoyed its faith and respected its institutions, and up to the hour of his death, on May 9, 1657, he confessed his delight in its teachings and simple services.

Governor Bradford was twice married, first, as we know, at Leyden in 1613 to Dorothy May, who was accidentally drowned in Cape Cod harbour on December 7, 1620; and again on August 14, 1623, to Alice Carpenter, widow of Edward Southworth. By his first wife he had one son, and by his second, two sons and a daughter. Jointly with Edward Winslow, Bradford wrote "A Diary of Occurrences during the First Year of the Colony," and this was published in England in 1622. He left many manuscripts, letters and chronicles, verses and dialogues, which are the principal authorities for the early history of the Colony; but the work by which he is best remembered is his manuscript "History of Plymouth Plantation," now happily, after being carried to England and lost to sight for years in the Fulham Palace Library, restored to the safe custody of the State of Massachusetts.

William Brewster more than any man was entitled to be called the Founder of the Pilgrim Church.
It originated in his house at Scrooby, where he was born in 1566, and he sacrificed everything for it. He was elder of the church at Leyden and Plymouth, and served it also as minister for some time after going out. Through troubles, trials, and adversity, he stood by the Plymouth flocks, and when his followers were in peril and perplexity, worn and almost hopeless through fear and suffering, he kept a stout heart and bade them be of good cheer. Bradford has borne touching testimony to the personal attributes of his friend, who, he tells us, was "qualified above many," and of whom he writes that "he was wise and discrete, and well-spoken, having a grave and deliberate utterance, of a very cheerful spirite, very sociable and pleasante among his friends, of an humble and modest mind, of a peaceable disposition, under-valewing himself and his own abilities and sometimes over-vallewing others, inoffensive and innocent in his life and conversation, which gained him ye love of those without, as well as those within."

Of William Brewster it has been truly said that until his death, on April 16, 1644, his hand was never lifted from Pilgrim history. He shaped the counsels of his colleagues, helped to mould their policy, safeguarded their liberties, and kept in check tendencies towards religious bigotry and oppression. He tolerated differences, but put down wrangling and dissension, and promoted to the best of his power the strength and purity of public and private life.

Mary Brewster, wife of William, who went out with him, died before 1627.

Love Brewster, son of Elder William, born in England, married (1634) Sarah, daughter of William Collier. He was a member of the Duxbury company in 1643, and died at Duxbury in 1650. Wrestling Brewster, son of Elder William, emigrated at the same time; he died a young man, unmarried.

Richard Britteridge died December 21, 1620, his being the first death after landing.
Peter Brown probably married the widow Martha Ford; he died in 1633.

William Button, a servant of Samuel Fuller, died on the voyage.

John Carver, first Governor of the Plymouth Colony, landed from the Mayflower with his wife, Catherine, and both died the following spring or summer. Carver was deacon in Holland. He left no descendants.

Robert Carter was a servant of William Mullins, and died during the first winter.

James Chilton died December 8, 1620, before the landing at Plymouth, and his wife succumbed shortly after. Their daughter Mary, tradition states, romantically if not truthfully, was the first to leap on shore. She married John Winslow, and had ten children.

Richard Clarke died soon after arrival.

Francis Cook died at Plymouth in 1663. John Cook, son of Francis Cook by his wife, Esther, shipped in the Mayflower with his father. He married Sarah, daughter of Richard Warren. On account of religious differences he removed to Dartmouth, of which he was one of the first purchasers. He became a Baptist minister there. He was also Deputy in 1666-68, 1673, and 1681-83-86. The father and son were both members of the Plymouth military company in 1643. John Cook died at Dartmouth after 1694.

Humility Cooper returned to England, and died there.

John Crackston died in 1621; his son, John, who went out with him, died in 1628.

Edward Dotey married Faith Clark, probably as second wife, and had nine children, some of whom moved to New Jersey, Long Island, and elsewhere. He was a purchaser of Dartmouth, but moved to Yarmouth, where he died August 23, 1655. He made the passage out as a servant to Stephen Hopkins, and was wild and headstrong in his youth, being a party to the first duel fought in New England.

Francis Eaton went over with his first wife, Sarah, and their son, Samuel. He married a second wife, and a third, Christian Penn, before 1627. He died in 1633. Samuel Eaton married, in 1661, Martha Billington.
In 1643 he was in the Plymouth military company, and was living at Duxbury in 1663. He removed to Middleboro, where he died about 1684.

Thomas English died the first winter. One Ely, a hired man, served his time and returned to England.

Moses Fletcher married at Leyden, in 1613, widow Sarah Dingby. He died during the first winter.

Edward Fuller shipped with his wife, Ann, and son, Samuel. The parents died the first season. Samuel Fuller, the son, married in 1635 Jane, daughter of the Reverend John Lothrop; he removed to Barnstable, where he died October 31, 1683, having many descendants. Dr. Samuel Fuller, brother of Edward, was the first physician; he married (1) Elsie Glascock, (2) Agnes Carpenter, (3) Bridget Lee; he died in 1633. His descendants of the name are through a son, Samuel, who settled in Middleboro.

Richard Gardiner, mariner, was at Plymouth in 1624, but soon disappeared.

John Goodman, unmarried, died the first winter. John Hooke died the first winter, as did also William Holbeck.

Giles Hopkins, son of Stephen, married in 1639 Catherine Wheldon; he moved to Yarmouth and afterwards to Eastham, and died about 1690. Stephen Hopkins went out with his second wife, Elizabeth, and Giles and Constance, children by a first wife. On the voyage a child was born to them, which they named Oceanus, but it died in 1621. He was an assistant, 1634-35, and died in 1644. His wife died between 1640 and 1644. Constance, daughter of Stephen, married Nicholas Snow. They settled at Eastham, from which he was a Deputy in 1648, and he died November 15, 1676; she died in October, 1677, having had twelve children. Damaris, a daughter, was born after their arrival and married Jacob Cooke.

John Howland married Elizabeth, daughter of John Tilley. He was a Deputy in 1641, 1645 to 1658, 1661, 1663, 1666-67, and 1670; assistant in 1634 and 1635; also a soldier in the Plymouth military company in 1643.
He died February 23, 1673, aged more than eighty years, and his widow died December 21, 1687, aged eighty years.

John Langemore died during the first winter.

William Latham about 1640 left for England, and afterwards went to the Bahamas, where he probably died.

Edward Leister went to Virginia.

Edmund Margeson, unmarried, died in 1621.

Christopher Martin and wife both died early; his death took place January 8, 1621.

Desire Minter returned to England, and there died.

Ellen More perished the first winter. Jasper More removed to Scituate, and his name is said to have become Mann. He died in Scituate in 1656; his brother died the first winter.

William Mullins shipped with his wife, son Joseph, and daughter Priscilla, who married John Alden. The father died February 21, 1621, and his wife during the same winter, as did also the son.

Solomon Power died December 24, 1620.

Degory Priest married in 1611, at Leyden, widow Sarah Vincent, a sister of Isaac Allerton; he died January 1, 1621.

John Rigdale went out with his wife, Alice, both dying the first winter.

Joseph Rogers went with his father, Thomas Rogers, who died in 1621. The son married, and lived at Eastham in 1655, dwelling first at Duxbury and Sandwich. He was a lieutenant, and died in 1678 at Eastham.

Harry Sampson settled at Duxbury, and married Ann Plummer in 1636. He was of the Duxbury military company in 1643, and died there in 1684.

George Soule was married to Mary Becket. He was in the military company of Duxbury, where he resided, and was the Deputy in 1645-46, and 1650-54. He was an original proprietor of Bridgewater and owner of land in Dartmouth and Middleboro; he died 1680, his wife in 1677.

Ellen Story died the first winter.

Miles Standish, that romantic figure in the Pilgrim history, did good service for the Colony, and practically settled the question whether the Anglo-Saxon or the native Indian was to predominate in New England.
Born in Lancashire about 1584, and belonging to the Duxbury branch of the Standish family, he obtained a lieutenant's commission in the English army and fought in the wars in The Netherlands against Spain. His taste for military adventure led to his joining the Pilgrims at Leyden, and when the Mayflower reached Cape Cod, he led the land exploring parties. Soon he was elected military captain of the Colony, and with a small force he protected the settlers against Indian incursions until the danger from that quarter was past. When they were made peaceably secure in their rights and possessions, and warlike exploits and adventures were at an end, Standish retired to his estate at Duxbury, on the north side of Plymouth Bay: but in peace, as in war, he was still devoted to the interests of the Colony, frequently acting as Governor's assistant from 1632 onward, becoming Deputy in 1644, and serving as treasurer between that year and 1649. His wife Rose, who sailed with him in the Mayflower, died January 29, 1621, but he married again, and had four sons and a daughter. He died on October 3, 1656, honoured by all the community among whom he dwelt, and his name and fame are perpetuated in history, in the poetry of Longfellow and Lowell, and by the monument which stands upon what was his estate at Duxbury, the lofty column on Captain's Hill, seen for miles both from sea and land.

Edward Thompson died December 4, 1620.

Edward Tilley and his wife Ann both died the first winter. John Tilley accompanied his wife and daughter Elizabeth; the parents died the first winter, but the daughter survived and married John Howland.

Thomas Tinker, with his wife and son, died the first winter.

John Turner had with him two sons, but the party succumbed to the hardships of the first season.

William Trevore entered as a sailor on the Mayflower, and returned to England on the Fortune in 1621.

William White went out with his wife Susanna, and son Resolved.
A son, Peregrine, was born to them in Provincetown Harbour, who has been distinguished as being the first child of the Pilgrims born after the arrival in the New World. This is his strongest claim, as his early life was rather disreputable, though his obituary, in 1704, allowed "he was much reformed in his last years." William, the father, died on February 21, 1621; his widow married, in the May following, Edward Winslow, who had recently lost his wife. Resolved White married (1) Judith, daughter of William Vassall; he lived at Scituate, Marshfield, and lastly Salem, where he married, (2) October 5, 1674, widow Abigail Lord, and died after 1680. He was a member of the Scituate military company in 1643.

Roger Wilder died the first winter, and Thomas Williams also died the first season.

Edward Winslow, an educated young English gentleman from Droitwich, joined the brethren at Leyden in 1617, and accompanying them to New England, was the third to sign the compact on board the Mayflower, Carver and Bradford signing before, and Brewster after him, then Isaac Allerton and Miles Standish. Winslow was one of the party sent to prospect along the coast. Before leaving Holland, he married at Leyden, in 1618, Elizabeth Barker, who went out with him, but died March 24, 1621, and as we have seen, he shortly afterwards married widow Susanna (Fuller) White. Winslow proved himself a man of exceptional ability and character, and gave the best years of his life to the service of the Colony. While on a mission to England in its interests in 1623, he published an account of the settlement and struggles of the Mayflower Pilgrims, under the title "Good News for New England, or a relation of things remarkable in that Plantation."
Later he wrote (and published in 1646) "Hypocrisie Unmasked; by a true relation of the proceedings of the Governor of Massachusetts against Samuel Groton [sic], a notorious Disturber of the Peace," which is chiefly remarkable for an appendix giving an account of the preparations in Leyden for removal to America, and the substance of John Robinson's address to the Pilgrims on their departure from Holland. Winslow was Governor of the Colony in 1633, 1636, and 1644, and at other times assistant. In 1634 he went to England again on colonial business, and before sailing accepted a commission for the Bay Colony which required him to appear before the King's Commissioners for Plantations. Here he was brought face to face with Archbishop Laud, who could not resist the opportunity of venting his wrath upon the representative of the Plymouth settlement, about whose sayings and doings he had been duly informed. Winslow was accused of taking part in Sunday services and of conducting civil marriages. He admitted the charges, and pleaded extenuating circumstances; but Laud was not to be appeased and committed the bold Separatist to the Fleet Prison, where he remained for seventeen weeks, when he was released and permitted to return to America, wounded in his conscience by the cruel wrong done him and impoverished by legal expenses.

In October, 1646, against the advice of his compatriots, Winslow undertook another mission to the old country, this time in connection with the federation of the New England Colonies, and, accepting service under Cromwell, sailed on an expedition to the West Indies, caught a fever, and died, and was buried at sea on May 8, 1655.

Gilbert Winslow, another subscriber to the compact in the Mayflower's cabin, returned subsequently to England and died in 1650.

Apart from the events of their after lives, the spirit which possessed the Mayflower Pilgrims and guided their leaders in exile is well expressed by Mrs.
Hemans when she says, in her stirring lines —

They sought a faith's pure shrine!
Ay, call it holy ground,
The soil where first they trod;
They have left unstained what there they found —
Freedom to worship God.
- 1 Definition
- 2 Why a cognitive tools approach?
- 3 Problems and challenges
- 4 Cognitive tools and the joint learning system
- 5 Typologies of cognitive tools
- 6 Tools
- 7 Links
- 8 References

1 Definition

- Cognitive tools refer to learning with technology (as opposed to learning through technology). Jonassen (1994) argues that “technologies, from the ecological perspective of Gibson (1979), afford the most meaningful thinking when used as tools”.
- Cognitive tools are generalizable computer tools that are intended to engage and facilitate cognitive processing. [...] Cognitive tools can be thought of as a set of tools that learners need in order to serve cognitive apprenticeships. [...] They scaffold the all-important processes of articulation and reflection, which are the foundations of knowledge construction. They (gag, can I say it?) empower the learners to think more meaningfully and to assume ownership of their knowledge, rather than reproducing the teacher's. The major problem, if we accept this conception of technologies, is what to do with all of the instructional designers... (Jonassen 1994).
- Cognitive tools help learners with complex cognitive learning activities and critical thinking. These tools are learner-controlled in the sense that learners construct their knowledge themselves using the tools rather than memorizing knowledge. In this perspective, computer systems are "partners" that stimulate learners or groups of learners to make maximum use of their cognitive potential.
- “Because of the interactive nature of technology and the power of its information-processing capabilities, Jonassen (1996) proposes that when students learn with technology, it becomes a "mindtool." He defines mindtools as "computer-based tools and learning environments that have been adapted or developed to function as intellectual partners with the learner in order to engage and facilitate critical thinking and higher-order learning" (p. 9).
Using commonly available software (databases, spreadsheets, electronic mail, multimedia, hypermedia, and others), learners employ technology to both construct and represent knowledge. This concept is similar to Pea's (1985) conception of a cognitive technology as "... any medium that helps transcend the limitations of the mind, such as memory, in activities of thinking, learning, and problem solving" (p. 168).” (Boethel and Dimock, 1999: 17).
- “Cognitive tools are technologies that learners interact and think with in knowledge construction, designed to bring their expertise to the performance as part of the joint learning system.” (Kim and Reeves, 2007: 224)

2 Why a cognitive tools approach?

According to Shim and Li (2006), Lajoie (1993, p. 261) summarized the ways in which cognitive tools can benefit learners:
- Support cognitive processes, such as memory and metacognitive processes
- Share the cognitive load by providing support for lower-level cognitive skills, so that resources are left over for higher-order thinking skills
- Allow learners to engage in cognitive activities that would be out of their reach otherwise
- Allow learners to generate and test hypotheses in the context of problem solving

Let's continue with a longer quotation from Reeves's (1999) keynote speech at Ed-Media 1999. This longer quotation (sorry) summarizes key features of the "cognitive tool approach" formulated in the late nineties: learner empowerment; project-orientated, authentic and "meaningful" learning; the computer as a partner; and a variety of tools.

3 Problems and challenges

(needs to be completed)

Use of cognitive tools often requires expertise that learners don't necessarily have. In addition, assessment of what is learned is often done in a different context, which is debatable if one adheres to the idea that cognitive tools are also professional tools, i.e. related to a practice that has to be learnt. On the other hand, some cognitive tools (e.g.
simulations) can have the effect that the learner just learns the tool (the video-game effect) and not something that he can transfer.

4 Cognitive tools and the joint learning system

Building on Salomon's (1991, 1993a, 1993c) concepts of distributed cognition, Kim and Reeves (2007:207) argue that “the learner, tool, and activity form a joint learning system, and the expertise in the world should be reflected not only in the tool but also in the learning activity within which learners make use of the tool.” Interestingly enough, a similar argument has been made by Rabardel in terms of instrumentation: “An activity consists of acting upon an object in order to realize a goal and give concrete form to a motive. Yet the relationship between the subject and the object is not direct. It involves mediation by a third party: the instrument [...] An instrument cannot be confounded with an artifact. An artifact only becomes an instrument through the subject's activity. In this light, while an instrument is clearly a mediator between the subject and the object, it is also made up of the subject and the artifact.” (Béguin & Rabardel, 2000, p. 175). In other words, instrumentation is related to action, i.e. how a technical object is used within an activity and how it affects cognitive schemas.

Activity theory, based on Soviet micro-sociology and psychology, also stresses the role of instruments within an activity system. Participants in an activity are portrayed as subjects interacting with objects and other subjects to achieve desired outcomes. Human interactions with each other and with objects of the environment are mediated through the use of tools, rules and division of labour.

Common to these approaches is the idea that human cognition relies on the (situated) environment. In this perspective cognition is distributed, although in various forms and to various degrees.
Kim and Reeves (2007:216) argue that “A cognitive activity usually reflects some aspects of all three cognitive distributions: social, symbolic, and physical. For example, brainstorming for ideas as a team exemplifies social distribution of cognition among people. Drawing a diagram on the board to visualize their discussed ideas reflects their dependence upon the symbolic and physical distribution.” Note that some researchers may not agree that symbol systems should be conceptualized as part of distributed cognition, since most cognitive activities rely on symbolic processes. On the other hand, technologies like computers or paper do allow for symbolic representations that would not be used without these media. This topic has been hotly debated in the media debate (initially Clark vs. Kozma).

Computer programs are both symbolic and physical tools, i.e. they both represent things and operate on them, and therefore extend our cognitive powers in various ways. These programs have affordances, i.e. properties upon which one can act. These may be intended by the designers or not, be perceptible or not, etc. In most cases, they require engagement from the user and various degrees of expertise. According to Kim and Reeves (2007:218):

5 Typologies of cognitive tools

5.1 Jonassen & Carr

According to Kim and Reeves (2007:226), Jonassen and Carr (2000) suggested the following classes of "mindtools":
- semantic organization tools (e.g., databases and concept mapping tools),
- dynamic modeling tools (e.g., spreadsheets and microworlds),
- visualization tools (e.g., MathLab and Geometry Tutor),
- knowledge construction tools (e.g., a multimedia authoring tool),
- socially shared cognitive tools (e.g., computer conferencing (forums) and computer-supported collaborative argumentation).

5.2 Lajoie and Derry, 1993a and 2000

Kim and Reeves (2007:226) noticed that across the two volumes of Computers as Cognitive Tools, the classification shifted to take into account new learning paradigms.
- Modelers (e.g., TAPS; Derry and Hawkes, 1993). "Modelers" is defined in terms of ITS research, i.e. whether the software models the student
- Nonmodelers (e.g., HyperAuthor; Lehrer, 1993),
- and the ones merging the two (e.g., DARN; Schauble et al., 1993)
- Tools supporting knowledge-building activities (e.g., SCI-WISE; White et al., 2000)
- Tools supporting new forms of knowledge representations (e.g., DNA; Shute et al., 2000).

5.3 Iiyoshi, Hannafin and Wang, 2005

Robertson, Elliot, and Robinson (2007) summarize the roles of cognitive tools, with examples and specific technologies demonstrating each role, in a table adapted from Iiyoshi et al. (2005):

I. Information seeking: these tools allow students to retrieve and identify information through learning situations that require the seeking of information.
II. Information presentation: these tools enable information to be presented in a meaningful and appropriate representation.
III. Knowledge organization: these tools support students by allowing them to establish relationships among pieces of information by structuring, restructuring, or manipulating them.
IV. Knowledge integration: such tools allow students to connect new information to prior knowledge, so that students build a larger array of information.

5.4 Jonassen 2006

Shim and Li (2006) summarize Jonassen's (2006) cognitive tools for teachers in a table that includes, among others, database management systems (DBMSs) and structured computer conferencing (email, bulletin board services, discussion boards).

From these various classifications, Daniel K. Schneider thinks that one could distinguish these broad categories of cognitive tools:
- Simple writing and communication tools (e.g.
all sorts of easy Web 2.0 applications like blogs, simple CMS tools and simple professional tools).
- Special-purpose drawing and writing tools (e.g. concept maps, Knowledge Forum), some specifically made for education, some not.
- Highly specialized professional tools like authoring environments or simulation tools.
- Tools that model some kind of behavior and let the learner freely interact with those worlds (simulations, microworlds). These tools are typically made for education.

This classification is not based on principles, but rather on what I feel are typical clusters of usage in schools. In our experience (need some more serious data here), the most popular tools are the ones that are both easy and familiar from other contexts and of use for non-educational purposes.

A more systematic and thoughtful classification scheme has been developed by Kim and Reeves (2007). The authors identify the following dimensions:
- Tool interactivity: delivery of information (e.g. multimedia presentations, forum messages); task offload or support (e.g. a calculator, error checker); support of an individual's cognitive activities. Note: the authors insist that in any case cognitive tools should be considered partners that interact with learners to create knowledge and require higher-order thinking from learners.
- Tool specificity, levels of expertise (according to Patel and Groen): general, domain-independent expertise (such as creativity); specific domain-dependent expertise (precise knowledge and processing strategies of a particular domain); generic domain-dependent expertise (general domain knowledge, applicable to various sub-domain problems).
- Tool specificity, structure of expertise: knowledge (e.g. facts or abstract rules); functions (e.g. information search, rule execution, decision support); representations (e.g. concrete (isomorphic), abstract (symbolic)).

6 Tools

This section should index related articles that refer to specific kinds of tools (not complete yet!)
Cognitive tools can be really simple, e.g. a word processor that allows a teacher to scaffold a student's activity planning process (one can write outlines, use the text as a mirror, etc.). Among teachers, paper-based tools like interactive notebooks or organizers are also very popular; see, for example, creations on Pinterest or Teachers Pay Teachers.

6.1 Forum + argumentation

- CSILE was a research system that is now commercialized as Knowledge Forum
- Fle3 is a free pedagogical platform that builds on the ideas of CSILE

6.2 Collaborative hypertexts

- This wiki is also used in teaching, e.g. students participate through writing activities. During the summer semester 2006, a few students participated in a course that was only offered once and featured only writing activities.

6.3 Tools for organizing ideas

6.4 Tools to organize writing activities

6.5 Professional tools

6.6 Simulation and microworld building

- See microworlds and simulation, and look at systems that are designed for students' end-user programming activities, e.g. AgentSheets or LEGO Mindstorms.
- Drawing tools that simulate something, e.g. Cabri Géomètre (geometry) or ASSIST (gravity)

7 Links

- Mindtool Resource Page. A good collection of links (papers, tools, etc.)
- elearning-reviews topic: Cognitive Tools
- Cognition and Technology, ED TEC 6444/ED PSY 6444, Joe Polman's course at the University of Missouri - St. Louis
- Design Rationale group at MIT. They build some interesting "sketching" applications, like ASSIST, for which someone sent me a video link with Randy Davis presenting the thing.
- Constructing knowledge with technology.

8 References

- Béguin, Pascal (2003). Design as a mutual learning process between users and designers, Interacting with Computers, Volume 15, Issue 5, October 2003, Pages 709-730. doi:10.1016/S0953-5438(03)00060-2
- Béguin, P. and Rabardel, P. (2000). Designing for instrument-mediated activity. Scandinavian Journal of Information Systems, 12, 173-191.
- Boethel, Martha and K. Victoria Dimock (1999). Constructing Knowledge with Technology: A Review of the Literature, SEDL, html/PDF/booklet
- Bransford, John D.; Brown, Ann L.; Cocking, Rodney R. (2000). Technology to Support Learning. In Bransford, John D.; Brown, Ann L.; Cocking, Rodney R. (Eds.), How People Learn: Brain, Mind, Experience, and School, pp. 206-230. ISBN 0309070368
- Bereiter, C. (2002). Education and mind in a knowledge society. Mahwah, NJ: Erlbaum.
- Clark, Richard E. (1983). "Reconsidering Research on Learning from Media," Review of Educational Research, 53 (Winter 1983): 445-459. JSTOR (access restricted)
- Clark, R.E. (1994). Media will Never Influence Learning. Educational Technology Research and Development, 42(2), 21-29. doi:10.1007/BF02299088 (access restricted)
- Derry, S.J. & Hawkes, L.W. (1993). Local cognitive modeling of problem-solving behavior: an application of fuzzy theory. In S.P. Lajoie and S.J. Derry (Eds.), Computers as cognitive tools, pp. 107-140. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Iiyoshi, T., Hannafin, M. J., & Wang, F. (2005). Cognitive tools and student-centered learning: Rethinking tools, functions, and applications. Educational Media International, 42, 281-296.
- Jonassen, D. H., & Reeves, T. C. (1996). Learning with technology: Using computers as cognitive tools. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology, 1st edition (pp. 693-719). New York: Macmillan.
- Jonassen, David H. (1994). Technology as Cognitive Tools: Learners as Designers, ITForum Paper #1. HTML
- Jonassen, D. H. (1996). Computers in the Classroom: Mindtools for Critical Thinking. Englewood Cliffs, NJ: Prentice-Hall.
- Jonassen, D.H. & Carr, C.S. (2000). Mindtools: affording multiple knowledge representations for learning. In S.P. Lajoie (Ed.), Computers as cognitive tools: No more walls, Vol. 2, pp. 165-196. Mahwah, NJ: Lawrence Erlbaum Associates.
- Jonassen, D.H. (2006).
Modeling with technology: Mindtools for conceptual change. Columbus, OH: Merrill/Prentice Hall.
- Kim, Beaumie and Thomas C. Reeves (2007). Reframing research on learning with technology: in search of the meaning of cognitive tools, Instructional Science, 35(3), 207-256. doi:10.1007/s11251-006-9005-2 (access restricted).
- Kozma, Robert B. (1991). "Learning with Media," Review of Educational Research, 61 (Summer 1991): 179-211.
- Kozma, Robert B. (1994). The Influence of Media on Learning: The Debate Continues, School Library Media Research, Volume 22, Number 4, Summer 1994. HTML
- Lajoie, S. P., & Derry, S. J. (Eds.). (1993). Computers as cognitive tools. Hillsdale, NJ: Lawrence Erlbaum.
- Lajoie, S.P. (Ed.). (2000). Computers as cognitive tools: No more walls, Vol. 2. Mahwah, NJ: Lawrence Erlbaum Associates.
- Lebeau, R.B. (1998). Cognitive tools in a clinical encounter in medicine: supporting empathy and expertise in distributed systems. Educational Psychology Review, 10(1): 3-24.
- Lehrer, R. (1993). Authors of knowledge: patterns of hypermedia design. In S.P. Lajoie and S.J. Derry (Eds.), Computers as cognitive tools, pp. 197-227. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Maddux, C. D., Johnson, D. L., and Willis, J. W. (1997). Educational Computing: Learning with Tomorrow's Technologies, Second Edition. Boston: Allyn and Bacon.
- Pea, R. (1985). Beyond amplification: using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.
- Reeves, Thomas C. (1999). A Research Agenda for Interactive Learning in the New Millennium, Ed-Media '99 keynote.
- Reeves, Thomas C. (1998). The Impact of Media and Technology in Schools, a research report prepared for The Bertelsmann Foundation.
- Robertson, B., Elliot, L., & Robinson, D. (2007). Cognitive tools. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology.
Retrieved 17:46, 26 July 2007 (MEST) from http://projects.coe.uga.edu/epltt/ - Salomon G., Perkins D.N., Globerson T. (1991). Partners in cognition: extending human intelligence with intelligent technologies. Educational researcher 20(3):2-9. - Salomon G. (1993a). No distribution without individuals' cognition. In: Salomon G. (eds). Distributed cognitions: Psychological and educational considerations. Cambridge University Press, New York, pp. 111-138 - Salomon G. (1993b). On the nature of pedagogic computer tools: the case of the writing partner. In: Derry S.J. (eds). Computers as cognitive tools. Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 179-196 - Salomon, G. (ed.). (1993c). Distributed cognitions: Psychological and educational considerations. New York: Cambridge University Press - Scardamalia, M. (2003). Knowledge Forum (Advances beyond CSILE). Journal of Distance Education, 17 (Suppl. 3, Learning Technology Innovation in Canada), 23-28. - Scardamalia, M. & Bereiter, C. (1994). The CSILE project: Trying to bring the classroom into world 3. In K. McGilly, ed., Classroom Lessons: Integrating Cognitive Theory and Classroom Practice (pp. 201-228). Cambridge, MA: MIT Press/Bradford Books. - Scardamalia, M. (2004a). CSILE/Knowledge Forum. In Education and technology: An Encyclopedia (pp. 183-192). Santa Barbara: ABC-CLIO. - Schauble, L., Raghavan, K. & Glaser, R. (1993). The discovery and reflection notation: a graphical trace for supporting self-regulation in computer-based laboratories. In S.P. Lajoie and S.J. Derry, ed., Computers as cognitive tools, pp. 319-337. Lawrence Erlbaum Associates: Hillsdale, NJ. - Shim, J. E., & Li, Y. (2006). Applications of Cognitive Tools in the Classroom. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved 17:46, 26 July 2007 (MEST), from http://projects.coe.uga.edu/epltt/.
Botswana /bɒtˈswɑːnə/, officially the Republic of Botswana (Tswana: Lefatshe la Botswana), is a landlocked country located in Southern Africa. The citizens refer to themselves as Batswana (singular: Motswana). Formerly the British protectorate of Bechuanaland, Botswana adopted its new name after becoming independent within the Commonwealth on 30 September 1966. Since then, it has maintained a strong tradition of stable representative democracy, with a consistent record of uninterrupted democratic elections.

Botswana is topographically flat, with up to 70 percent of its territory being the Kalahari Desert. It is bordered by South Africa to the south and southeast, Namibia to the west and north, and Zimbabwe to the northeast. Its border with Zambia to the north near Kazungula is poorly defined but at most is a few hundred metres long. A mid-sized country of just over 2 million people, Botswana is one of the most sparsely populated nations in the world. Around 10 percent of the population lives in the capital and largest city, Gaborone.

Formerly one of the poorest countries in the world, with a GDP per capita of about US$70 per year in the late 1960s, Botswana has since transformed itself into one of the fastest-growing economies in the world, now boasting a GDP (purchasing power parity) per capita of about $18,825 per year as of 2015, which is one of the highest in Africa. Its high gross national income (by some estimates the fourth-largest in Africa) gives the country a modest standard of living and the highest Human Development Index of continental Sub-Saharan Africa.

Botswana is a member of the African Union, the Southern African Development Community, the Commonwealth of Nations, and the United Nations. Despite its political stability and relative socioeconomic prosperity, the country is among the hardest hit by the HIV/AIDS epidemic, with around a quarter of the population estimated to be infected.
In the 19th century, hostilities broke out between Tswana inhabitants of Botswana and Ndebele tribes who were making incursions into the territory from the north-east. Tensions also escalated with the Dutch Boer settlers from the Transvaal to the east. After appeals by the Batswana leaders Khama III, Bathoen and Sebele for assistance, the British Government put Bechuanaland under its protection on 31 March 1885. The northern territory remained under direct administration as the Bechuanaland Protectorate and is modern-day Botswana, while the southern territory became part of the Cape Colony and is now part of the northwest province of South Africa. The majority of Setswana-speaking people today live in South Africa. When the Union of South Africa was formed in 1910 out of the main British colonies in the region, the Bechuanaland Protectorate, Basutoland (now Lesotho) and Swaziland (the High Commission Territories) were not included, but provision was made for their later incorporation. However, their inhabitants began to be consulted by the UK, and although successive South African governments sought to have the territories transferred, the UK kept delaying; consequently, it never occurred. The election of the Nationalist government in 1948, which instituted apartheid, and South Africa's withdrawal from the Commonwealth in 1961, ended any prospect of incorporation of the territories into South Africa. An expansion of British central authority and the evolution of tribal government resulted in the 1920 establishment of two advisory councils to represent both Africans and Europeans. Proclamations in 1934 regulated tribal rule and powers. A European-African advisory council was formed in 1951, and the 1961 constitution established a consultative legislative council. In June 1964, the UK accepted proposals for a democratic self-government in Botswana. 
The seat of government was moved in 1965 from Mafikeng in South Africa, to the newly established Gaborone, which sits near its border. The 1965 constitution led to the first general elections and to independence on 30 September 1966. Seretse Khama, a leader in the independence movement and the legitimate claimant to the Ngwato chiefship, was elected as the first President, going on to be re-elected twice. The presidency passed to the sitting Vice-President, Quett Masire, who was elected in his own right in 1984 and re-elected in 1989 and 1994. Masire retired from office in 1998, and was succeeded by Festus Mogae, who was elected in his own right in 1999 and re-elected in 2004. The presidency passed in 2008 to Ian Khama (son of the first President), who had been serving as Mogae's Vice-President since resigning his position in 1998 as Commander of the Botswana Defence Force to take up this civilian role. A long-running dispute over the northern border with Namibia's Caprivi Strip was the subject of a ruling by the International Court of Justice in December 1999, which ruled that Kasikili Island belongs to Botswana. At 581,730 km2 (224,607 sq mi) Botswana is the world's 48th-largest country. It is similar in size to Madagascar or France. It lies between latitudes 17° and 27°S, and longitudes 20° and 30°E. The country is predominantly flat, tending toward gently rolling tableland. Botswana is dominated by the Kalahari Desert, which covers up to 70% of its land surface. The Okavango Delta, one of the world's largest inland deltas, is in the northwest. The Makgadikgadi Pan, a large salt pan, lies in the north. The Limpopo River Basin, the major landform of all of southern Africa, lies partly in Botswana, with the basins of its tributaries, the Notwane, Bonwapitse, Mahalapswe, Lotsane, Motloutse and the Shashe, located in the eastern part of the country. The Notwane provides water to the capital through the Gaborone Dam. 
The Chobe River lies to the north, providing a boundary between Botswana and Namibia's Zambezi Region. The Chobe meets the Zambezi River at a place called Kazungula (meaning "a small sausage tree", the point where Sebitwane and his Makololo tribe crossed the Zambezi into Zambia).

Botswana has diverse areas of wildlife habitat. In addition to the delta and desert areas, there are grasslands and savannas, where blue wildebeest, antelopes, and other mammals and birds are found. Northern Botswana has one of the few remaining large populations of the endangered African wild dog. Chobe National Park, found in the Chobe District, has the world's largest concentration of African elephants. The park covers about 11,000 km2 (4,247 sq mi) and supports about 350 species of birds. The Chobe National Park and Moremi Game Reserve (in the Okavango Delta) are major tourist destinations. Other reserves include the Central Kalahari Game Reserve, located in the Kalahari desert in Ghanzi District; Makgadikgadi Pans National Park and Nxai Pan National Park are in Central District in the Makgadikgadi Pan. Mashatu Game Reserve is privately owned, located where the Shashe River and Limpopo River meet in eastern Botswana. The other privately owned reserve is Mokolodi Nature Reserve near Gaborone. There are also specialised sanctuaries like the Khama Rhino Sanctuary (for rhinoceros) and Makgadikgadi Sanctuary (for flamingos); both are located in Central District.

Botswana faces two major environmental problems: drought and desertification. The desertification problems predominantly stem from the severe periods of drought in the country. Three-quarters of the country's human and animal populations depend on groundwater due to drought. Groundwater use through deep borehole drilling has somewhat eased the effects of drought. Surface water is scarce in Botswana, and less than 5% of the agriculture in the country is sustainable by rainfall.
In the remaining 95% of the country, raising livestock is the primary source of rural income. Approximately 71% of the country's land is used for communal grazing, which has been a major cause of the desertification and the accelerating soil erosion of the country. Since raising livestock has proven profitable for the people of Botswana, the land continues to be exploited, and animal populations have continued to increase dramatically. From 1966 to 1991 the livestock population increased from 1.7 million to 5.5 million. Similarly, the human population increased from 574,000 in 1971 to 1.5 million in 1995, an increase of more than 160%. "Over 50% of all households in Botswana own cattle, which is currently the largest single source of rural income." "Rangeland degradation or desertification is regarded as the reduction in land productivity as a result of overstocking and overgrazing, or as a result of veld product gathering for commercial use. Degradation is exacerbated by the effects of drought and climate change."

Environmentalists report that the Okavango Delta is drying up due to the increased grazing of livestock. The Okavango Delta is one of the major semi-forested wetlands in Botswana and one of the largest inland deltas in the world; it is a crucial ecosystem for the survival of many animals. The Department of Forestry and Range Resources has already begun a project to reintroduce indigenous vegetation into communities in Kgalagadi South, Kweneng North and Boteti; reintroducing indigenous vegetation will help counter the degradation of the land. The United States government has also entered into an agreement with Botswana, giving it US$7 million to reduce Botswana's debt by US$8.3 million, on the condition that Botswana focus on more extensive conservation of the land.
The United Nations Development Programme claims that poverty is a major problem behind the overexploitation of resources, including land, in Botswana. To help change this, the UNDP joined a project started in the southern community of Struizendam in Botswana. The purpose of the project is to draw on "indigenous knowledge and traditional land management systems". The leaders of this movement are supposed to be the people in the community, to draw them in, in turn increasing their possibilities to earn an income and thus decreasing poverty. The UNDP also stated that the government has to effectively implement policies to allow people to manage their own local resources, and it is giving the government information to help with policy development.

Politics and government

The Constitution of Botswana is the rule of law, which protects the citizens of Botswana and represents their rights. The politics of Botswana take place in a framework of a representative democratic republic, whereby the President of Botswana is both head of state and head of government, and of a multi-party system. Executive power is exercised by the government. Legislative power is vested in both the government and the Parliament of Botswana. The most recent election, its eleventh, was held on 24 October 2014. Since independence was declared, the party system has been dominated by the Botswana Democratic Party. The judiciary is independent of the executive and the legislature. Botswana ranks 30th out of 167 states in the 2012 Democracy Index. According to Transparency International, Botswana is the least corrupt country in Africa and ranks close to Portugal and South Korea.

Foreign relations and military

At the time of independence, Botswana had no armed forces. It was only after the Rhodesian and South African militaries struck respectively against the Zimbabwe People's Revolutionary Army and Umkhonto we Sizwe bases that the Botswana Defence Force (BDF) was formed in 1977.
The President is commander-in-chief of the armed forces and appoints a defence council. The BDF has approximately 12,000 members. Following political changes in South Africa and the region, the BDF's missions have increasingly focused on prevention of poaching, preparing for disasters, and foreign peacekeeping. The United States has been the largest single foreign contributor to the development of the BDF, and a large segment of its officer corps has received U.S. training. It is considered an apolitical and professional institution. The Botswana government gave the United States permission to explore the possibility of establishing an Africa Command (AFRICOM) base in the country.

Botswana is divided into nine districts.

The Bank of Botswana serves as a central bank in order to develop and maintain the Botswana pula, the country's currency. Since independence, Botswana has had one of the fastest growth rates in per capita income in the world. Botswana has transformed itself from one of the poorest countries in the world to a middle-income country. By one estimate, it has the fourth-highest gross national income at purchasing power parity in Africa, giving it a standard of living around that of Mexico and Turkey. The Ministry of Trade and Industry of Botswana is responsible for promoting business development throughout the country. According to the International Monetary Fund, economic growth averaged over 9% per year from 1966 to 1999. Botswana has a high level of economic freedom compared to other African countries. The government has maintained a sound fiscal policy, despite consecutive budget deficits in 2002 and 2003, and a negligible level of foreign debt. It earned the highest sovereign credit rating in Africa and has stockpiled foreign exchange reserves (over $7 billion in 2005/2006) amounting to almost two and a half years of current imports.
An array of financial institutions populates the country's financial system, with pension funds and commercial banks being the two most important segments by asset size. Banks remain profitable, well-capitalised, and liquid, as a result of growing national resources and high interest rates. Botswana's competitive banking system is one of Africa's most advanced. Generally adhering to global standards in the transparency of financial policies and banking supervision, the financial sector provides ample access to credit for entrepreneurs. The opening of Capital Bank in 2008 brought the total number of licensed banks to eight. The government is involved in banking through state-owned financial institutions and a special financial incentives program that is aimed at increasing Botswana's status as a financial centre. Credit is allocated on market terms, although the government provides subsidised loans. Reform of non-bank financial institutions has continued in recent years, notably through the establishment of a single financial regulatory agency that provides more effective supervision. The government has abolished exchange controls, and with the resulting creation of new portfolio investment options, the Botswana Stock Exchange is growing. The constitution prohibits the nationalisation of private property and provides for an independent judiciary, and the government respects this in practice. The legal system is sufficient to conduct secure commercial dealings, although a serious and growing backlog of cases prevents timely trials. The protection of intellectual property rights has improved significantly. Botswana is ranked second only to South Africa among sub-Saharan Africa countries in the 2009 International Property Rights Index. While generally open to foreign participation in its economy, Botswana reserves some sectors for citizens. Increased foreign investment plays a significant role in the privatisation of state-owned enterprises. 
Investment regulations are transparent, and bureaucratic procedures are streamlined and open, although somewhat slow. Investment returns such as profits and dividends, debt service, capital gains, returns on intellectual property, royalties, franchise fees, and service fees can be repatriated without limit. Botswana imports refined petroleum products and electricity from South Africa. There is some domestic production of electricity from coal.

Gemstones and precious metals

In Botswana, the Department of Mines and Ministry of Minerals, Energy and Water Resources, led by Hon. Onkokame Kitso Mokaila in Gaborone, maintains data regarding mining throughout the country. Debswana, the largest diamond mining company operating in Botswana, is 50% owned by the government. The mineral industry provides about 40% of all government revenues. In 2007, significant quantities of uranium were discovered, and mining was projected to begin by 2010. Several international mining corporations have established regional headquarters in Botswana and prospected for diamonds, gold, uranium, copper, and even oil, many coming back with positive results. The government announced in early 2009 that it would try to reduce its economic dependence on diamonds, amid serious concern that Botswana's diamond reserves are predicted to run dry over the next twenty years. Botswana's Orapa mine is the largest diamond mine in the world in terms of value and quantity of carats produced annually. Estimated to produce over 11 million carats in 2013, at an average price of $145 per carat, the Orapa mine was expected to produce over $1.6 billion worth of diamonds that year.

The Tswana are the majority ethnic group in Botswana, making up 79% of the population. The largest minority ethnic groups are the BaKalanga and the San (also known as AbaThwa or Basarwa). Other tribes are Bayei, Bambukushu, Basubia, Baherero and Bakgalagadi.
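The Orapa estimates quoted above are easy to sanity-check: the quoted carat volume and average price multiply out to just under the quoted total. A back-of-the-envelope check, using only the figures already given:

```python
# Back-of-the-envelope check of the 2013 Orapa estimates quoted above.
carats = 11_000_000        # estimated annual production, in carats
price_per_carat = 145      # average price, in US dollars

value = carats * price_per_carat
print(value)  # → 1595000000, i.e. just under the $1.6 billion quoted
```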
In addition, there are small numbers of whites and Indians, both groups being roughly equally small in number. Botswana's Indian population is made up of many Indian-Africans of several generations, from Mozambique, Kenya, Tanzania, Mauritius, South Africa, and so on, as well as first generation Indian immigrants. The white population speaks English and Afrikaans and makes up roughly 3% of the population. Since 2000, because of deteriorating economic conditions in Zimbabwe, the number of Zimbabweans in Botswana has risen into the tens of thousands. Fewer than 10,000 San are still living the traditional hunter-gatherer style of life. Since the mid-1990s the central government of Botswana has been trying to move San out of their lands. The UN's top official on indigenous rights, Prof. James Anaya, condemned Botswana's actions toward the San in a report released in February 2010. The official language of Botswana is English although Setswana is widely spoken across the country. In Setswana, prefixes are more important than they are in many other languages. These prefixes include Bo, which refers to the country, Ba, which refers to the people, Mo, which is one person, and Se which is the language. For example, the main tribe of Botswana is the Tswana people, hence the name Botswana for its country. The people as a whole are Batswana, one person is a Motswana, and the language they speak is Setswana. Other languages spoken in Botswana include Kalanga (sekalanga), Sarwa (sesarwa), Ndebele, !Xóõ and in some parts Afrikaans. An estimated 70% of the country's citizens identify themselves as Christians. Anglicans, Methodists, and the United Congregational Church of Southern Africa make up the majority of Christians. There are also congregations of Lutherans, Baptists, the Dutch Reformed Church, Mennonites, Roman Catholics, Seventh-day Adventists, Mormons and Jehovah's Witnesses in the country. In Gaborone, there is a Lutheran History Centre which is open to the public. 
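Returning to the Setswana prefix system described above: the rule is regular enough to express as a toy lookup. The four derived forms below all appear in the text; this is only an illustration of the prefixing rule, not a linguistic tool.

```python
# Toy illustration of the Setswana noun prefixes described above,
# applied to the root "Tswana".
prefixes = {
    "Bo": "the country",
    "Ba": "the people",
    "Mo": "one person",
    "Se": "the language",
}

forms = {prefix + "tswana": meaning for prefix, meaning in prefixes.items()}
for word, meaning in forms.items():
    print(word, "->", meaning)
# Botswana -> the country, Batswana -> the people,
# Motswana -> one person, Setswana -> the language
```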
According to the 2001 census, the country has around 5,000 Muslims, mainly from South Asia, 3,000 Hindus and 700 Baha'is. Approximately 20% of citizens espouse no religion. Religious services are well attended in both rural and urban areas.

Besides referring to the language of the dominant people group in Botswana, Setswana is the adjective used to describe the rich cultural traditions of the Batswana, whether construed as members of the Tswana ethnic groups or as all citizens of Botswana. The Scottish writer Alexander McCall Smith has written popular novels (the No. 1 Ladies' Detective Agency series) about Botswana that entertain as well as inform the reader about the culture and customs of the country.

Botswana music is mostly vocal and, depending on the occasion, sometimes performed without drums; it also makes heavy use of string instruments. Botswana folk music has instruments such as the setinkane (a Botswana version of the miniature piano), the segankure or segaba (a Botswana counterpart of the Chinese erhu), the moropa (plural meropa, a Botswana version of the many varieties of drums), and the phala (a whistle used mostly during celebrations, which comes in a variety of forms). Botswana's cultural musical instruments are not confined to strings or drums. The hands are used as musical instruments too, by clapping them together or against phathisi (goat skin turned inside out and wrapped around the calf area; it is used only by men) to create music and rhythm. For the last few decades, the guitar has been celebrated as a versatile instrument for Tswana music, as it offers a variety of strings that the segaba does not have; it is an outside instrument that found a home within the culture. The highlight of any celebration or happy event is the dancing, which differs by region, age, gender and status in the group or, if it is a tribal activity, status in the community. The national anthem is Fatshe leno la rona.
Written and composed by Kgalemang Tumediso Motsete, it was adopted upon independence in 1966.

In the northern part of Botswana, women in the villages of Etsha and Gumare are noted for their skill at crafting baskets from Mokola Palm and local dyes. The baskets are generally woven into three types: large, lidded baskets used for storage; large, open baskets for carrying objects on the head or for winnowing threshed grain; and smaller plates for winnowing pounded grain. The artistry of these baskets is being steadily enhanced through colour use and improved designs as they are increasingly produced for commercial use. Other notable artistic communities include Thamaga Pottery and Oodi Weavers, both located in the south-eastern part of Botswana. The oldest paintings from both Botswana and South Africa depict hunting, animal and human figures, and were made by the Khoisan (!Kung San/Bushmen) over twenty thousand years ago within the Kalahari desert.

The cuisine of Botswana is unique but shares some characteristics with other cuisines of Southern Africa. Examples of Botswana food are pap, boerewors, samp, vetkoek and mopani worms. A food unique to Botswana is seswaa, heavily salted mashed-up meat.

Football is the most popular sport in Botswana, with qualification for the 2012 Africa Cup of Nations being the biggest achievement to date. Other popular sports are cricket, tennis, rugby, badminton, softball, handball, golf, and track and field. Botswana is an associate member of the International Cricket Council. Botswana became a member of the International Badminton Federation and the Africa Badminton Federation in 1991. Currently, the Botswana Golf Union offers an amateur golf league in which golfers compete in tournaments and championships. Botswana won its first Olympic medal in 2012 when Nijel Amos won silver in the 800 metres.
In 2011 Amantle Montsho became world champion in the 400 metres and won Botswana's first athletics medal on the world level. Another famous Botswana athlete is high jumper Kabelo Kgosiemang, three times African champion. The card game bridge has a strong following; it was first played in Botswana over 30 years ago and grew in popularity during the 1980s. Many British expatriate school teachers informally taught the game in Botswana's secondary schools. The Botswana Bridge Federation (BBF) was founded in 1988 and continues to organise tournaments. Bridge has remained popular and the BBF has over 800 members. In 2007, the BBF invited the English Bridge Union to host a week-long teaching program in May 2008. Botswana has made great strides in educational development since independence in 1966. At that time there were very few graduates in the country and only a very small percentage of the population attended secondary school. Botswana increased its adult literacy rate from 69% in 1991 to 83% in 2008. With the discovery of diamonds and the increase in government revenue that this brought, there was a huge increase in educational provision in the country. All students were guaranteed ten years of basic education, leading to a Junior Certificate qualification. Approximately half of the school population attends a further two years of secondary schooling leading to the award of the Botswana General Certificate of Secondary Education (BGCSE). Secondary education in Botswana is neither free nor compulsory. After leaving school, students can attend one of the seven technical colleges in the country, or take vocational training courses in teaching or nursing. The best students enter the University of Botswana, Botswana College of Agriculture, and the Botswana Accountancy College in Gaborone. Many other students end up in the numerous private tertiary education colleges around the country. 
Notable amongst these is Botho University, the country's first private university, which offers undergraduate programmes in Accounting, Business and Computing. Another international university is the Limkokwing University of Creative Technology, which offers various associate degrees in Creative Arts. Other tertiary institutions include Ba Isago, ABM University College, New Era and the Gaborone Institute of Professional Studies. Private education providers have made tremendous strides in providing quality education, such that a large number of the best students in the country are now applying to them as well. A vast majority of these students are government sponsored. A larger influx of tertiary students is expected when construction of the nation's second international university, the Botswana International University of Science and Technology, is completed in Palapye.

The quantitative gains have not always been matched by qualitative ones. Primary schools in particular still lack resources, and their teachers are less well paid than their secondary school colleagues. The Botswana Ministry of Education is working to establish libraries in primary schools in partnership with the African Library Project. The Government of Botswana hopes that by investing a large part of national income in education, the country will become less dependent on diamonds for its economic survival, and less dependent on expatriates for its skilled workers. Those objectives are pursued in part through policies in favour of vocational education, gathered within the NPVET (National Policy on Vocational Education and Training), which aims to "integrate the different types of vocational education and training into one comprehensive system". Botswana invests 21% of its government spending in education.
In January 2006, Botswana announced the reintroduction of school fees after two decades of free state education, though the government still provides full scholarships, with living expenses, to any Botswana citizen studying at the University of Botswana; students who wish to pursue an education in a field not offered locally, such as medicine, are given a full scholarship to study abroad.

The Ministry of Health in Botswana is responsible for overseeing the quality and distribution of healthcare throughout the country. Life expectancy at birth was 55 in 2009 according to the World Bank, having previously fallen from a peak of 64.1 in 1990 to a low of 49 in 2002. The Cancer Association of Botswana is a voluntary non-governmental organisation and a member of the Union for International Cancer Control. The association supplements existing services through provision of cancer prevention and health awareness programmes, facilitating access to health services for cancer patients and offering support and counselling to those affected. As elsewhere in Sub-Saharan Africa, the economic impact of AIDS is considerable. Economic development spending was cut by 10% in 2002–03 as a result of recurring budget deficits and rising expenditure on healthcare services. Botswana has been hit very hard by the AIDS pandemic; in 2006 it was estimated that life expectancy at birth had dropped from 65 to 35 years. However, after Botswana's 2011 census, current life expectancy is estimated at 54.06 years. This revision shows the difficulty of accurately estimating the prevalence and impact of HIV/AIDS in the absence of hard numbers. The prevalence of HIV/AIDS in Botswana was estimated at 24% for adults in 2006, giving Botswana the second-highest infection rate in the world after nearby Swaziland.
In 2003, the government began a comprehensive program involving free or cheap generic anti-retroviral drugs as well as an information campaign designed to stop the spread of the virus. With a nationwide Prevention of Mother-to-Child Transmission program, Botswana has reduced HIV transmission from infected mothers to their children from about 40% to just 4%. Under the leadership of Festus Mogae, the Government of Botswana solicited outside help in fighting HIV/AIDS and received early support from the Bill and Melinda Gates Foundation and the Merck Foundation, together forming the African Comprehensive HIV/AIDS Partnership (ACHAP). Other early partners include the Botswana-Harvard AIDS Institute of the Harvard School of Public Health and the Botswana-UPenn Partnership of the University of Pennsylvania. According to the 2011 UNAIDS Report, universal access to treatment (defined as 80% coverage or greater) has been achieved in Botswana. Potential reasons for Botswana's high HIV prevalence include concurrent sexual partnerships, transactional sex, cross-generational sex, and a significant number of people who travel outside of their local communities in pursuit of work. The polyamorous nature of many sexual relationships further impacts the health situation, to the extent that it has given rise to a "love vocabulary" that is unique to the region.

The Botswana Tourism Organisation is the country's official tourism group. Tourists primarily visit Gaborone, as the city offers numerous activities for visitors. Hotels include the Lonrho Lansmore Masa Square, a 5-star hotel, and the Gaborone Sun, a luxury hotel that also features a casino. The Lion Park Resort is Botswana's first permanent amusement park and hosts events such as birthday parties for families. Other destinations in Botswana include the Gaborone Yacht Club and the Kalahari Fishing Club.
In addition, natural attractions for tourists in Botswana include the Gaborone Dam and the Mokolodi Nature Reserve. There are golf courses maintained by the Botswana Golf Union (BGU). The Phakalane Golf Estate is a multi-million-dollar clubhouse that offers both hotel accommodation and access to golf courses. Museums in Botswana include:
- Botswana National Museum in Gaborone
- Kgosi Bathoen II (Segopotso) Museum in Kanye
- Kgosi Sechele I Museum in Molepolole
- Khama III Memorial Museum in Serowe
- Nhabe Museum in Maun
- Phuthadikobo Museum in Mochudi
- Supa Ngwano Museum Centre in Francistown
While solar energy is perfect for many people, for others it may not be the best choice. Since so many factors are involved in determining whether solar power is a viable option for you, this guide will help you understand more about the technology and decide.

First, you should consider all of the benefits solar power provides. Generally, solar power can provide you with an affordable solution to powering your home or office. This can be extremely useful if you are in an area with high-cost electricity, or in a remote area. While the initial costs can be high, over the lifespan of the panels the system will pay for itself and more. Solar power is also generally low maintenance.

Tip: If you invest in an alternative energy system, make sure you have access to quality customer service. If something goes wrong with your system, a qualified technician should come and fix it quickly.

Your location should be the first thing you consider when choosing solar power. If you live in an area that gets a lot of sunlight year round, solar power could prove extremely beneficial; however, if you live in an area that gets only marginal sunlight, or at extreme northern or southern latitudes, you could go weeks or longer without enough sunlight. Designing a solar system varies from location to location. You need to figure out how many panels are required to generate enough power to keep your batteries charged, and, if you are connecting to the grid to export energy, how much you need to send in order for the electric company to pay you.

Tip: Use solar-powered outdoor lights. Outdoor lighting elements are a great place to use solar power, since there is easy access to sunlight.

In most cases, a low-end solar panel system can offset some of the costs of electricity while leaving you still reliant on the grid. On the upscale version, you can store enough power for weeks and send electricity back through the grid.
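The sizing exercise described above can be sketched in a few lines. Every number here is a hypothetical placeholder; a real design would use your own metered loads, your location's peak-sun hours, and the specifications of the panels you are quoted.

```python
import math

# Step 1: total the household loads (hypothetical example readings).
appliances = [
    ("refrigerator", 150, 24),   # (name, watts, hours of use per day)
    ("television",   100,  4),
    ("computer",     200,  6),
    ("lighting",     300,  5),
]
daily_use_kwh = sum(watts * hours for _, watts, hours in appliances) / 1000

# Step 2: estimate how many panels cover that load.
panel_watts = 300     # rated output of one panel (assumed)
sun_hours = 4.5       # average peak-sun hours for the location (assumed)
losses = 0.8          # derate for wiring, inverter, dust, etc. (assumed)

per_panel_kwh = panel_watts / 1000 * sun_hours * losses
panels_needed = math.ceil(daily_use_kwh / per_panel_kwh)
print(daily_use_kwh, panels_needed)  # → 6.7 7
```

Rounding up with `math.ceil`, plus picking a system "with a little more output than what you need", builds in the expansion headroom the article recommends.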
Prices can vary accordingly; however, there are hundreds of programs available to help you with the starting costs, such as rebate programs, tax credits, and even government grants. With some simple research, you can find a program that matches your needs and can dramatically reduce your costs from thousands of dollars to a few hundred. Again, keep in mind that over time the system will pay for itself, and by using the credits and reduced bills, you can actually make a net profit from nearly all solar systems.

Just keep in mind that all solar systems come with disadvantages too. In areas with little light, it is nearly impossible to generate enough electricity without installing large arrays of panels. This can quickly add up in cost and negate the positive effects. Depending on whether you rent or own, you may not be able to get the proper permits to have the panels installed. While newer panels are far more efficient than traditional ones, they can still be hard to match to your property's style. Solar power is a wonderful technology, but it can be a bad choice for some areas. As long as you consider all of the pros and cons of solar power, you can make an informed decision about whether it is right for you. Use this guide to improve your chances of choosing well, and in turn save money on your utility bills.

When you think of harnessing the power of the sun in your home, you likely think of ripping your roof out and replacing it with solar panels. In some regions, this is not feasible, due to lack of sunlight, regulations or cost. Still, there are ways you can take advantage of solar power without going all out. Keep reading for more on how you can accomplish this.

Any sources of illumination you have on the outside of your home or in your landscape should not draw from your electricity. There are simply too many options now for solar-powered outdoor lights not to take advantage of them.
From front porch lights to garage lamps to ground-level lights along your sidewalk, you can find a variety of options that are all solar powered. They collect the energy of the sun during the day, even on cloudy days, and then store it. Most shine all night long, except perhaps the last few hours before dawn during the longest winter nights.

Tip: One simple way to update your home with green technology is to install solar panels. These can help decrease the amount of energy you use, and save you some money.

In addition to lighting up your yard, you can care for your landscape with solar-powered tools. You can mow your lawn with a solar-powered mower, and a number of powered yard tools exist that can draw their juice from the sun. Solar power can also be used to recharge batteries and portable electronics used inside the home. Solar-powered fans exist that you can use for cooling interior spaces. These can work in the home, in attic environments and in rooms with windows. Devices also exist that you can add to your car for ventilation. These come in very handy in preventing the greenhouse effect from turning your car into an oven when parked in daylight.

Tip: Reuse rain water. This can be done using a rain barrel attached to a rain chain or drain spout.

Cars themselves are an area of recent research and development that lets the sun's energy help you save at the gas pump. Fully solar cars are not yet practical enough for mass production. However, some hybrids feature solar power for internal electronics and to get more range out of a fully charged engine. Many cars can also be altered and modified to a degree to let solar paneling assist with electronics and air conditioning. Even if you do not or cannot replace your whole roof with solar panels for power production, consider a solar-powered water heater.
This hot water can be used primarily for showering, but possibly also for machine washing of both dishes and clothes, if enough hot water is available. Using the sun for heating water spares your power bill one of its biggest sources of demand, so the cost of solar water heater installation can pay itself back over a long enough timeframe in the form of utility savings.

Sinking many dollars into pricey solar panels all over your roof is not necessary if you want to harness the power of the sun in your home. The sun's power is free to you and available every day, even cloudy ones, depending on the technology. Cut back on your home's environmental impact and maybe even save some money with the ideas presented within this article.

Solar energy provides a clean, renewable source of energy. Not only can it save you money in the end, but it can also power your electronics and appliances without reliance on outside power sources. This guide will help you understand why solar power is so important, and ways that you can begin to utilize it.

One of the most crucial benefits of solar power is that it helps the environment as the panels produce energy. With reduced amounts of gas or other resources burned, it helps to maintain the ecosystems of the world. This means cleaner air for everybody. There is also less drilling required, which can prevent disasters such as the destructive gulf oil spill.

Tip: To help you live a lifestyle less reliant on non-renewable energy, try turning off and unplugging your computer and other electronic devices; you can reduce your energy consumption and energy dependency. By doing this, you will make switching to a more expensive but more sustainable green energy source a lot easier.

Aside from being better for the environment, using solar power is great for your budget. Most solar panel systems can be incredibly cheap when you use government tax incentives.
In addition, over the lifetime of the system, it will not only pay for itself, but you can also sell the excess energy produced, which means the power company will pay you. Adding a new system to your home increases the value of your home. Newer panels can even be colored or patterned so they match your home as well. Every year the technology becomes more efficient and affordable. Systems are durable and are definitely a great investment for the lifetime of your home.

Tip: Take an active part in your local community if you find that green energies are a common concern. You will learn more about alternative energy solutions and get a chance to convince local authorities to adopt green energies or offer tax incentives and other advantages.

To get started with a new solar system, you simply need to figure out how much electricity your home consumes, and then decide which system is right for you. There are many different types of tools available to help you with this task. One of the most popular methods uses a special device that attaches to your wall socket, to which you then attach your appliances and electronics. A wattage display shows how much electricity they use; you can move the device from room to room and simply total the usage. Now that you have a starting wattage, seek out a system that has a little more output than what you need. This is important if you ever expand in the future. Get estimates from online sources and local companies to see who can give you the best price and quality of service.

Tip: The green energy solution you choose should depend on the kind of area you live in. If you live in a rural area or near the ocean, wind power will probably work best.

After you have decided which company fits your needs best, search online to find which rebates or other incentives you can apply to the total bill. This will definitely reduce the total cost and provide you with tremendous savings.
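The effect of incentives on the bottom line, as described above, is simple arithmetic: subtract the rebates and credits from the sticker price, then divide by what the system saves you each year. The prices and savings below are made-up illustrative figures, not real quotes or rates.

```python
# Simple payback estimate after incentives (hypothetical figures).
gross_cost = 15_000           # installed system price, USD
incentives = 6_000            # rebates, tax credits, grants
annual_savings = 1_200        # avoided electricity costs per year

net_cost = gross_cost - incentives
payback_years = net_cost / annual_savings
print(payback_years)  # → 7.5
```

Any net-metering income would shorten the payback period further, which is how the "net profit" claim earlier in the article comes about.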
You can even find government grants in some cases, which will pay for the total cost, or nearly all of it. Lastly, before you purchase your system, you need to decide whether you wish to be completely reliant on your solar system, or whether you would like to tie into the grid. Most people will choose to stay tied into the grid in case there is ever a system problem; however, costs and other factors can also play a significant role. Depending on your local and state laws, you may be able to sell your excess energy to the power company and make money each month. No matter which system you decide to buy, you can rest easy knowing that you will definitely be helping the environment and have a solid backup plan in place in case there is an emergency. Solar power is dependable, and each year it gets more and more affordable. Upgrade today and see firsthand how much you can save.

Of all of the green energy choices, solar energy can bring in the most money. The sun comes up and shines for free, and you can use it for nothing if you have a solar array on your roof. Check this article out to find some of the ways you can save money with solar energy.

Solar power can help you in a number of ways. You can buy a set of outdoor lights that use solar power and put them in the ground in your front yard; then you'll have a lighting system to greet your guests without adding to your power bill. Any appliance that you can attach to your electrical system can benefit from solar power.

Tip: When you can, take showers rather than baths. Running a bath uses up to 40% more water than a shower does, which means more energy is being used and your water bill will skyrocket.

Adding solar power to a water heater is a common entry point for many people into solar energy for the home. Instead of installing a full array of panels, you just install a handful and run the circuitry right to your hot water heater.
Then, the power goes right from the sun for use in your hot water heater. If you aren’t using your hot water heater, the energy goes into your grid for use throughout the rest of the house. The best place for solar cells on most homes is the roof, because that has the most access to the sun without blockage from trees, telephone poles and other structures. You want to make sure, though, that the pitch of your roof will point the panels toward the incoming sunlight. A contractor can help you make that determination as to the best placement. Tip: A great tip for green energy use is to ensure that your home is properly insulated and has a high R value with the insulation. The best insulation has a higher R value. If you notify the electric company that you are installing solar panels, then you can arrange to sell them your excess power. If you do not use all of the energy that your solar panels generate in an entire month, then you should get a check back from your utility. Most utilities are glad to have the contributions, because electricity use is increasing in the United States each year, and the more customers there are with solar panels, the more relief the grid will get, especially in the hottest part of the summer. You’ll want to check with your HOA (homeowner’s association) to find out if you can put solar panels in. Most associations do permit this, but there are some who will not allow it. The opposition to solar panels is not as passionate as the opposition to wind turbines is, because solar panels do not stand dozens of feet in the air and block views. However, you definitely want to find out before you place an order with a contractor. Hopefully, this article has given you some ideas about things to consider when pondering a solar energy array for your house. 
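The net-metering arrangement described in this article — excess generation credited against your usage, with the utility sending you a check in a surplus month — is simple arithmetic. A sketch with made-up numbers; the usage, generation, and rate figures are illustrative assumptions only:

```python
def monthly_bill(usage_kwh, solar_kwh, rate_per_kwh):
    """Net bill after subtracting solar generation from usage.
    A negative result means the utility owes you a credit that month."""
    return round((usage_kwh - solar_kwh) * rate_per_kwh, 2)

# Illustrative: 900 kWh used, 350 kWh generated, at $0.15/kWh.
print(monthly_bill(900, 350, 0.15))  # 82.5  -- panels offset part of the bill
# A sunny month where generation exceeds usage produces a credit.
print(monthly_bill(400, 550, 0.15))  # -22.5
```

Actual buy-back rates often differ from retail rates, so check your utility's net-metering terms before counting on the credit.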
Using these points as a springboard, you should be able to figure out what questions to ask your local government representative in charge of energy, and the contractors whom you interview when finding one to install your array. Let’s face it: there are many reasons why solar panels would benefit your home. It could increase the property value, and it will definitely cut down on your carbon footprint and reduce your electric bill each month. The purpose of this article is to give you some basic tips about this form of power. Basically, solar panels take the rays from the sun and convert them into electricity. The photovoltaic cells inside each panel have an internal mechanism that takes the light and converts it into usable power that can then be sent into your electrical grid. It is the intensity of the light, not the heat, that matters; in fact, panels operate slightly more efficiently at cooler temperatures, so a bright cold day can produce more power than a hazy hot one. Tip: Buy Energy Star products. In the typical home, appliance’s make up about 20 percent of the electricity use. There are some basic solar ovens that will heat water for you. All you have to do is pour water into a pot and place it inside the oven. During the summer, you will generally place the oven at a 30 degree angle to the sun; in the winter, when the sun is lower in the sky, you will place it at 60 degrees. The water inside will come to a boil over time, and it won’t cost you a dime. Solar panels have been a power option since the 1970s. Recent advances in technology, coupled with recent discoveries about the dwindling availability of fossil fuels, have made them once again a more popular option. You can even get rebates or tax credits for solar panel installation, depending on the state or county in which you live. If you have a really sunny month, you might even contribute so much energy to your local grid that the utility sends you a check instead of you paying a bill.
Tip: Harness wind power in your home. Wind power is probably one of the cleanest sources that we have available now, and using it just might cut your electric bill down by up to 90%. Your solar panel array can connect to the grid just like any other device that creates power. Once your panels are hooked up to your electrical system, they will start adding power back to the grid on the first sunny morning. The benefit is that every kilowatt-hour that it adds back means you are taking less from your utility, resulting in a smaller bill. Whether your home will appreciate or not on the basis of a solar panel installation will really depend on where you live. Obviously, in more upscale areas, panels are likely to give your value a boost, but you will want to talk to a local realtor to find out if panels are a likely value enhancer in your community. Tip: Try heating your home with a wood pellet stove. The pellets burned in a pellet stove are made of highly compact sawdust. There is little research yet on what will happen when we start having to throw these solar panels away because they are old and worn out. However, in the short term, solar panels in use mean that there are fewer fossil fuels being pumped or mined out of the earth and turned into electricity. There are no environmental by-products to solar energy, so it’s a win-win so far. Going solar can save you money over time, and future generations may thank you for helping the environment today. One of the greatest difficulties with adopting a green energy policy for your family is that the issue is frequently more complicated than simply choosing to support solar energy. Use the tips below to use your commitment to the environment and solar energy as a means of encouraging your family’s commitment to education rather than creating a situation where you appear to say one thing and do another. Teaching commitment to an idea without adopting it is possible and is frequently the case with solar energy.
Explain your commitment to solar energy even if it’s not used in your home by discussing other factors that influenced your decision, such as cost. You can further illustrate your point by including some less expensive solar energy element into your home such as deck or garden lighting. Tip: Appliances can be a big, unnecessary energy drain. Find out how much energy each of your appliances is using. Clip out news articles about solar energy developments, including those that discuss non-scientific aspects of solar energy such as funding or imports, to keep solar energy as a recurring topic of discussion in your household. It’s important for children to understand that there is usually not a simple answer to important issues. Use solar energy as an opportunity to discuss unintended consequences such as pollution that’s created from the manufacture of raw materials or disposal of products using solar power. Ask your child to think of examples of other good things that sometimes have negative consequences such as creating a compost pile in your yard that also creates a bad smell or attracts pests. Tip: A great old fashioned way to heat your home and to save on energy is to use a wood burner. There are newer, more modern versions of wood burners called pellet stoves. Ask questions to introduce new aspects of learning about solar energy such as, “Which do you support more, solar thermal electricity or photovoltaic solar energy?” Or, “Does the water usage by solar thermal plants make you worry about water conservation?” Do not demonize specific industries such as those that produce fossil fuels or those that use dangerous raw materials. Consider the use of environmentally unsafe materials in everything from medical devices and equipment to the manufacture of pharmaceuticals before you create a bias against all dangerous substances for your family. Tip: A significant green energy initiative is having a professional do a home energy audit.
These professionals will assess your home and find areas where you can save money and conserve energy. Do not criticize all governmental involvement in your life regarding taxes or laws if you’re supporting solar energy in your household: solar energy is largely dependent on government support, in both policy and funding, to advance in our society. Explain how the government helps many businesses, such as small businesses, by letting them pay less money in certain ways. Use a discussion of the solar industry as an opportunity to discuss how it can support the American economy by creating jobs in the US. Challenge your child to find “Made in the USA” labeling on various products in your home. If your child asks you about all the “Made in China” labels on your home products, discuss how cost is a consideration and how it saves your family money to purchase such products, which is also important for your family. When you’re committed to solar energy you’ll want your family to understand your commitment in a realistic way. Use the suggestions above to clarify your commitment to solar energy without sending mixed messages to your family. It is a wise idea to get solar energy in one’s home. However, most people don’t know much about getting solar panels installed except that it is expensive. While that may be true, it is worth the cost. Here are some things you have to know about getting solar panels installed, so that you can enjoy your solar energy and have everything go well. First, you have to make sure you have a good salesman. A good salesman is one who listens to you and answers all your questions. If you get stuck with a salesman who is just interested in making the sale, ask for someone else. You are going to spend too much money for some joker to rush you through the process! If you cannot find a knowledgeable salesman to help you at that company, there are more to choose from.
Tip: In certain areas, you might be able to sell your power to the main grid. Call your power supplier to find out more about their policies. Compare prices. This is not a time for you to go for the cheapest offer, but you do want to find a fair price. You are going to find a range of prices, and the most important thing you can do is to find out why that range exists. Does one price include maintenance? Does one price include high wattage panels? You need to find that out so you can choose a price based on your needs. This is the one time where you want to spend money for quality. Then, you need to find out what your energy needs are! Use an electricity bill from your most expensive month, and count that as an example of how much energy you use. That way, you can calculate a bit extra. Next, check out wattages of solar panels. You want to make sure that you get panels that will satisfy your needs. You can always add more panels later if you have to, but you should go with what you need today. Tip: Consider getting a “freezer on bottom” refrigerator as a way to help save energy. Everyone knows that hot air rises, so it makes perfect sense to keep your coolest appliance as close to the ground as possible. That said, you also have to be aware of the inverter box. That is what converts the solar energy into AC current so you can use it. The inverter is often used as a tactic by salespeople, who offer you a high kilowatt inverter without telling you that you also need high wattage solar panels for it to make a difference. Choose a complete system, and get someone to honestly help you through that process. Find out if you need a license to have solar panels on your property. You can get the paperwork fairly quickly, but ensure that you are able to have it done without paying large fines. It would be a huge problem if you had the work done and then had a great deal of trouble with your township.
So make sure you check that out very soon in the process so that you don’t have to spend thousands more than you want to spend. After reading this article, you should feel more secure about your decision to put solar panels on your property. You should also have more information at hand, to ensure the process will go smoothly. Take the time to read this article again and perhaps create a checklist for what you need to do to make sure that your solar energy experience is a good one.
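The sizing steps in this article — take your most expensive month's bill, convert it to a daily figure, and match panel wattage to it — can be roughed out numerically. A sketch only; the panel rating and peak-sun-hours figures below are illustrative assumptions that vary widely by product and location:

```python
import math

def panels_needed(monthly_kwh, panel_watts, sun_hours_per_day, days=30):
    """Estimate how many panels cover usage from the most expensive month."""
    daily_kwh = monthly_kwh / days
    # Energy one panel produces per day at its rated output.
    panel_daily_kwh = (panel_watts / 1000) * sun_hours_per_day
    return math.ceil(daily_kwh / panel_daily_kwh)

# Illustrative: 1200 kWh in the most expensive month, 300 W panels,
# 5 peak sun hours per day.
print(panels_needed(1200, 300, 5))  # 27
```

A contractor's quote will account for panel degradation, inverter losses, and shading, so treat a back-of-envelope number like this as a starting point for the conversation, not a final design.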
Avian Influenza: a global threat needing a global solution © Koh et al; licensee BioMed Central Ltd. 2008 Received: 22 July 2008 Accepted: 13 November 2008 Published: 13 November 2008 There have been three influenza pandemics since the 1900s, of which the 1918–1919 flu pandemic had the highest mortality rates. The influenza virus infects both humans and birds, and mutates using two mechanisms: antigenic drift and antigenic shift. Currently, the H5N1 avian flu virus is limited to outbreaks among poultry and persons in direct contact with infected poultry, but the mortality rate among infected humans is high. Avian influenza (AI) is endemic in Asia as a result of unregulated poultry rearing in rural areas. Such birds often live in close proximity to humans, and this increases the chance of genetic re-assortment between avian and human influenza viruses, which may produce a mutant strain that is easily transmitted between humans. Once this happens, a global pandemic is likely. Unlike SARS, a person with influenza infection is contagious before the onset of case-defining symptoms, which limits the effectiveness of case isolation as a control strategy. Researchers have shown that a carefully orchestrated combination of public health measures could potentially limit the spread of an AI pandemic if implemented soon after the first cases appear. To successfully contain and control an AI pandemic, both national and global strategies are needed. National strategies include source surveillance and control, adequate stockpiles of anti-viral agents, timely production of flu vaccines and healthcare system readiness. Global strategies such as early integrated response, curbing the disease outbreak at source, utilization of global resources, continuing research and open communication are also critical.
Since the 1700s, there have been ten to thirteen influenza outbreaks or probable pandemics, of which three have occurred since the beginning of the 20th century: the 1918–1919 Spanish flu pandemic, the 1957–1958 Asian flu pandemic and the 1968–1969 Hong Kong flu pandemic. Of the three pandemics, the 1918–1919 pandemic was the most severe. The 1918–1919 strain of influenza was unusual because of the high rate of mortality among victims between the ages of 15 and 35 years. Deaths from influenza are usually due to secondary bacterial infection, but many deaths during the 1918–1919 pandemic were caused directly by the virus itself. It appears that the immune system in young persons paradoxically went into over-drive while battling the influenza virus and progressed into an immunologic storm that killed the victims. This was in contrast to the pandemics of 1957–1958 and 1968–1969, which were much milder. There were several reasons for this: the influenza strains were less virulent, the patterns of mortality were more typical of a usual seasonal influenza outbreak (i.e. concentrated among the very young and very old) and doctors were able to use antibiotics to treat secondary bacterial infections. The attack rate is the percentage of the population that becomes ill from an infection, while the case fatality rate refers to the percentage of infected people who die from the infection. Experts generally agree that the attack rates of the past three influenza outbreaks in the last century did not differ markedly and are estimated to be 25% to 30%. Using similar evidence, experts estimate the case fatality rate during the 1918 outbreak to be about 2.5%, whereas the case fatality rates during the 1957–1958 and 1968–1969 episodes were below 0.2%.
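The two rates defined above translate directly into simple formulas. A minimal sketch using figures in the range the article quotes; the population size and death count below are made-up numbers chosen only to echo the 1918 estimates:

```python
def attack_rate(cases, population):
    """Percentage of the population that becomes ill from the infection."""
    return 100 * cases / population

def case_fatality_rate(deaths, cases):
    """Percentage of infected people who die from the infection."""
    return 100 * deaths / cases

# Illustrative: a population of 1,000,000 with 280,000 cases (within the
# quoted 25-30% attack-rate range) and 7,000 deaths (the ~2.5% 1918 CFR).
print(attack_rate(280_000, 1_000_000))     # 28.0
print(case_fatality_rate(7_000, 280_000))  # 2.5
```

Note that the two denominators differ: the attack rate divides by the whole population, while the case fatality rate divides only by those infected.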
The genes of the influenza virus can mutate in two main ways: (1) antigenic drift, which involves small errors being incorporated into a virus gene sequence when the virus makes copies of itself, and (2) antigenic shift, involving an exchange of genes between two types of viruses (e.g. between avian and human forms of influenza virus) when both viruses are present in the same animal or human. As a result of these mutations, the influenza virus changes its protein coat (antigens), allowing it to find new susceptible non-immune populations to infect. Both mechanisms of genetic mutation have the possibility of producing a new virus that can be easily transmitted between humans and initiate a pandemic. Scientists think that the 1918 influenza pandemic virus was a result of antigenic drift, while the 1957–1958 and 1968–1969 influenza pandemic viruses were a result of antigenic shift. The influenza virus has been in existence for centuries and has been constantly infecting both humans and animals (including birds). The avian influenza (AI) virus (also called avian flu or bird flu virus) is a subtype that causes contagious respiratory disease mainly in birds. Wild waterfowl, especially ducks, are natural reservoirs and can carry the virus without manifesting symptoms of the disease, spreading the virus over great distances. Domesticated poultry are also susceptible to avian flu, and infection can cause varying symptoms ranging from reduced egg production to rapid death. The severe form of the disease is called "highly pathogenic avian influenza" (sometimes abbreviated as HPAI) and is associated with near 100% mortality rates among domesticated birds. AI has become endemic in several parts of Asia and it is believed that this is a result of unregulated poultry rearing practices in rural areas of developing countries.
This is of concern because such birds often live in close proximity to humans and this increases the chance of genetic re-assortment between avian and human influenza viruses which may produce a mutant strain that is easily transmitted between humans [7, 8]. In the past, avian influenza viruses have rarely caused severe disease in humans. However, in Hong Kong during 1997, a highly pathogenic strain of avian influenza of the H5N1 subtype crossed from birds to humans who were in direct contact with diseased birds during an avian influenza outbreak among poultry. The cross-infection was confirmed by molecular studies which showed that the genetic makeup of the virus in humans was identical to that found in poultry. The H5N1 virus caused severe illness and high mortality among humans: among 18 persons who were infected, 6 died. The outbreak ended after authorities slaughtered Hong Kong's entire stock of 1.5 million poultry. Since then, AI among birds has been reported all over the world, and one of the factors responsible for the spread is the trans-oceanic and trans-continental migration of wild birds. Most deaths from AI have occurred in Indonesia to date, and nearly all of the human cases resulted from close contact with infected birds. However, there has been a reported cluster of plausible human-to-human transmission of the H5N1 virus within an extended family in the village of Kubu Sembelang in north Sumatra, Indonesia, in May 2006. Strains of influenza virus are classified into subtypes by their protein coat antigens, namely haemagglutinin (HA) and neuraminidase (NA). Of the 15 HA subtypes known, H1, H2 and H3 are known to have circulated among humans in the past century and hence, most people have gained immunity to interrupt the transmission of the virus. However, the H5N1 strain is unfamiliar to most humans and our low herd immunity to it poses a pandemic threat.
There are thought to be three pre-requisites for a viral pandemic to occur: (1) the infectious strain is a new virus subtype to which the population has little or no herd immunity; (2) the virus is able to replicate and cause serious illness and (3) the virus has the ability to be transmitted efficiently from human to human. The H5N1 virus satisfies the first two pre-requisites of a pandemic but has not yet developed the ability to be transmitted easily from human to human. Lessons from the SARS Outbreak The recent Severe Acute Respiratory Syndrome (SARS) virus outbreak in Asia saw another type of virus called the coronavirus spread widely in a short time. However, the SARS outbreak is considered to be "minor" when compared to the 1918–1919 influenza outbreak because fewer than 800 persons died from SARS worldwide whereas 40 to 50 million people died worldwide in the 1918 influenza pandemic. However, the rapid spread of SARS to Asia, Australia, Europe and North America during the first two quarters of 2003 illustrates the speed at which an AI pandemic could spread across the world. The major reason why SARS was quickly contained was that people with SARS were not contagious before the onset of case-defining symptoms, which allowed effective control measures based on case-identification. However, a person with influenza infection is contagious before the onset of case-defining symptoms, which limits the effectiveness of isolation of cases as a control strategy for this illness. The Feasibility of Early Containment Measures The endemic nature of the avian flu among domestic birds and their close co-existence with humans in rural areas of Asia makes this part of the world a likely epicenter of an AI pandemic. Two international teams of researchers used computer modeling to simulate what may happen if avian flu were to start being transmitted efficiently between people in Southeast Asia [17, 18].
Both groups showed that a carefully selected and orchestrated combination of public health measures could potentially stop the spread of an avian flu pandemic if implemented soon after the first cases appear. Interventional strategies simulated included an international stockpile of 3 million courses of flu antiviral drugs, treating infected individuals and everyone in their social networks, closure of schools and workplaces, vaccinating (even with a low-efficacy vaccine) half the population before the start of a pandemic and quarantine measures. Targeted anti-viral treatment was a crucial component of all combined strategies, and public health measures needed to be intensified as the virus became more contagious. While the researchers said that implementing such a combination of approaches was challenging because it required a coordinated international response, the models did show that containing an avian flu pandemic at its source was theoretically feasible. Strategies to Contain and Cope with an Avian Flu Pandemic To successfully contain and control an AI pandemic, both national and global strategies are needed. National strategies need multi-pronged approaches and involve source surveillance and control, adequate stockpiles of anti-viral agents, timely production of flu vaccines and healthcare system readiness. Source Surveillance and Control When the H5N1 flu virus becomes easily transmissible from human to human, the earlier this fact is known, the more time there will be to gather and deploy available public health resources. Currently, the World Health Organisation (WHO), United Nations and other international agencies are trying to contain the H5N1 epidemic among poultry flocks in Asia and have set up monitoring systems to detect new outbreaks (especially human-to-human cases) early. Currently, there are ongoing efforts to mass produce and stockpile vaccines against the H5N1 strain.
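The kind of containment modelling described above can be illustrated, in grossly simplified form, with a toy SIR (susceptible-infectious-recovered) model. This is not the published models' methodology, and every parameter below is an arbitrary assumption, but it shows the underlying idea: interventions that lower the transmission rate early can shrink the final size of an outbreak dramatically.

```python
def total_infected(beta, gamma, population=1_000_000, infected=10, days=365):
    """Discrete-day SIR simulation; returns cumulative infections.

    beta: transmissions per infectious person per day (what public
          health measures aim to reduce)
    gamma: recovery rate per day (1 / infectious period)
    """
    s, i = population - infected, infected
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
    return round(population - s)

# Arbitrary illustrative parameters: a 4-day infectious period (gamma = 0.25);
# combined interventions cut beta from 0.5 to 0.2, pushing R0 below 1.
unchecked = total_infected(beta=0.5, gamma=0.25)
contained = total_infected(beta=0.2, gamma=0.25)
print(unchecked > contained)  # True: the contained outbreak fizzles out
```

The real studies layered targeted antivirals, school closures, vaccination and quarantine onto far richer spatial and social-network models, but the lever they pull is the same one `beta` represents here.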
Recent models built on data from the 1918 flu pandemic predict that 50–80 million people could die, and the overwhelming majority of deaths are likely to occur in developing countries. Unfortunately, total global capacity for flu vaccine manufacture in the first 12 months is estimated at only 500 million doses. Moreover, flu vaccine production faces many constraints: the vaccine is cultured in eggs, and this is a lengthy process which cannot be sped up. Fortunately, alternative sources of virus culture cells are being investigated. With avian flu affecting poultry and eggs, the egg supply required for vaccine production may itself be disrupted. Intellectual property rights and liability for adverse effects from vaccines are other issues that impede manufacturers from increasing vaccine production. It should also be noted that if the influenza pandemic strain turns out not to be the H5N1 variety, then the stockpiled vaccines would be useless and wasted. Deciding who to vaccinate is another challenge. Currently, influenza vaccination is recommended for the elderly and those with medical conditions which put them at higher risk for hospitalization and death if they become infected with influenza. However, some critics have argued that younger and healthier individuals should be given priority because they are more mobile than older, less healthy people and are therefore more likely to spread the flu to others. Another factor in favour of giving priority to younger people is that the seasonal flu vaccine produces a weaker immune response in the elderly. Moreover, if the flu pandemic has characteristics of the 1918–1919 pandemic, then the young and healthy are at higher risk of death. Even if supplies were adequate for all age groups, mass immunization for a potential pandemic still has its risks. In 1976, four US soldiers developed swine flu in an army camp and there was concern that it could become a pandemic like the 1918 Spanish flu.
Although some health officials expressed doubts about the likelihood of an epidemic, the government initiated a mass inoculation programme for the entire US population. After hundreds of people receiving the vaccine came down with Guillain-Barré syndrome, the US government terminated the campaign and indemnified manufacturers, ultimately paying $93 million in claims. There is a light at the end of the tunnel. The WHO recently announced plans to stockpile H5 influenza vaccine and create a policy framework for vaccine allocation and recommendations for its use. Several recent developments in H5 vaccines have made this stockpile feasible: the development of H5N1 vaccines with adjuvants that reduce the required dose as much as fourfold, and the finding that adjuvant-enhanced vaccines may provide cross-protection against strains that have undergone up to seven years of genetic drift. Furthermore, the manufacturing capacity of 500 million doses is calculated on a requirement for three strains of flu virus for standard vaccinations; in crisis mode, three times as much monovalent pandemic flu vaccine could be produced. Anti-viral drugs are thought to be the backbone of a management plan for an avian flu pandemic. Only two anti-viral drugs have shown promise in treating avian influenza: oseltamivir (Tamiflu®) and zanamivir (Relenza®). A treatment course of Tamiflu® consists of 10 pills taken over five days, while Relenza® is administered by oral inhalation. The US Food and Drug Administration has approved both anti-viral drugs for treating influenza, but only Tamiflu® has been approved to prevent influenza infection. Because antivirals can be stored without refrigeration and for longer periods than vaccines, developing a stockpile of antivirals has advantages as part of an overall strategy to control a flu epidemic.
However, there are limitations to the use of antivirals: Tamiflu® needs to be taken within 2 days of initial flu symptoms for it to be effective, but many people may not be aware that they have the flu early in the disease. Some research in animals and recent experience in the use of the drug to treat human cases have also found that Tamiflu® may be less effective against recent strains of the H5N1 virus than against the 1997 strain. Poor compliance with antivirals during an outbreak may result in the emergence of a drug-resistant strain. Lastly, there are current concerns about the safety of Tamiflu®, which has been associated with increased psychiatric symptoms among Japanese adolescents. Healthcare System Readiness Every country's healthcare system would be stretched to the limit in the event of a global pandemic of bird flu. The ability of healthcare facilities to maintain strict infection control measures would be challenged. The sudden surge in demand for health manpower and facilities would be acutely felt among healthcare workers, epidemiologists and laboratory technicians. Countries must set up AI pandemic contingency plans and high-level coordinating committees comprising representatives from multiple ministries and agencies. An avian flu that is easily transmissible between humans would spread rapidly all over the world. The economic cost of an avian pandemic to all countries would be phenomenal and, if allowed to last for months, become exponential [28–30]. Early detection and control of an AI pandemic will also require a coordinated international response. Controlling avian flu is for the good of global public health, and all countries have an interest and obligation to do so. Firstly, the response to the influenza threat would need an integrated cross-sector approach, bringing together animal and human health, areas of rural development and agriculture, economics, finance, planning and others.
Partnerships are needed at both international and national levels. Next, there is certainly a priority on curbing the disease "at source" in the agricultural sector, thereby reducing the probability of a human epidemic. International resources are also needed for surveillance of avian influenza outbreaks and human-to-human transmission. It is also important to strike a balance between short-term and long-term measures. Avian flu is becoming endemic in parts of East Asia and will require a long effort to suppress. Meanwhile, a human pandemic may still emerge from a different strain of flu virus. Thus it makes sense for the international community to also undertake broader long-term measures to strengthen the institutional, regulatory and technical capacity of the animal health, human health and other relevant sectors in Asia. While country-level preparedness and leadership is essential for success, it must be backed by global resources. Even though the benefits of containing a pandemic are overwhelming, individual governments may still be daunted by the social, political and economic costs of various policy measures. Richer countries may have to support poorer countries in financial and non-financial means in the fight against a flu pandemic, for the sake of international good. The Global Outbreak Alert & Response Network (GOARN), a technical collaboration of existing institutions and networks that pool human and technical resources for the rapid identification, confirmation and response to disease outbreaks, is one such international body that supports global preparedness against bird flu. However, for such an organization to succeed, open communication and international cooperation are essential. Lastly, there is a critical need to share information rapidly with experts, policymakers and the worldwide community at large.
Honest public communication will be critical, as evidenced by China's initial denial of a local SARS outbreak, which delayed early containment measures. Recently, the Bill & Melinda Gates Foundation, the Pasteur Institute and the Wellcome Trust began planning, with major medical-research funders and other stakeholders, several projects to enhance the research effort and reduce the risks from the threat of pandemic influenza over coming decades. In the next few years, they plan to develop, maintain and disseminate a central inventory of funded research activities that are relevant to human influenza to ensure that stakeholders are well-informed. They will also coordinate road-mapping exercises to identify knowledge gaps to assist funders and researchers in establishing research-funding priorities, with specific focus on vaccines, drug therapies and epidemiology/population science (for example, diagnostics, surveillance, transmission and modelling), in the hope of developing a cohesive health-research agenda for pandemic influenza. In the words of the late Director General of the World Health Organization, Dr Lee Jong Wook, 'it is only a matter of time before an avian flu virus acquires the ability to be transmitted from human to human, sparking the outbreak of human pandemic influenza...we don't know when this will happen but we do know that it will happen'. Factors that suggest that an AI pandemic would be less severe than past influenza pandemics include advances in medicine such as the availability of antiviral medications and vaccines, and international surveillance systems. However, there are also factors that suggest that an avian influenza pandemic could be worse than the 1918 pandemic, such as a more densely populated world, a larger immunocompromised population of elderly and AIDS patients, and faster air travel and interconnections between countries and continents which will accelerate the spread of disease.
Nevertheless, unlike the past, we have prior knowledge of a possible impending pandemic and of how to contain and control it. Preparedness, vigilance and cooperation, at local, national and international levels, are our best weapons against a deadly bird flu pandemic.

Summary of implications for GPs

Currently, the H5N1 avian flu virus is limited to outbreaks among poultry and persons in direct contact with infected poultry. Avian influenza (AI) is endemic in Asia, where birds often live in close proximity to humans. This increases the chance of genetic re-assortment between avian and human influenza viruses, which may produce a mutant strain that is easily transmitted between humans, resulting in a pandemic. Unlike SARS, a person with influenza infection is contagious before the onset of case-defining symptoms. Researchers have shown that carefully orchestrated public health measures could potentially limit the spread of an AI pandemic if implemented soon after the first cases appear. Both national and international strategies are needed: national strategies include source surveillance and control, adequate anti-viral agents and vaccines, and healthcare system readiness; international strategies include early integrated response, curbing disease outbreaks at source, utilization of global resources, continuing research and open communication.

References

- Kilbourne ED: Influenza pandemics of the 20th century. Emerg Infect Dis 2006, 12:9–14.
- Kobasa D, Jones SM, Shinya K, et al.: Aberrant innate immune response in lethal infection of macaques with the 1918 influenza virus. Nature 2007, 445:319–23. doi:10.1038/nature05495
- Brundage JF: Cases and deaths during influenza pandemics in the United States. Am J Prev Med 2006, 31:252–6. doi:10.1016/j.amepre.2006.04.005
- Webster RG, Govorkova EA: H5N1 influenza – continuing evolution and spread.
N Engl J Med 2006, 355:2174–7. doi:10.1056/NEJMp068205
- Reid AH, Fanning TG, Janczewski TA, Taubenberger JK: Characterization of the 1918 "Spanish" influenza virus neuraminidase gene. Proc Natl Acad Sci USA 2000, 97:6785–90. doi:10.1073/pnas.100140097
- Chai LYA: Avian influenza: basic science, potential for mutation, transmission, illness symptomatology and vaccines. In Bird flu: a rising pandemic in Asia and beyond? 1st edition. Edited by Tambyah P, Leung PC. Singapore: World Scientific Publishing; 2006:1–13.
- Normile D: Epidemiology: Indonesia taps village wisdom to fight bird flu. Science 2007, 315:50. doi:10.1126/science.1136529
- Abikusno N: Bird flu in Indonesia. In Bird flu: a rising pandemic in Asia and beyond? 1st edition. Edited by Tambyah P, Leung PC. Singapore: World Scientific Publishing; 2006:85–97.
- Food and Agriculture Organisation (FAO): Latest HPAI cumulative maps (24 Jul 06 – 24 Jan 07). [http://www.fao.org/ag/againfo/programmes/en/empres/maps.html]
- H5N1 outbreaks in 2005 and major flyways of migratory birds. United Nations Food and Agriculture Organisation. [http://www.fao.org/ag/againfo/subjects/en/health/diseases-cards/migrationmap.html]
- Normile D, Enserink M: Avian influenza: with change in the seasons, bird flu returns. Science 2007, 315:448. doi:10.1126/science.315.5811.448
- World Health Organisation: Avian influenza ("bird flu") and the significance of its transmission to humans. [http://www.who.int/mediacentre/factsheets/avian_influenza/en/print.html]
- Butler D: Pandemic 'dry run' is cause for concern. Nature 2006, 441:554–5. doi:10.1038/441554a
- World Health Organization: Summary of probable SARS cases with onset of illness from 1 November 2002 to 31 July 2003.
[http://www.who.int/csr/sars/country/table2004_04_21/en/index.html]
- Anderson RM, Fraser C, Ghani AC, et al.: Epidemiology, transmission dynamics and control of SARS: the 2002–2003 epidemic. Philos Trans R Soc Lond B Biol Sci 2004, 359:1091–105. doi:10.1098/rstb.2004.1490
- Chowell G, Ammon CE, Hengartner NW, Hyman JM: Transmission dynamics of the great influenza pandemic of 1918 in Geneva, Switzerland: assessing the effects of hypothetical interventions. J Theor Biol 2006, 241:193–204. doi:10.1016/j.jtbi.2005.11.026
- Ferguson NM, Cummings DAT, Cauchemez S, et al.: Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature 2005, 437:209–14. doi:10.1038/nature04017
- Longini IM, Nizam A, Xu S, et al.: Containing pandemic influenza at the source. Science 2005, 309:1083–7. doi:10.1126/science.1115717
- US Congressional Budget Office: A potential influenza pandemic: possible macroeconomic effects and policy issues. [http://www.cbo.gov/showdoc.cfm?index=6946&sequence=0]
- Morse SS, Garwin RL, Olsiewski PJ: Next flu pandemic: what to do until the vaccine arrives? Science 2006, 314:929. doi:10.1126/science.1135823
- Sencer DJ, Millar JD: Reflections on the 1976 swine flu vaccination program. Emerg Infect Dis 2006, 12:29–33.
- Yamada T, Dautry A, Walport M: Ready for avian flu? Nature 2008, 454:162. doi:10.1038/454162a
- Leroux-Roels I, Borkowski A, Vanwolleghem T, Dramé M, Clement F, Hons E, Devaster JM, Leroux-Roels G: Antigen sparing and cross-reactive immunity with an adjuvanted rH5N1 prototype pandemic influenza vaccine: a randomised controlled trial. Lancet 2007, 370:580–9.
doi:10.1016/S0140-6736(07)61297-5
- Stephenson I, Bugarini R, Nicholson KG, Podda A, Wood JM, Zambon MC, Katz JM: Cross-reactivity to highly pathogenic avian influenza H5N1 viruses after vaccination with nonadjuvanted and MF59-adjuvanted influenza A/Duck/Singapore/97 (H5N3) vaccine: a potential priming strategy. J Infect Dis 2005, 191:1210–5. doi:10.1086/428948
- Schünemann HJ, Hill SR, Kakad M, et al.: WHO Rapid Advice Guidelines for pharmacological management of sporadic human infection with avian influenza (H5N1) virus. Lancet Infect Dis 2007, 7:21–31. doi:10.1016/S1473-3099(06)70684-3
- Beigel JH, Farrar J, Han AM, et al. (The Writing Committee of the World Health Organization (WHO) Consultation on Human Influenza A/H5): Avian influenza A (H5N1) infection in humans. N Engl J Med 2005, 353:373–85.
- Fuyuno I: Tamiflu side effects come under scrutiny. Nature 2007, 446:358–9. doi:10.1038/446358a
- Bloom E, de Wit V, Carangal-San Jose MJ: Economics and Research Department Policy Brief: Potential economic impact of an avian flu pandemic on Asia. 2005. [http://www.asia-studies.com/policybrief.html]
- Smith S: The economic and social impacts of avian influenza. [http://www.avianinfluenza.org/economic-social-impacts-avian-influenza.php]
- Koh GCH, Koh DSQ: The socioeconomic effects of an avian influenza pandemic. In Bird flu: a rising pandemic in Asia and beyond? 1st edition. Edited by Tambyah P, Leung PC. Singapore: World Scientific Publishing; 2006:127–46.
- Lee JW: Opening remarks at the meeting on avian influenza and pandemic human influenza. [http://www.who.int/dg/lee/speeches/2005/flupandemicgeneva/en/print.html]

This article is published under license to BioMed Central Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Treaty of Paris (1814)

DEFINITIVE TREATY OF PEACE,
Concluded at Paris on the 30th day of May, 1814.

In the name of the most Holy and Undivided Trinity.

His majesty the King of the united Kingdom of Great Britain and Ireland, and his allies, on the one part; and his majesty the King of France and of Navarre, on the other part; animated by an equal desire to terminate the long agitations of Europe, and the sufferings of mankind, by a permanent peace, founded upon a just repartition of force between its states, and containing in its stipulations the pledge of its durability, and his Britannic majesty, together with his allies, being unwilling to require of France, now that, replaced under the paternal government of her kings, she offers the assurance of security and stability to Europe, the conditions and guarantees which they had with regret demanded from her former government, their said majesties have named plenipotentiaries to discuss, settle, and sign a treaty of peace and amity; namely, His majesty the King of the united Kingdom of Great Britain and Ireland, the Right Honourable Robert Stewart, Viscount Castlereagh, his principal secretary of state for foreign affairs, &c. &c. &c.; the Right Honourable George Gordon, Earl of Aberdeen, his ambassador extraordinary and plenipotentiary to his imperial and royal apostolic majesty; the Right Honourable William Shaw Cathcart, Viscount Cathcart, his ambassador extraordinary and plenipotentiary to his majesty the Emperor of all the Russias; and the Honourable Sir Charles William Stewart, his envoy extraordinary and minister plenipotentiary to his majesty the King of Prussia; and his majesty the King of France and Navarre, Charles Maurice de Talleyrand Perigord, Prince of Benevente, his said majesty's minister and secretary of state for foreign affairs; who, having exchanged their full powers, found in good and due form, have agreed upon the following articles:

Art.
I.—There shall be from this day forward perpetual peace and friendship between his Britannic majesty and his allies on the one part, and his majesty the King of France and Navarre on the other, their heirs and successors, their dominions and subjects respectively. The high contracting parties shall devote their best attention to maintain, not only between themselves, but, inasmuch as depends upon them, between all the states of Europe, that harmony and good understanding which are so necessary for their tranquillity.

II.—The kingdom of France retains its limits entire, as they existed on the 1st of January, 1792. It shall farther receive the increase of territory comprised within the line established by the following article:

III.—On the side of Belgium, Germany, and Italy, the ancient frontiers shall be re-established as they existed on the 1st of January, 1792, extending from the North Sea, between Dunkirk and Nieuport, to the Mediterranean, between Cagnes and Nice, with the following modifications:—

1.—In the department of Jemappes, the cantons of Dour, Merbes-le-Chateau, Beaumont, and Chimay, shall belong to France; where the line of demarkation comes in contact with the canton of Dour, it shall pass between that canton and those of Buseau and Paturage, and likewise further on it shall pass between the canton of Merbes-le-Chateau, and those of Binch and Thuin.

2.—In the department of the Sambre and Meuse, the cantons of Walcourt, Florennes, Beauraing, and Gedinne, shall belong to France; where the demarkation reaches that department, it shall follow the line which separates the cantons from the department of Jemappes, and from the remaining cantons of the department of Sambre and Meuse.
3.—In the department of the Moselle, the new demarkation, at the point where it diverges from the old line of frontier, shall be formed by a line to be drawn from Perle to Fremersdorff, and by the limit which separates the canton of Tholey from the remaining cantons of the said department of the Moselle.

4.—In the department of La Sarre, the cantons of Saarbruck and Arneval shall continue to belong to France, as likewise the portion of the canton of Lebach which is situated to the south of a line drawn along the confines of the villages of Herchenbach, Ueberhofen, Hilsbach, and Hall, (leaving these different places out of the French frontier,) to the point where, in the neighbourhood of Querselle, (which place belongs to France,) the line which separates the cantons of Arneval and Ottweller reaches that which separates the cantons of Arneval and Lebach. The frontier on this side shall be formed by the line above described, and afterwards by that which separates the canton of Arneval from that of Bliescastel.

5.—The fortress of Landau having, before the year 1792, formed an insulated point in Germany, France retains beyond her frontiers a portion of the departments of Mount Tonnerre and of the Lower Rhine, for the purpose of uniting the said fortress and its radius to the rest of the kingdom. The new demarkation, from the point in the neighbourhood of Obersteinbach (which place is left out of the limits of France) where the boundary between the department of the Moselle and that of Mount Tonnerre reaches the department of the Lower Rhine, shall follow the line which separates the cantons of Weissenbourg and Bergzabern (on the side of France) from the cantons of Pirmasens, Dahn, and Annweiler, (on the side of Germany,) as far as the point near the village of Volmersheim where that line touches the ancient radius of the fortress of Landau.
From this radius, which remains as it was in 1792, the new frontier shall follow the arm of the river de la Queich, which on leaving the said radius at Queichheim (that place remaining to France) flows near the villages of Merlenheim, Knittelsheim, and Belheim (these places also belonging to France) to the Rhine, which from thence shall continue to form the boundary of France and Germany. The mainstream (Thalweg) of the Rhine shall constitute the frontier; provided, however, that the changes which may hereafter take place in the course of that river shall not affect the property of the islands. The right of possession in these islands shall be re-established as it existed at the signature of the treaty of Luneville.

6.—In the department of the Doubs, the frontier shall be so regulated as to commence above the Ranconniére, near Locle, and follow the crest of Jura between the Cerneux, Pequignot, and the village of Fontenelles, as far as the peak of that mountain, situated about seven or eight thousand feet to the north-west of the village of La Brevine, where it shall again fall in with the ancient boundary of France.

7.—In the department of the Leman, the frontiers between the French territory, the Pays de Vaud, and the different portions of the territory of the republic of Geneva (which is to form part of Switzerland), remain as they were before the incorporation of Geneva with France. But the cantons of Frangy and of St. Julien (with the exception of the districts situated to the north of a line drawn from the point where the river La Loire enters the territory of Geneva, near Chancy, following the confines of Sesequin, Laconex, and Seseneuve, which shall remain out of the limits of France), the canton of Reignier (with the exception of the portion to the east of a line which follows the confines of the Muras, Bussy, Pers, and Cornier, which shall be out of the French limits), and the canton of La Roche (with the exception of the places La Roche and Armanoy with their districts), shall remain to France. The frontier shall follow the limits of these different cantons, and the line which separates the districts continuing to belong to France from those which she does not retain.

In the department of Montblanc, France acquires the sub-prefecture of Chambery, with the exception of the cantons of L'Hospitol, St. Pierre d'Albigny, la Rocette, and Montmelian, and the sub-prefecture of Annecy, with the exception of the portion of the canton of Faverges situated to the east of a line passing between Ourechaise and Marlens on the side of France, and Marthod and Ugine on the opposite side, and which afterwards follows the crest of the mountains as far as the frontier of the canton of Thones; this line, together with the limit of the cantons before mentioned, shall on this side form the new frontier.

On the side of the Pyrenees, the frontiers between the two kingdoms of France and Spain remain such as they were on the 1st of January, 1792, and a joint commission shall be named on the part of the two crowns for the purpose of finally determining the line.

France on her part renounces all right of sovereignty (suzeraineté) and of possession over all the countries, districts, towns, and places, situated beyond the frontier described, the principality of Monaco being replaced on the same footing on which it stood before the 1st of January, 1792.
The allied powers assure to France the possession of the principality of Avignon, of the Comtat Venaissin, of the Comté of Montbeilliard, together with the several insulated territories which formerly belonged to Germany, comprehended within the frontier above described, whether they have been incorporated with France before or after the 1st of January, 1792. The powers reserve to themselves, reciprocally, the complete right to fortify any point in their respective states which they may judge necessary for their security.

To prevent all injury to property, and protect, according to the most liberal principles, the property of individuals domiciliated on the frontiers, there shall be named, by each of the states bordering on France, commissioners, who shall proceed, conjointly with French commissioners, to the delineation of the respective boundaries.

IV.—To secure the communications of the town of Geneva with other parts of the Swiss territory situated on the lake, France consents that the road by Versoy shall be common to the two countries. The respective governments shall amicably arrange the means for preventing smuggling, regulating the posts, and maintaining the said road.

V.—The navigation of the Rhine, from the point where it becomes navigable unto the sea, and vice versa, shall be free, so that it can be interdicted to no one; and at the future congress attention shall be paid to the establishment of the principles according to which the duties to be raised by the states bordering on the Rhine may be regulated, in the mode the most impartial, and the most favourable to the commerce of all nations. The future congress, with a view to facilitate the communications between nations, and continually to render them less strangers to each other, shall likewise examine and determine in what manner the above provision can be extended to other rivers, which, in their navigable course, separate or traverse different states.
VI.—Holland, placed under the sovereignty of the house of Orange, shall receive an increase of territory. The title and exercise of that sovereignty shall not in any case belong to a prince wearing, or destined to wear, a foreign crown. The states of Germany shall be independent, and united by a federative bond. Switzerland, independent, shall continue to govern herself. Italy, beyond the limits of the countries which are to revert to Austria, shall be composed of sovereign states.

VII.—The island of Malta and its dependencies shall belong in full right and sovereignty to his Britannic majesty.

VIII.—His Britannic majesty, stipulating for himself and his allies, engages to restore to his most Christian majesty, within the term which shall be hereafter fixed, the colonies, fisheries, factories, and establishments of every kind, which were possessed by France on the 1st of January, 1792, in the seas and on the continents of America, Africa, and Asia, with the exception however of the islands of Tobago and St. Lucie, and of the Isle of France and its dependencies, especially Rodrigues and Les Sechelles, which several colonies and possessions his most Christian majesty cedes in full right and sovereignty to his Britannic majesty, and also the portion of St. Domingo ceded to France by the treaty of Basle, and which his most Christian majesty restores in full right and sovereignty to his Catholic majesty.

IX.—His majesty the King of Sweden and Norway, in virtue of the arrangements stipulated with the allies, and in execution of the preceding article, consents that the island of Guadaloupe be restored to his most Christian majesty, and gives up all the rights he may have acquired over that island.

X.—Her most faithful majesty, in virtue of the arrangements stipulated with her allies, and in execution of the 8th article, engages to restore French Guyana, as it existed on the 1st of January, 1792, to his most Christian majesty, within the term hereafter fixed.
The renewal of the dispute which existed at that period on the subject of the frontier being the effect of this stipulation, it is agreed that the dispute shall be terminated by a friendly arrangement between the two courts, under the mediation of his Britannic majesty.

XI.—The places and forts in those colonies and settlements, which, by virtue of the 8th, 9th, and 10th articles, are to be restored to his most Christian majesty, shall be given up in the state in which they may be at the moment of the signature of the present treaty.

XII.—His Britannic majesty guarantees to the subjects of his most Christian majesty the same facilities, privileges, and protection, with respect to commerce, and the security of their persons and property within the limits of the British sovereignty on the continent of India, as are now or shall be granted to the most favoured nations. His most Christian majesty, on his part, having nothing more at heart than the perpetual duration of peace between the two crowns of England and of France, and wishing to do his utmost to avoid any thing which might affect their mutual good understanding, engages not to erect any fortifications in the establishments which are to be restored to him within the limits of the British sovereignty upon the continent of India, and only to place in those establishments the number of troops necessary for the maintenance of the police.

XIII.—The French right of fishery upon the bank of Newfoundland, upon the coasts of the island of that name, and of the adjacent islands in the Gulf of St. Lawrence, shall be replaced upon the footing in which it stood in 1792.

XIV.—Those colonies, factories, and establishments which are to be restored to his most Christian majesty by his Britannic majesty or his allies beyond the Cape of Good Hope shall be given up within the six months which follow the ratification of the present treaty.
XV.—The high contracting parties having, by the 4th article of the convention of the 23d of April last, reserved to themselves the right of disposing, in the present definitive treaty of peace, of the arsenals and ships of war, armed and unarmed, which may be found in the maritime places restored by the second article of the said convention; it is agreed, that the said vessels and ships of war, armed and unarmed, together with the naval ordnance and naval stores, and all materials for building and equipment, shall be divided between France and the countries where the said places are situated, in the proportion of two-thirds for France, and one-third for the power to whom the said places shall belong. The ships and vessels on the stocks, which shall not be launched within six weeks after the signature of the present treaty, shall be considered as materials, and after being broken up, shall be, as such, divided in the same proportions.

Commissioners shall be named on both sides to settle the division and draw up a statement of the same, and passports or safe conducts shall be granted by the allied powers for the purpose of securing the return into France of the workmen, seamen, and others in the employment of France. The vessels and arsenals existing in the maritime places which were already in the power of the allies before the 23d of April, and the vessels and arsenals which belonged to Holland, and especially the fleet in the Texel, are not comprised in the above stipulations. The French government engages to withdraw, or cause to be sold, every thing which shall belong to it by the above stipulations, within the space of three months after the division shall have been carried into effect. Antwerp shall for the future be solely a commercial port.
XVI.—The high contracting parties, desirous to bury in entire oblivion the dissensions which have agitated Europe, declare and promise, that no individual, of whatever rank or condition he may be, in the countries restored and ceded by the present treaty, shall be prosecuted, disturbed, or molested, in his person or property, under any pretext whatsoever, either on account of his conduct or political opinions, his attachment either to any of the contracting parties, or to any government which has ceased to exist, or for any other reason, except for debts contracted towards individuals, or acts posterior to the date of the present treaty.

XVII.—The native inhabitants and aliens, of whatever nation or condition they may be, in those countries which are to change sovereigns, as well in virtue of the present treaty as of the subsequent arrangements to which it may give rise, shall be allowed a period of six years, reckoning from the exchange of the ratifications, for the purpose of disposing of their property, if they think fit, whether it be acquired before or during the present war, and retiring to whatever country they may choose.

XVIII.—The allied powers, desiring to offer his most Christian majesty a new proof of their anxiety to arrest, as far as in them lies, the bad consequences of the disastrous epoch terminated by the present peace, renounce all the sums which their governments claim from France, whether on account of contracts, supplies, or any other advances whatsoever to the French governments, during the different wars that have taken place since 1792. His most Christian majesty, on his part, renounces every claim which he might bring forward against the allied powers on the same grounds. In execution of this article, the high contracting parties engage reciprocally to deliver up all titles, obligations, and documents, which relate to the debts they may have mutually cancelled.
XIX.—The French government engages to liquidate and pay all debts it may be found to owe in countries beyond its own territory, on account of contracts, or other formal engagements between individuals, or private establishments, and the French authorities, as well for supplies, as in satisfaction of legal engagements.

XX.—The high contracting parties, immediately after the exchange of the ratifications of the present treaty, shall name commissioners to direct and superintend the execution of the whole of the stipulations contained in the 18th and 19th articles. These commissioners shall undertake the examination of the claims referred to in the preceding article, the liquidation of the sums claimed, and the consideration of the manner in which the French government may propose to pay them. They shall also be charged with the delivery of the titles, bonds, and the documents relating to the debts which the high contracting parties mutually cancel, so that the approval of the result of their labours shall complete that reciprocal renunciation.

XXI.—The debts which in their origin were specially mortgaged upon the countries no longer belonging to France, or were contracted for the support of their internal administration, shall remain at the charge of the said countries. Such of those debts as have been converted into inscriptions in the great book of the public debt of France, shall accordingly be accounted for with the French government after the 22d of December, 1813. The deeds of all those debts which have been prepared for inscription, and have not yet been entered, shall be delivered to the governments of the respective countries. The statement of all these debts shall be drawn up and settled by a joint commission.

XXII.—The French government shall remain charged with the reimbursement of all sums paid by the subjects of the said countries into the French coffers, whether under the denomination of surety, deposit, or consignment.
In like manner, all French subjects employed in the service of the said countries, who have paid sums under the denomination of surety, deposit, or consignment, into their respective coffers, shall be faithfully reimbursed.

XXIII.—The functionaries holding situations requiring securities, who are not charged with the expenditure of public money, shall be reimbursed at Paris, with the interest, by fifths and by the year, dated from the signature of the present treaty. With respect to those who are unaccountable, this reimbursement shall commence, at the latest, six months after the presentation of their accounts, except only in cases of malversation. A copy of the last account shall be transmitted to the government of their countries, to serve for their information and guidance.

XXIV.—The judicial deposits and consignments upon the "caisse d'amortissement," in the execution of the law of 28 Nivose, year 13, (18th of January, 1805,) and which belong to the inhabitants of the countries which France ceases to possess, shall, within the space of one year from the exchange of the ratifications of the present treaty, be placed in the hands of the authorities of the said countries, with the exception of those deposits and consignments interesting French subjects, which last will remain in the "caisse d'amortissement," and will be given up only on the production of the vouchers, resulting from the decisions of competent authorities.

XXV.—The funds deposited by the corporations and public establishments in the "caisse de service" and in the "caisse d'amortissement," or other "caisse" of the French government, shall be reimbursed by fifths, payable from year to year, to commence from the date of the present treaty; deducting the advances which have taken place, and subject to such regular charges as may have been brought forward against these funds by the creditors of the said corporations, and the public establishments.
XXVI.—From the 1st day of January, 1814, the French government shall cease to be charged with the payment of pensions, civil, military, and ecclesiastical, pensions for retirement, and allowances for reduction, to any individual who shall cease to be a French subject.

XXVII.—National domains acquired for valuable considerations by French subjects in the late departments of Belgium, and of the left bank of the Rhine, and the Alps, beyond the ancient limits of France, and which now cease to belong to her, shall be guaranteed to the purchasers.

XXVIII.—The abolition of the "droits d'Aubaine," "de Detraction," and other duties of the same nature, in the countries which have reciprocally made that stipulation with France, or which have been formerly incorporated, shall be expressly maintained.

XXIX.—The French government engages to restore all bonds, and other deeds, which may have been seized in the provinces occupied by the French armies or administrations; and in cases where such restitution cannot be effected, these bonds and deeds become and continue void.

XXX.—The sums which shall be due for all works of public utility not yet finished, or finished after the 31st of December, 1812, whether on the Rhine, or in the departments detached from France by the present treaty, shall be placed to the account of the future possessors of the territory, and shall be paid by the commission charged with the liquidation of the debts of that country.

XXXI.—All archives, maps, plans, and documents whatever, belonging to the ceded countries, or respecting their administration, shall be faithfully given up at the same time with the said countries; or if that should be impossible, within a period not exceeding six months after the cession of the countries themselves. This stipulation applies to the archives, maps, and plans, which may have been carried away from the countries during their temporary occupation by the different armies.
XXXII.—All the powers engaged on either side in the present war shall, within the space of two months, send plenipotentiaries to Vienna, for the purpose of regulating, in general congress, the arrangements which are to complete the provisions of the present treaty.

XXXIII.—The present treaty shall be ratified, and the ratifications shall be exchanged within the period of fifteen days, or sooner if possible. In witness whereof, the respective plenipotentiaries have signed and affixed to it the seals of their arms.

Done at Paris, the 30th of May, in the year of our Lord 1814.
- (L. S.) LE PRINCE DE BENEVENT.
- (L. S.) CASTLEREAGH.
- (L. S.) ABERDEEN.
- (L. S.) CATHCART.
- (L. S.) CHARLES STEWART, Lieut.-general.

I. — His most Christian majesty, concurring without reserve in the sentiments of his Britannic majesty, with respect to a description of traffic repugnant to the principles of natural justice and of the enlightened age in which we live, engages to unite all his efforts to those of his Britannic majesty, at the approaching congress, to induce all the powers of Christendom to decree the abolition of the slave-trade, so that the said trade shall cease universally, as it shall cease definitively, under any circumstances, on the part of the French government, in the course of five years; and that during the said period, no slave merchant shall import or sell slaves, except in the colonies of the state of which he is a subject.

II. — The British and French government shall name, without delay, commissioners to liquidate the accounts of their respective expenses for the maintenance of prisoners of war, in order to determine the manner of paying the balance which shall appear in favour of the one or the other of the two powers.

III. — The respective prisoners of war, before their departure from the place of their detention, shall be obliged to discharge the private debts they may have contracted, or shall at least give sufficient security for the amount.

IV.
— Immediately after the ratification of the present treaty of peace, the sequesters which since the year 1792 (one thousand seven hundred and ninety-two) may have been laid on the funds, revenues, debts, or any other effects of the high contracting parties or their subjects, shall be taken off. The commissioners mentioned in the 2d article shall undertake the examination of the claims of his Britannic majesty's subjects upon the French government, for the value of the property, moveable or immoveable, illegally confiscated by the French authorities, as also for the total or partial loss of their debts or other property, illegally detained under sequester since the year 1792 (one thousand seven hundred and ninety-two). France engages to act towards British subjects in this respect, in the same spirit of justice which the French subjects have experienced in Great Britain: and his Britannic majesty, desiring to concur in the new pledge which the allied powers have given to his most Christian majesty, of their desire to obliterate every trace of that disastrous epoch, so happily terminated by the present peace, engages on his part, when complete justice shall be rendered to his subjects, to renounce the whole amount of the balance which shall appear in his favour for the support of prisoners of war, so that the ratification of the report of the above commissioners, and the discharge of the sums due to British subjects, as well as the restitution of the effects which shall be proved to belong to them, shall complete the renunciation.

V. — The two high contracting parties, desiring to establish the most friendly relations between their respective subjects, reserve to themselves, and promise to come to a mutual understanding and arrangement, as soon as possible, upon their commercial interests, with the view of encouraging and increasing the prosperity of their respective states.
The present [additional] articles shall have the same force and validity as if they were inserted word for word in the treaty patent of this day. They shall be ratified, and the ratifications shall be exchanged at the same time.

Done at Paris, the 30th of May, in the year of our Lord 1814.
- (L. S.) LE PRINCE DE BENEVENT.
- (L. S.) CASTLEREAGH.
- (L. S.) ABERDEEN.
- (L. S.) CATHCART.
- (L. S.) CHARLES STEWART, Lieut.-general.

At the same time the same definitive treaty of peace was concluded between France and Austria, Russia, and Prussia, respectively; and signed on the part of the former by the Prince of Benevente, for Austria by Prince Metternich and Count Stadion, for Russia by Count Rasumoffsky and Count Nesselrode, and for Prussia by Baron Hardenburg and Baron Humboldt; with the following additional articles: —

Additional article to the treaty with Austria

The high contracting parties, wishing to efface all traces of the unfortunate events which have oppressed their people, have agreed to annul explicitly the effects of the treaties of 1805 and 1809, as far as they are not already annulled by the present treaty. In consequence of this determination, his most Christian majesty promises, that the decrees passed against French subjects, or reputed French subjects, being or having been in the service of his imperial, royal, and apostolic majesty, shall remain without effect; and also the judgments which may have been given in execution of these decrees.

Additional article to the treaty with Russia

The duchy of Warsaw, being under the administration of a provisional council established in Russia, since that country has been occupied by her armies, the two high contracting parties have agreed to appoint immediately a special commission, composed of an equal number of members on either side, who shall be charged with the examination, liquidation, and all the arrangements relative to the reciprocal claims.
Additional article to the treaty with Prussia

Though the treaty of peace concluded at Basle, on the 8th of April, 1795; that of Tilsit, on the 9th of July, 1807; the convention of Paris, of the 20th of September, 1808; as well as all the conventions and acts whatsoever, concluded since the peace of Basle between Prussia and France, are already virtually annulled by the present treaty, the high contracting powers have nevertheless thought fit to declare expressly that the treaties cease to be obligatory for all their articles, both patent and secret, and that they mutually renounce all right, and release themselves from all obligation, which might result from them. His most Christian majesty promises that the decrees issued against French subjects, or reputed Frenchmen, being, or having been in the service of his Prussian majesty, shall be of no effect, as well as the judgments which may have been passed in execution of those decrees.

- Edward Baines, William Grimshaw. History of the Wars of the French Revolution from the breaking out of the war in 1792 to the restoration of general peace in 1815.... Bangs, 1855. pp. 342-347
- Edward Hertslet (1875). The Map of Europe by Treaty; showing the various political and territorial changes which have taken place since the general peace of 1814, London, Butterworths. pp. 2-28
- Arthur Wellesley Wellington (2nd Duke), Henry Frederick Ponsonby. Supplementary Despatches and Memoranda of Field Marshal Arthur, Duke of Wellington, K.G., J. Murray, 1862. pp. 120-131
- William Cobbett, History of the Regency and Reign of King George the Fourth, s.n., 1830. pp. 249-260
- T. F. Jefferies, The Gentleman's Magazine, Volume 84 Part 1, 1814. pp. 634-640
- The Supplementary Despatches... version includes an additional paragraph: "As soon as the Commissioners shall have performed their task, maps shall be drawn, signed by the respective Commissioners, and posts shall be placed to point out the reciprocal boundaries."
- The Supplementary Despatches... version includes the word "additional"
From Fourth International, vol.6 No.2, February 1945, pp.36-41. Transcribed, marked up & formatted by Ted Crawford & David Walters in 2008 for ETOL. Greece is undoubtedly among the most backward and poorest countries of Europe. For over a century it has been condemned to the status of a semi-colony of the major European Powers. Foreign kings have been imposed on the Greek people and have exercised their oppressive rule for the benefit of the foreign bankers and the small clique of Greek capitalists and landowners. The Greek people have been ground down under a terrible weight of poverty. The per capita income of the average Greek is 17% that of the average British income. The wealth of the country has been skimmed off by the western bankers and the Greek capitalists. Little remained for the masses. But despite the economic backwardness and extreme poverty, Greece gave birth, as the present civil war testifies, to one of the most dynamic and revolutionary working classes of Europe. The Greek workers, deeply courageous and self-sacrificing, stepped forward, after the last war, as the leader, the only possible leader of the masses in their struggle for progress and emancipation. The revolutionary movement is developing in Greece with such vigor that it can be safely predicted that, regardless of what difficulties and setbacks may be in store, Greece is destined to play an heroic part in the great European revolution, in the struggles of the European peoples for their emancipation. The history of modern Greece as an independent state dates back less than 120 years. Under the inspiration of the great French revolution, a wave of nationalism swept over Europe at the start of the 19th century. Beginning with the Serb revolt in 1804, national revolution blazed for a century in the Balkans, finally sweeping Turkey back to the western defenses of Constantinople in 1913.
The Greeks, who preserved their national consciousness and culture for over 800 years under Turkish rule, raised the banner of revolt against the Ottoman empire in 1821. The Greek War of Independence, which dragged on for over eight years, evoked the greatest enthusiasm and won the wholehearted support of revolutionists and liberals throughout Europe. England, France and Russia, anxious to bring the revolutionary war to a close, finally came to an agreement with the Sultan in 1829 to recognize a small independent Greece, a fraction of present-day Greece, with a population of no more than 600,000. The new tiny Greek state was certainly launched in an inauspicious manner. The vast majority of Greeks still lived outside its borders. The financial situation was desperate. Greece already owed the sum of $15,000,000 to the British banks. The financial debt was further increased by the expenses of the long war with Turkey. Another loan had to be floated in 1833 to set the country on its feet. The oppressive taxes leveled on the peasantry by the new government drove many to take to the hills. Brigandage, which has a long history throughout the Balkans, once more took on serious proportions. The three “Protecting Powers” who had underwritten the new state immediately began hunting around for a suitable king for the country. They first offered the crown to Prince Leopold of Saxe-Coburg, who later became King of the Belgians. But he declined. The Allied diplomats finally settled on Prince Otho of Bavaria, 17 years old when he ascended the newly-created Greek throne. Of course, the Greeks had not fought for eight years a bloody costly war to exchange the Turkish Sultan for a 17-year old Bavarian Prince. The three “Protecting Powers” assured the Greeks, however, that a constitution would be promulgated. This promise, like so many others, was never kept. The National Assembly, which was supposed to draw up the constitution, was never summoned. 
The country continued to be ruled as a royal dictatorship by a Regency of 3 Bavarians. The Greek people were bitterly disappointed that their overthrow of the Turkish oppressors had brought them not freedom but the dictatorial rule of Bavarian princes, acting as clerks for the British, French and Russian ruling classes. In 1843, a new revolt spread over Greece and forced King Otho to call the National Assembly and promulgate a Constitution. This too remained largely a dead letter and 20 years later in 1862, a popular revolution forced the King off the throne. Otho abdicated and left Greece on a British warship. The three “Protecting Powers” promptly set to work to find a new king for the Greeks. Their choice finally fell on Prince William George of Denmark, also 17 years of age. As continued financial support to Greece depended upon acceptance of the Monarch, the Greek National Assembly approved the decision. To soften the blow to the Greek masses, who had just staged an anti-monarchist revolution, the British Government announced that along with the King they would cede to Greece the Ionian Islands, and the three “Protecting Powers” likewise undertook to remit $20,000 a year from the interest of the loan of 1833, which sum, however, was to be added to the King’s Civil List. Now that the new king was safely installed, the British bankers floated a new loan for Greece. To underline the country’s utter subservience to the Powers, the Treaty of 1864 expressly laid down that any one of the three Powers might send troops into Greek territory with the consent of the other two signatories. The consent of Greece was not necessary. Here was the balance sheet of thirty years of Greek Independence: the Greek nation encompassed no more than a fraction of the Greek people and it was hopelessly bankrupt and mortgaged to the British bankers. In truth, its independence was largely fictitious.
It was in reality a semi-colony of Britain, France and Russia, forced to tolerate the rule of a foreign prince imposed upon it by its bond-holding “liberators” or as they dubbed themselves in those days, the “Protecting Powers.” The history of Greece epitomizes the fate of all the Balkan peoples as indeed of all small nations – the impossibility for small nations to achieve under capitalism real independence, as distinguished from formal political independence. Greece, like the other Balkan nations, was caught in the web of the struggle for Empire on the part of the major Powers. England and France, fearful of Russian expansion toward the Mediterranean, fought Russia in the Crimean war to prolong the existence of the Turkish Empire, and thus perpetuate Turkish oppression of the nations in the Near East. It was the studied diplomatic policy of England and France that the Turkish Empire had to be preserved for the maintenance of “stability” and the proper “balance of power” in Eastern Europe. Czarist Russia, the “prison-house of peoples,” despite its territorial ambitions, likewise feared and betrayed the national revolutionary movements in the Balkans. Thus, for over half a century, the Powers thwarted all attempts on the part of the Greek people in Crete, Thessaly, Epirus, the Aegean islands etc. to unite with the mother country. Again and again they dispatched their fleets to prevent secessions from the Turkish Empire. This century-old conspiracy of the major Powers to prevent the small nationalities of Eastern Europe from attaining national independence; to artificially prop up the Turkish Empire, “the sick man of Europe”; to play off the Balkan countries one against the other, the better to keep them subservient, has gone down in western diplomacy under the euphonious title of the “Eastern Question.” By the eighties, a new factor had entered Greek politics: the emergence of a capitalist class becoming richer and more powerful than the landowners.
Trikoupis, Greece’s first great capitalist statesman, came to power in 1882. Greece experienced a brief period of capitalist expansion, a pale reflection of the enormous progress of capitalism in western Europe. With the aid of British capital, the railway system was extended, the Corinth Canal was opened, new public works were begun. By 1893, the bubble had already burst. A devastating economic crisis swept Greece, resulting in the first large scale emigration to the United States. Four years later, the revolution in Crete against Turkey and for unification with Greece brought on Greece’s war with Turkey. For thirty years, Crete had been fighting to reunite with Greece but had always been thwarted by the “Powers.” The 1896 revolution in Crete produced a wave of nationalism in Greece; Greek troops were dispatched to the island and Greece was soon at war with Turkey. Greece suffered disastrous defeat. Turkish troops occupied Thessaly for a year. Greece lost its strategical positions along its northern frontier and was forced to pay the huge indemnity of $20,000,000. The Turkish war made complete its vassalage to the European bankers. From 1833 to 1862 Greece was barely able to pay back short-term loans and to meet the interest on its indebtedness contracted during the War of Independence and in 1833. From 1862 to 1893 the effort to meet interest due the foreign bond-holders together with the annual budget deficits led to complete bankruptcy. Greece was no longer able to meet the interest payments and set aside the amounts called for to pay off the principal. The disastrous war of 1897 finished off the process. The European bond-holders declared that the payment of the Turkish indemnity could not take priority over their bond payments nor would they grant another loan unless the three “Protecting Powers” guaranteed it. This time, in guaranteeing the new loan, the “Protecting Powers” stripped Greece of its sovereign powers.
An International Finance Commission virtually took charge of Greek finances and guaranteed payment of the war indemnity and interest on the National Debt. Crete, whose national revolution led to the Graeco-Turkish war, was put under international control, with the island divided into British, French, Russian and Italian spheres. Greece’s humiliation was complete. Ten years later, the Greek capitalists made an heroic effort to convert Greece into a modern capitalist state. The emergence of a strong bourgeois class in the Near East and the growing rivalry and conflict of the Western imperialists brought to a climax the century-old struggles of the Balkan peoples. In 1908, the Turkish Committee of Union and Progress (Young Turk Movement) composed of the secondary army officers and supported by the Turkish bourgeoisie issued a Pronunciamento and forced the establishment of Constitutional government in Turkey. The rise of Turkish nationalism gave birth to a new oppression of the Greeks and Armenians in Turkey. Economic boycotts were organized against Greek merchants and ship-owners, some of the wealthiest of whom resided in Constantinople, Smyrna and the interior of Asia Minor. The Greek capitalist class, both of Greece and Turkey, alarmed at this development, embarked on their heroic attempt to reunite Greece and hurl the Turks out of Europe. The following year, 1909, a “Military League”, in imitation of the Young Turk movement, was organized in Greece and under threat of a coup d’etat demanded a Constitutional government of the Greek Monarchy. The court camarilla capitulated. 1910 marks the beginning of Constitutional government in Greece. The Military League called the Cretan national revolutionist, Venizelos, into Greece, to head the government. Venizelos, who dominated Greek politics for the next two decades, became Greece’s capitalist statesman par excellence.
He founded the Liberal Party, the authentic party of Greek capitalism, which now began to rule in its own name. Under Venizelos the government was reorganized from top to bottom along modern capitalist lines. The “spoils system” was abolished, civil service was reformed, agrarian reform was introduced with the division of the feudal estates in Thessaly. Foreign experts were called in to reorganize Greek finances: a British naval mission reorganized the navy, a French military mission reorganized the army. Education was made free, compulsory and universal. A new public works program of road and railway construction was begun. The capitalists, under Venizelos, were striving mightily to create a modern capitalist state. Two years later the Balkan Alliance between Greece, Serbia and Bulgaria was sealed and the three countries hurled their armies against Turkey. The Turkish army was crushed. Then in 1913, Greece in alliance with Serbia fought the second Balkan war against its ex-ally, Bulgaria, for the lion’s share of the spoils and again Greece emerged victorious. Venizelos became a national hero. Greece had grown to a nation of 6,000,000, ten times its original population. Greece now included Crete, most of the Aegean islands, the Epirus, Thessaly and even parts of non-Greek Macedonia. The struggle for Greek unity was almost complete. From 1910 to 1915 Greek foreign commerce increased from 300,000,000 to 500,000,000 drachmae. From 1910 to 1913 the revenues of the Greek government increased by a third. But all this progress was illusory. It did enrich a small clique of Greek bankers, merchants and shipowners. But it only burdened the already impoverished masses with new taxes and finally plunged Greece into more terrible hunger and crisis. The Greek capitalists could not raise the standard of living of the Greek masses. They only deepened the country’s bankruptcy and its subservience to Western Imperialism. 
The Greek and Serbian victories in the two Balkan wars dislocated the “balance of power”, strengthened nationalist aspirations inside the Austro-Hungarian and Russian empires and hastened the outbreak of the World War. Greece was soon occupied by Allied troops. Venizelos, representing the big capitalists, wanted to bring Greece into the war on the Allied side, determined to swim in the sea of imperial intrigues and Big Power conflicts. Just as the Greek capitalists were able to create Greater Greece by means of the two Balkan wars, so now they believed the providential opportunity had arrived to realize their program of Pan-Hellenism, the recreation of a Hellenic empire stretching from Constantinople to the Adriatic. King Constantine and the court camarilla, convinced of Germany’s eventual victory, decided to pursue a more modest course and maintain Grecian neutrality during the War of the Giants. Realizing that Constantine could not be pressured into acquiescence in his plans, Venizelos set up a parallel National Government in Salonika, and proceeded with the help of Greek and Allied bankers to set up a new National Army. By 1917, the “Protecting Powers” gave de facto recognition to Venizelos’ “revolutionary” government and demanded the abdication of King Constantine. They suddenly reminded themselves that the king had violated his oath to rule as a Constitutional Monarch. The Allies designated his son, Prince Alexander, as successor. Venizelos returned to Athens at the head of French Negro troops. His first act was to suspend the Constitution and rule by Emergency Decrees; a cloud of spies and informers descended upon the country; the prisons were filled with “political suspects”; Greece was placed under Martial Law. The capitalists began to rule under a scarcely disguised police dictatorship, the main method of their rule for the ensuing 23 years.
Under the leadership of Venizelos, the Greek capitalists made the fateful gamble to realize their dream of a modern Hellenic Empire. All of Greece was used as a counter in their desperate game. When the Allies signed their Armistice with Germany, the war first began in deadly earnest as far as the Greek masses were concerned. Venizelos sold the Greek army to the British imperialists to prove his “reliability” and “cooperativeness.” He sent 100,000 Greek soldiers into the Ukraine to fight with the forces of General Denikin against the Soviet Government. Then in May 1919 Venizelos, spurred on by Lloyd George, ordered Greek troops to occupy Thrace and Smyrna. The Greek army was soon pressing on to the interior of Asia Minor. Venizelos was pushed forward by the Allies at the San Remo Conference to force Allied terms upon Turkey. In return Greece was promised a further enlargement of territory. The war between Greece and Turkey dragged on. It had already cost $300,000,000 and an enormous number of lives. The newspapermen were remarking cynically that the English at Asia Minor were determined to fight to the last Greek. In 1922, the French imperialists, now in conflict with the British and viewing Greece as simply the tool of British imperialism, armed the Turkish army and enabled it to annihilate the Greek forces. There began the Turkish massacres of the Greek population in Asia Minor and the expulsion of about three-quarters of a million Greeks from Turkey. To prevent any further atrocities, Greece and Turkey arranged by treaty an “exchange” of populations. Greece was utterly ruined. The country had been at war almost uninterruptedly for ten years. It was hopelessly bankrupt. The National Debt had grown to fantastic proportions. The drachma was worthless. The poverty-stricken country of 6 million people was suddenly inundated by the arrival of one and a half million homeless, starving refugees. So ended the great “adventure” of the Greek capitalists.
The Graeco-Turkish war brought to a close the period of Greek irredentism. For a hundred years Greek political life was dominated by the “Great Idea”, the aim of annexing the “unredeemed” Greek lands and establishing a united Greek state. It was for this that the people had permitted themselves to be bled white. Now bourgeois Nationalism had bankrupted itself. The Greek bourgeoisie no longer possessed even a glimmer of a progressive mission. A new factor had entered the arena of Greek politics: the working class. Inspired by the Russian revolution, a very influential Communist movement sprang up in Greece. (The Social Democrats were never a very important force in Greece.) The trade unions began growing very rapidly and came under the influence of the young Communist Party. The old battle cries of Nationalism, Republicanism and Constitutionalism now began giving way before the new problem of Greek politics – the struggle between labor and capital. The bourgeoisie, mortally frightened by the red spectre, began to unite its ranks. The old political lines between Monarchists and Republicans became more and more blurred. Coalition governments composed of both factions became the rule. Whether under the Republican or Monarchist façade, the capitalists would carry through their program and maintain their rule only by dictatorship and bloody terror. No sooner did the working class enter the political stage as an independent force, than the bourgeoisie turned savagely reactionary. The alliance with foreign imperialism became a life and death necessity for the preservation of its rule over the rebellious masses. Bourgeois democracy was a luxury that the Greek capitalists could no longer afford. Ever since 1920, Greece has been in the throes of terrible economic crisis. The trade balance sheet had a standing deficit of at least 50%.
One quarter of the national income was paid out yearly to meet the National Debt; another 20% for the military establishment, another 14% for the upkeep of the governmental bureaucracy. The already high taxes were enormously increased. The cost of living sky-rocketed. The capitalists shifted the full burden of military disasters, foreign loans and the upkeep of a huge military establishment onto the shoulders of the already overburdened and impoverished masses. The Greek masses answered the attempt to drive them down to inhuman levels by militant class action. The Greek working class is relatively small – 400,000 in a country of 7,000,000 people. Greece remains primarily an agricultural country whose peasantry is one of the poorest in all Europe. But even in agricultural Greece, the proletariat quickly stepped forward as the leader of the peasantry and the oppressed masses as a whole. The trade unions embraced one quarter of the proletariat, about 100,000, with the majority of the unions under the direct influence of the Communist Party. There also grew up a strong peasant cooperative movement, embracing approximately 250,000 members. There existed a number of left agrarian parties but the Communist Party won the dominant influence even among the poor sections of the peasantry. The economic crisis produced a raging political crisis, which reflected itself in the extreme instability of the governmental superstructure. From 1920 until the Metaxas regime in 1936, one political regime followed another with the greatest rapidity. And as none of the bourgeois political parties could find sufficient support in the masses, and as all quickly exhausted themselves in the struggle with the difficulties growing out of the economic bankruptcy of Greece, the army again emerged as the regulator of political life. Scarcely a year went by without a coup d’etat or a threatened coup d’etat.
The Greek masses reacted violently against the war and the dictatorship, and decisively defeated Venizelos at the polls in the 1920 election. A plebiscite was rigged up and King Constantine was recalled. Three years later, in an attempt to deflect the anger of the masses and shift responsibility for the tragedy of the Greek defeat in the war with Turkey, Col. Plastiras (who heads the present government) at the head of a Military Junta forced the abdication of King Constantine and executed the key Monarchist leaders as punishment for the 1922 disaster. The new King George II was forced to leave the country and in 1924, a new plebiscite was held and the Republic proclaimed. The Republicans and Monarchists united to rule under the Republican banner. But even this unification could not produce stability in the government, as governmental shifts and combinations were powerless to mitigate the economic disaster. The following year, General Pangalos staged a coup d’etat and set up a dictatorship. A year later, appeared a new “strong man”, General Kondylis, who organized a new coup d’etat. The capitalists then attempted a new government headed by their old leader Venizelos. But to no avail. The Greek crisis continued to grow worse. By 1930, as the economic crisis convulsed the whole world, Greece was choking to death. Over one-quarter of the entire working class was unemployed. The cost of living in 4 years had increased twenty-fold, while wages had only increased twelve-fold. The people were starving. The Greek masses began fighting back. Between 90,000 and 100,000 workers took part in strikes, which largely bore a political character. Simultaneously a peasant movement against taxes spread throughout the countryside. Armed clashes between strikers or insurgent groups of peasants and the gendarmerie became commonplace. Venizelos replied by passing a bill suppressing the Communist Party and the so-called revolutionary trade unions.
(The Stalinists split the Greek trade union movement during the Third Period.) The press was muzzled and the first Emergency Bill for the Security of the State was passed, which inaugurated the practice, later to become notorious under the Metaxas dictatorship, of banishing tens of thousands of workers and peasants to the barren Aegean islands by simple executive order. The thoroughly frightened Greek bourgeoisie came to the conclusion that the king was indispensable for the creation of a “strong government.” The Greek bourgeoisie had come to such a pass that they could no longer rule without a “crowned idiot” heading the State. Kondylis, a former Republican general, staged a new coup d’etat in 1935. He immediately banned all public meetings and suppressed the papers that opposed his dictatorship or the return of the king. The whole staff of Rizospastis, the Stalinist daily, was arrested and exiled. A new fake plebiscite was stage-managed by the army and it was soon announced that 98% had voted in favor of the monarchy. (The Kondylis plebiscite became an international joke.) King George II returned to Greece. Venizelos, who had previously come to an agreement with the king, specifically called on his Liberal Party not to oppose the Monarch. To round out the picture, the Stalinists, hot on the trail of carrying out the policies of the Seventh World Congress of the Comintern, sent a delegation to King George II whom they hailed as a “guarantee against Fascism and against any authoritarian regime.” King George received the delegation and was given assurance that the Communist Party had decided to function “within the framework of the present regime.” The new elections of January 1936 resulted in a parliamentary deadlock. The Venizelist and anti-Venizelist combinations won 142 and 143 seats respectively in the Chamber of Deputies. The Communist Party with 15 members held the balance of power. Meanwhile a strike movement was spreading throughout the country.
The bourgeoisie, alarmed by the growing class struggles at home and with the events in Spain and France staring them in the face, determined to wipe out once and for all the menacing working class movement. The word went down that no combinations should be made with the CP parliamentary fraction, that a “strong government” was necessary. The King appointed Metaxas, a Monarchist general, whose party had won the smallest number of seats, seven, in the election, to head the government. The Chamber met in April and overwhelmingly voted to prorogue for 5 months, empowering Metaxas to govern by decree. The bourgeoisie flung this provocation into the face of the labor movement, prepared to crush the opposition which they knew would follow.

From April to August 4, when Metaxas proclaimed his dictatorship, events moved rapidly. The tobacco workers, numbering 45,000 and considered one of the most militant sections of the Greek working class, were on strike for higher wages throughout northern Greece. On May 9 a general strike was called in Salonika in sympathy with the tobacco workers. Metaxas promptly issued an Emergency Decree mobilizing railwaymen and tramwaymen under military orders. Troops were sent out against the demonstrators in Salonika. The crowds appealed to the soldiers and fraternization began between the soldiers and workers. The gendarmerie were then called out and fired into the crowds. Thirty demonstrators, including two women, were killed. The day has gone down in Greek labor history as the “Black Saturday” massacre.

Next morning 100,000 attended the funeral of the murdered men and women shouting “Revenge.” The Greek working class, always revolutionary, was now surging forward. The revolutionary tide was rising hourly. Preparations were immediately announced for an all-Greece strike.
The strike demands were:

- Liberation of everybody arrested;
- Pensions and indemnities for the victims of the terror;
- Dismissal of the guilty officials;
- Withdrawal of the Emergency Decree;
- Resignation of Metaxas and his cabinet.

The following day, the general strike had already spread throughout northern Greece. Metaxas ordered the fleet to Salonika and redoubled the terror. Thousands of workers were arrested and summarily exiled to the penal islands. The “revolutionary” unions were outlawed and union funds declared confiscated.

In July the Social Democratic trade union bureaucrats, thoroughly frightened by the turn of events, agreed to conduct with the Stalinists, who headed the so-called revolutionary trade unions, a joint struggle against Metaxas’ dictatorial decrees. A joint Congress of the Unitarian Trade Union Federation (“revolutionary”) and the General Trade Union Federation (reformist) was held in Athens on July 28. The united session of the Executive Committees announced their decision to call a one-day protest strike in Athens on August 5 and, as against the previous threat to call a general strike, appealed to the workers throughout Greece “to hold themselves ready” for a general all-Greece protest strike if the government rejected the workers’ demands.

This was exactly the moment for which Metaxas had been waiting. On August 4, one day before the scheduled protest strike, he placed machine guns on all the main street intersections in Athens, abolished Parliament, banished the working class leaders and proclaimed the Dictatorship. Within a year, 13,000 political exiles were reported living on the barren Aegean islands while thousands more were in the prisons awaiting decision on their cases. Five drachmae (cents) a day were allotted the prisoners for their subsistence. Thousands died from cold, hunger and the polluted water. Doses of castor oil were fed workers to extort confessions. Ancient forms of torture were again revived.
“Liberty,” Metaxas proclaimed, “was a 19th century illusion.”

The Greek working class was decisively defeated in 1936 and was unable to prevent the imposition of the Metaxas dictatorship because of the criminal policy of its Stalinist leadership. It is unquestionable that in 1936 Greece was in the throes of a revolutionary crisis. The Greek workers were prepared to overthrow capitalist rule and join hands with the peasantry to form a government of Workers and Farmers. The Communist Party dominated the whole working class movement and likewise enjoyed strong support in the countryside. It was known at the time of the Salonika general strike in May 1936 that both the soldiers and sailors in the fleet were very sympathetic to the workers’ cause. All the major strike movements of 1936, moreover, were under the direct leadership of the Communist Party. Yet Metaxas was able to impose his bloody dictatorship with hardly a struggle. What is the explanation? It can be summed up in a few words: the fatal policy of the People’s Front.

For over five years, the Greek Stalinists, in common with the Stalinists throughout the world, had disoriented and disorganized the Greek labor movement with their suicidal ultra-leftist policies of the Third Period. They were instrumental in splitting the trade union movement. They wore out the Greek masses by their adventurist tactics. By 1936, on instructions from the Comintern, they had made an about-face and begun their ultra-opportunist course of the People’s Front. Instead of organizing the workers for decisive revolutionary action and working to draw the peasants of the countryside into the struggle throughout the fateful months between April and August 1936, when the working class was in deep revolutionary ferment, the Stalinists busied themselves with a campaign to force the Liberal Party to organize with them a People’s Front. The Liberal Party, however, had heard its master’s voice and turned down the Stalinist offer.
They were busy easing the way for Metaxas. The Stalinists wasted the whole six months in these criminal negotiations – six months that should have been employed to mobilize the broad masses for the revolutionary assault on the capitalist government. Just as in Spain, bourgeois democracy had become an illusion, a reactionary snare in Greece in 1936. The only alternatives were Metaxas or Soviet power. There existed in Greece in 1936 no third alternative.

Sklavanos, leader of the Stalinist Parliamentary fraction, explained in an interview just a few weeks before Metaxas proclaimed his dictatorship that Greece was not in a revolutionary situation (!); that moreover, Greece had many feudal vestiges and would first have to make a democratic revolution before the country was ready for Socialism; that the task of the Greek proletariat was to forge a bloc with the liberals – the People’s Front – to prevent the formation of a dictatorship and to uphold democratic rights! That was the program of the Stalinists in 1936. Small wonder that Metaxas was able to crush the workers’ movement and impose, with hardly a struggle, his bloody rule.

It must be further remembered that Greece is a small country. As present events testify, working class international solidarity and aid is a life-and-death question for the Greek masses and the success of their revolution. In 1936, the Stalinists, with the aid of the Social Democrats, effectively strangled the revolutionary struggles of the masses in Spain, France and elsewhere in Europe by means of their perfidious People’s Fronts. It was therefore a foregone conclusion that Reaction would likewise triumph in a small country like Greece. The Trotskyist movement, which went back in Greece to 1928, had a correct revolutionary program to meet the situation. The Trotskyists, however, split in 1934 and their forces were too weak in 1936 to challenge the Stalinists for the leadership of the labor movement.
Although it attempted to copy in every respect the Mussolini and Hitler regimes, the Metaxas dictatorship never enjoyed any mass support. Despite Metaxas’ “social” demagogy and his mountebank performances (he called himself “the first workman and the first peasant of Greece”), the Metaxas government, from its first days to its last, was nothing more than a police-military dictatorship. Metaxas’ regime, which lasted four years – it collapsed after the invasion of Greece in 1940 – based itself on armed force and murderous terror. Even so, it lasted as long as it did only because of the temporary exhaustion and disorientation of the Greek working class brought about by the 1936 debacle.

This work is in the Public Domain under the Creative Commons Common Deed. You can freely copy, distribute and display this work; as well as make derivative and commercial works. Please credit the Encyclopedia of Trotskyism On-Line as your source, include the url to this work, and note any of the transcribers, editors & proofreaders above.

Last updated on 5.9.2008
Once a month for the next 5 years, 20,000 people across the United States will find a package containing 62 pills in their mailboxes. As participants in a clinical trial, the recipients agreed to swallow two of the pills daily. But inevitably as the years pass, some pill packets will become buried under a stack of letters, or forgotten in a drawer. After all, these pills contain only vitamin D, fish oil, or an inert placebo—a person doesn’t need them to make it through the day. Plus, no one monitors who takes the pills daily and who does not. In another study, 871 pregnant women swallow a vitamin D or a placebo pill every day for the duration of their pregnancy. Then every year for 3 years after they’ve given birth, clinicians will evaluate their children for signs of asthma, in search of clues about the relationship between the essential vitamin and the respiratory disorder. But the study is scheduled to last only 3 years, so it won’t include children who begin to wheeze at age 6, when childhood asthma most often strikes. A better vitamin D trial might send health-care professionals out to personally deliver pills to each of the first trial’s 20,000 participants. It might also test various doses of supplements, because no one knows how much is best. The asthma trial might include more women, run for a longer period of time, and test childhood supplementation, too. But then they’d also cost millions more, and in contrast to many drug trials, Pharma isn’t footing the bill. Profits from vitamin sales pale in comparison to those of most drugs, and therefore a company would struggle to recoup the money it spent testing supplements. Unfortunately, prevention trials require large sample sizes and long-term follow-up, making them incredibly expensive. Indeed, the National Institutes of Health has granted about $32 million for these two trials alone. But researchers aren’t giving up. 
With limited budgets, vitamin D investigators are working hard to keep costs down, while still giving the vitamin a fighting chance to prove itself. Deficiencies of vitamin D have been linked to cancer, diabetes, strokes, and other maladies, and at least 12 imperfect clinical trials on its preventive powers have been set in motion since 2008. And while some scientists worry their cost-trimming shortcuts will render the results useless, others remain optimistic. Perhaps this smorgasbord of trials will reveal unpredictable benefits of taking one’s vitamins.

In 2008, epidemiologist JoAnn Manson at Harvard Medical School in Boston received NIH funding to lead the largest vitamin D intervention trial yet. In observational studies, vitamin D had shown promise for lowering the risk of a wide range of diseases, but Manson felt the field would benefit from a large clinical trial that more rigorously tested the vitamin’s power. This sentiment only grew when she analyzed about 1,000 reports on vitamin D metabolism, intake, and impact on human health as a member of a panel convened by the Institute of Medicine (IOM) in 2009. The panel decided that while the benefit of the nutrient for bones is real, helping to promote bone strength while staving off diseases such as rickets, osteomalacia, and osteoporosis, the evidence of nonskeletal benefits was inconclusive[1. Institute of Medicine, Dietary Reference Intakes for Calcium and Vitamin D, National Academies Press, 2011.]—an uncertainty that continues to linger.[2. M. Chung et al., “Vitamin D with or without calcium supplementation for prevention of cancer and fractures: An updated meta-analysis for the US Preventive Services Task Force,” Ann Intern Med, 155:827-38, 2011.]

The decision infuriated many scientists—some of whom had documented the correlation between high blood levels of vitamin D and lower rates of colorectal cancer, diabetes, asthma, influenza, multiple sclerosis, and an array of other ailments.
And in many cases, researchers can point to ways the vitamin might bring about benefits. The hormone derived from vitamin D, called 1,25-dihydroxyvitamin D3, or calcitriol, can turn on or off hundreds of genes in the body, thereby participating in processes ranging from cell proliferation to immune system regulation. But the IOM panel concluded that without large-scale prevention trials confirming the ultimate result of high levels of vitamin D, it could not say for certain whether insufficiency contributes to cancer or any other nonskeletal disease.

Others argue that the correlational studies provide enough evidence to recommend that people maintain a higher concentration of vitamin D in their blood, and that difficult, expensive, and often inconclusive prevention trials, particularly those for relatively rare or unpredictable diseases, are a waste of time. “The success of the RCT [randomized controlled trial] in evaluating medical treatments has, perhaps, blinded nutritionists, regulators, and editors to the fact that it is a method ill-suited for the evaluation of nutrient effects,” Robert Heaney, an endocrinologist at Creighton University School of Medicine, wrote in a 2008 commentary published in The Journal of Nutrition.[3. R.P. Heaney, “Nutrients, endpoints, and the problem of proof,” J Nutr, 138:1591-95, 2008.]

And it seems the public isn’t waiting for clinical trial data. Spurred by headlines about its potential benefits, US consumer sales of vitamin D supplements rocketed from $50 million in 2005 to $550 million in 2010, according to estimates from the Nutrition Business Journal. Enthusiasm for the vitamin echoes among doctors and natural-food advocates, who are pushing for doses higher than the 400 to 600 International Units (IU) that the government currently recommends for maintaining healthy bones. However, Manson, a refined woman of measured words, is acutely aware of the disappointment that has trailed the hyping of vitamins over the decades.
Vitamin E, a fat-soluble antioxidant, gained a reputation for fighting cancer in the 1990s, when observational studies found that people who took supplements had lower rates of the disease. But the buzz died out in 2008 when a 35,000-person clinical trial on vitamin E and selenium was terminated prematurely after people taking the supplements showed a slightly higher risk of developing prostate cancer than the control group. Similarly, in 1996 two large clinical trials dumbfounded fans of beta-carotene, a substance that humans convert into vitamin A after consuming it in fruits and vegetables. One trial found that it raised the risk of lung cancer and heart disease, and the other ended anticlimactically after 12 years with the conclusion that beta-carotene supplements performed no differently than placebo. “You have to look at these previous randomized trials as cautionary tales,” Manson says, “because they show that time and time again, everyone jumped on the bandwagon and then the randomized trials did not have favorable results, and in fact, the risks outweighed the benefits.” At the same time, however, this is exactly why large-scale trials are necessary, she says. Though they aren’t perfect, such trials are the only way to discover whether vitamin D causes better health, or simply indicates it. “For example, people who are physically active tend to spend more time outdoors walking, hiking, or playing tennis. They get more sun exposure”—and thus more vitamin D—“but the real benefit might be physical activity,” says Manson. “There are so many potential confounders, and this is just one we know about.” Manson designed her 5-year, $22 million study, called VITAL (VITamin D and omegA-3 triaL), to be cost effective. For a point of comparison, VITAL costs just $200 per person per year, whereas a rate of at least $1,000 is typical for many nutrient trials, Manson says. 
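Manson's per-participant figure is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch in Python, using only the figures quoted above (the $22 million budget, 20,000 participants, and 5-year duration):

```python
# Rough check of VITAL's quoted cost of ~$200 per person per year,
# using the budget and enrollment figures given in the article.
total_budget_usd = 22_000_000  # 5-year VITAL budget
participants = 20_000
years = 5

cost_per_person_year = total_budget_usd / (participants * years)
print(f"${cost_per_person_year:.0f} per person per year")  # $220 per person per year
```

At roughly $220 per person per year, the mailed-pill design lands near the ~$200 figure Manson cites, an order of magnitude below the $1,000-plus rate she describes as typical for clinic-based nutrient trials.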
Rather than require in-person visits for all 20,000 participants, she decided to mail the participants their pills, in four randomized combinations—either 2,000 IU of vitamin D3 and 1 gram fish oil (omega-3 fatty acid), one of those plus a placebo, or two placebos—to be taken daily. In addition to reducing costs, she says this lessens the burden on busy participants. And in order to increase the chances that the trial would detect an effect, the participants are all over age 50, and therefore more likely to develop a disease. Furthermore, in addition to cancer and heart disease, VITAL investigators will assess dozens of other outcomes. They’ll learn when patients are diagnosed with cancer, diabetes, and other diseases, and in a subset of the participants, periodic clinical visits will allow doctors to measure blood sugar levels, cognitive performance, lung function, heart function, muscle strength, weight, and much more.

Scientists critical of the VITAL study question whether the daily dose of 2,000 IU is enough to distinguish the treatment group from the controls. If this were a drug trial, the placebo group would go without the drug completely. But it’s unethical to ask anyone to go without vitamin D. Doctors inform all participants that they can take up to 800 IU of vitamin D daily (the national recommendation for people over 70 years old) in addition to the pills they receive in the mail. If they do, the control group will sustain more than adequate levels. But some participants might decide to break the rules and head to the nearest corner store for high-dose supplements after being told that vitamin D may help prevent cancer and other diseases. And of course, many participants won’t follow through with taking the pills they’ve been sent in the mail.
“You hope drop-ins and drop-outs will be equal on both sides, but they may not be,” warns biostatistician Gary Cutter at the University of Alabama at Birmingham. A higher dose of vitamin D would widen the gap between the treatment and the control group, but Manson isn’t swayed. She says 2,000 IU will lift the treatment arm well above the level suggested to help protect against nonskeletal diseases, while she expects the controls to stabilize at levels sufficient for healthy bones. “Sure, we could have tested higher doses, but then right off the bat, we might have had safety issues,” Manson says. Indeed, the trials that found harm in vitamin E and beta-carotene have been criticized for testing too high a dose. Furthermore, elderly participants in two independent clinical trials fell more often when they received whopping doses of vitamin D once a year[4. K.M. Sanders et al., “Annual high-dose oral vitamin D and falls and fractures in older women,” JAMA, 303:1815-22, 2010.] or once every 3 months[5]—although in the latter study the effect was not statistically significant.

Nonetheless, in other disease-prevention trials, investigators are gunning for better compliance and a fighting chance of showing an effect by doling out large, periodic doses of vitamin D. In the United Kingdom, a trial looking at the effect of vitamin D on respiratory infections (including the flu) is giving participants 120,000 IU of the vitamin every 2 months. And participants in the treatment arm of a vitamin D trial for type 2 diabetes prevention take an average dose of 89,684 IU once per week. Despite the rather extreme dose, none of the first 50 participants to hit the 6-month mark in the diabetes trial have had increased calcium in their blood and urine—the first sign of harm to bubble up in vitamin D studies, says lead trial investigator Mayer Davidson of Charles Drew University of Medicine and Science in Los Angeles.
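One way to compare these very different regimens is to convert each to an average daily intake. A hedged sketch: the 30-day month is an assumption of mine, and the doses are the ones reported above.

```python
# Average daily vitamin D3 intake implied by each trial's regimen.
# Assumes a 30-day month; doses are those reported in the article.
DAYS_PER_MONTH = 30

regimens_iu_per_day = {
    "VITAL (2,000 IU daily)": 2_000,
    "UK respiratory trial (120,000 IU every 2 months)": 120_000 / (2 * DAYS_PER_MONTH),
    "Davidson diabetes trial (avg 89,684 IU weekly)": 89_684 / 7,
}

for name, dose in regimens_iu_per_day.items():
    print(f"{name}: ~{dose:,.0f} IU/day")
```

Averaged out this way, the UK trial's bimonthly bolus works out to the same ~2,000 IU/day as VITAL's daily pill, while Davidson's weekly regimen averages roughly 12,800 IU/day; what differs between a single large bolus and steady daily intake is how the body handles the dose, which is part of what these trials probe.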
And those enrolled for 2 months in the UK trial also have normal blood calcium concentrations. Davidson chose the high dose—one that some researchers call potentially dangerous—to ensure that if the nutrient does in fact affect glucose metabolism and prevent diabetes, he’s sure to catch it. Plus, the participants in the study require more vitamin D than usual because most of them are obese, and fat serves as a sink for fat-soluble vitamins. In addition to their weight, the people Davidson’s team recruited have other risk factors for diabetes: they’re African American or Latino; diabetes runs in their families; they have high blood pressure and impaired glucose tolerance or impaired fasting glucose, also called pre-diabetes; and they had low blood levels of vitamin D before the study began. Likewise, Anastassios Pittas at Tufts University plans to enroll patients at risk of diabetes in another prevention trial, in which he and his team will administer 4,000 IU of vitamin D daily. By enrolling at-risk populations, Davidson and Pittas hope to see an effect on diabetes with just hundreds of participants within a few years’ time. Investigators who study relatively rare diseases face the biggest challenge. In December, multiple sclerosis (MS) researchers gathered in Chicago to plot a trial to prevent the debilitating disease characterized by excessive inflammation and nerve damage. But because fewer than eight people per 100,000 in the United States acquire MS each year, hundreds of thousands of healthy individuals would need to enroll. Plus, while MS usually occurs sometime between ages 25 and 40, vitamin D’s putative protective power could begin in the womb, requiring a trial to run for decades to notice such effects. “It’s an unfortunate time to get funded for a long-term prevention trial,” says Cutter, a self-described skeptic, after attending the meeting. “We might have to do minimalist data collection and give up a lot of things we want to know. 
Even a 5-year study is very expensive,” he says, “and 5 years might not be enough.”

Adrian Martineau, at the Centre for Primary Care and Public Health of the Barts and The London School of Medicine and Dentistry, faces an analogous hurdle. He says that latent tuberculosis infections seem to activate less frequently in people who have plenty of vitamin D. But because latent infections only become active 5 percent of the time, a trial in the United Kingdom that randomizes 14,000 people with latent infections would still not be large enough to demonstrate the effect of supplements, he says. Thus, the need for evidence from clinical trials places researchers who focus on tuberculosis in industrialized countries, MS, or other relatively rare disorders in a complicated position. But many researchers continue to push for such trials. George Ebers, a neurologist at the Wellcome Trust Centre for Human Genetics in the United Kingdom, for example, is sure that supplements could prevent some cases of MS, based on observational studies and experiments that show how vitamin D tames inflammation in animal models. Now he just wants a clinical trial to prove it. “It’s mainly about convincing other people at this point,” he says.

Only a few vitamin D trials have assessed nonskeletal diseases thus far, and their combined verdict is inconclusive. One of those trials, designed to test fracture prevention, found that vitamin D combined with calcium helped prevent breast, lung, and colon cancers and leukemia, though the result was not one investigators had designed the trial to test, and it was determined from a small sample size. On the other hand, a randomized clinical trial, conducted as part of a large-scale and multifaceted investigation called the Women’s Health Initiative, concluded that vitamin D and calcium supplements didn’t reduce cancer incidence or mortality, and appeared to increase the risk of urinary tract stones.[5. P. Glendenning et al., “Effect of three-monthly oral 150,000 IU cholecalciferol supplementation on falls, mobility, and muscle strength in older postmenopausal women: A randomized controlled trial,” J Bone Miner Res, doi:10.1002/jbmr.524, 2011.] However, critics of this study point to the high rate of dropouts, and the low dose of vitamin D given to the treatment group (just 400 IU daily). And they say the urinary tract stones could be due to the calcium taken alongside vitamin D.

But the Women’s Health Initiative trial wasn’t a complete failure for proponents of vitamin D, as signals of a positive effect of the vitamin have begun to emerge from the data. For example, women in the treatment group who had not been taking vitamins before the trial began did show diminished rates of breast and colorectal cancer.[6. M.J. Bolland et al., “Calcium and vitamin D supplements and health outcomes: A reanalysis of the Women’s Health Initiative (WHI) limited-access data set,” Am J Clin Nutr, 94:1144-49, 2011.] Because investigators hadn’t designed the trial to detect this outcome a priori, however, the results need to be confirmed in a new clinical trial. That said, it brings science closer to understanding how and when vitamin D matters, says John Milner, chief of the National Cancer Institute’s Nutrition Science Research Group.

Indeed, answers may never be simple when it comes to nutrition. One reason why studies have arrived at conflicting conclusions may be that individual needs vary, says Milner. He hopes that ongoing trials, despite their imperfections, will help unravel the contributions of genetics and diet. “There is some evidence that individuals with certain genetic variations require more vitamin D because they have an inability to absorb or metabolize vitamin D effectively,” he says. And because nutrients interact, a person’s diet also has the potential to alter the effect of supplements.
Notably, unlike most clinical trials, which tend to enroll health-conscious Caucasians, the medley of vitamin D trials currently taking place has attracted a diversity of people. African Americans account for 43 percent of participants in the trial on childhood asthma and 25 percent of Manson’s VITAL trial (if all goes according to plan), while Latinos comprise 85 percent of Davidson’s type 2 diabetes trial members. This composition of participants will help researchers determine whether certain ethnicities, or even smaller subsets of individuals, are more responsive to vitamin D supplements than others—a situation that might mask any effects of the vitamin in more homogenous trials. “In nutrition we talk about maintaining normal adequacy, but some people may require more vitamins than others, and identifying those populations will really be the future of nutrition,” says Milner. “It’s the classic ‘one size does not fit all.’ I’m hoping we can identify biomarkers that tell us who will really benefit, and who doesn’t need to worry.”

Vitamin D prevention trials for nonskeletal disorders

Prevention trials make drug trials look easy. They require more participants and a longer duration, and investigators must trust that the participants take the vitamins they’re given, and don’t decide to up their dose by buying over-the-counter supplements. Below is a list of ongoing trials that are trying to beat the odds to conclusively nail down the benefits of vitamin D for nonskeletal disorders, such as cancer, heart disease, and diabetes.
| NAME OF TRIAL | PRIMARY OUTCOME | START DATE | DURATION OF TREATMENT | STATUS | PARTICIPANTS | DOSE OF VITAMIN D3 |
| --- | --- | --- | --- | --- | --- | --- |
| NCT01169259 | All cancers, heart disease, and stroke | July 2010 | 5 years | Recruiting participants | 20,000 healthy men over 50 and women over 55 | 2,000 IU daily |
| To be announced (lingering ethical approvals have delayed the official listing of this trial) | Infections, cognitive decline, blood pressure increase, decline in muscle strength, risk of non-vertebral fractures | June 2012 | 3 years | Approved | 2,000 men and women over 70 who have had a fracture or a fall | 2,000 IU daily |
| NCT01052051 | All cancers | Jan. 2009 | 4 years | Ongoing | 2,332 healthy postmenopausal women over 55 | 2,000 IU (and 1,500 mg calcium) daily |
| NCT01463813 | All cancers and cardiovascular disease | Jan. 2012 | 4 years | Approved | 18,000 healthy men over 60 and women over 65 | 3,200 IU or 1,600 IU daily |
| NCT00920621 | Asthma or recurrent wheeze at 3 years old | Sept. 2009 | 1 year | Ongoing | 871 pregnant women whose babies have a family history of asthma, eczema, or allergic rhinitis | 4,000 IU daily |
| NCT00685594 | Type 2 diabetes | March 2008 | 5 years | Ongoing | 517 adults with impaired glucose tolerance | 20,000 IU per week |
| NCT00876928 | Type 2 diabetes | March 2009 | 1 year | Ongoing | 186 Latino and African American adults over age 40 with risk factors for diabetes | Dose determined by BMI; average dose is 89,684 IU per week |
| NCT01069874 | Influenza and other respiratory infections | March 2010 | 1 year | Recruiting participants | Approx. 290 permanent residents or staff at 116 independent living units | 120,000 IU once every 2 months |

Correction (March 6, 2012): The illustration, "How the Body Processes Vitamin D," has been relabeled to correctly reflect that food products are a source of both the D2 and D3 forms of vitamin D. The Scientist regrets the error.
Cheetah needs strong legislation to survive in India - Sukanya Kadyan

Height: 30+ inches at shoulder
Weight: 69-140 lbs.
Body length: 4 feet
Tail length: 28.5 inches

The world's fastest land mammal, the cheetah, is the most unique and specialized member of the cat family and can reach speeds of 70 mph. Unlike other cats, the cheetah has a leaner body and longer legs, and has been referred to as the "greyhound" of the cats. It is not an aggressive animal, using flight versus fight. With its weak jaws and small teeth--the price it paid for speed--it cannot fight larger predators to protect its kills or young.

The cheetah is often mistaken for a leopard. Its distinguishing marks are the long teardrop-shaped lines on each side of the nose from the corner of its eyes to its mouth. The cheetah's coat is tan, or buff colored, with black spots measuring from 0.78 to 1.85 inches across. There are no spots on its white belly, and the tail has spots that merge to form four to six dark rings at the end. The tail usually ends in a bushy white tuft. Male cheetahs are slightly larger than females and have a slightly bigger head, but it is difficult to tell males and females apart by appearance alone.

The fur of newborn cubs is dark and the spots are blended together and barely visible. During the first few weeks of life, a thick yellowish-gray coat, called a mantle, grows along the cub's back.
The dark color helps the cub to blend into the shadows, and the mantle is thought to have several purposes, including acting as a thermostatic umbrella against rain and sun, and as camouflage imitating dry dead grass. The mantle is also thought to be a mimicry defense, causing the cub to resemble a ratel, or honey badger, a very vicious small predator that is left alone by most other predators. The mantle begins to disappear at about three months old, but the last traces of it, in the form of a small mane, are still present at over two years of age. The cheetah is aerodynamically built for speed and can accelerate from zero to 40 mph in three strides and to its full speed of 70 mph in seconds. As the cheetah runs, only one foot at a time touches the ground. There are two points in its 20 to 25 foot (7-8 metre) stride when no feet touch the ground: once when the legs are fully extended and once when they are totally doubled up. Nearing full speed, the cheetah is running at about 3 strides per second. The cheetah's respiratory rate climbs from 60 to 150 breaths per minute during a high-speed chase, and it can run only 400 to 600 yards before it is exhausted; at this time it is extremely vulnerable to other predators, which may not only steal its prey, but attack it as well. The cheetah is specialized for speed through many adaptations: It is endowed with a powerful heart, oversized liver, and large, strong arteries. It has a small head, flat face, and reduced muzzle length allowing the large eyes to be positioned for maximum binocular vision, plus enlarged nostrils and extensive air-filled sinuses. Its body is narrow and lightweight, with long, slender feet and legs, and specialized muscles which act simultaneously for high acceleration, allowing greater swing to the limbs. Its hip and shoulder girdles swivel on a flexible spine that curves up and down, as the limbs are alternately bunched up and then extended when running, giving greater reach to the legs.
The cheetah's long and muscular tail acts as a stabilizer or rudder for balance to counteract its body weight, preventing it from rolling over and spinning out in quick, fast turns during a high-speed chase. The cheetah is the only cat with short, blunt, semi-retractable claws that help grip the ground like cleats for traction when running. Its paws are less rounded than those of other cats, and its pads are hard, similar to tire treads, to help it in fast, sharp turns. It has been estimated that in 1900, more than 100,000 cheetahs were found in at least 44 countries throughout Africa and Asia. Today the species is extinct in more than 20 countries, and between 10,000 and 12,500 animals remain, found mostly in small, pocketed populations in 24 to 26 countries in Africa, with fewer than 100 in Iran. The cheetah is classified as an endangered species and listed in Appendix I (which includes species that are most threatened) of the Convention on International Trade in Endangered Species (CITES). Prior to the 20th century, cheetahs were widely distributed throughout Africa and Asia, and were originally found in all suitable habitats from the Cape of Good Hope to the Mediterranean, throughout the Arabian Peninsula and the Middle East, from Israel to India, and through the southern provinces of the former Soviet Union. Today, the Asian cheetah is nearly extinct, due to a decline of available habitat and prey. The species was declared extinct in India in 1952, and the last reported cheetah was seen in Israel in 1956. Today, the only confirmed reports of the Asian cheetah come from Iran, where fewer than 100 occur in small, isolated populations. Free-ranging cheetahs still inhabit a broad section of Africa, including areas of North Africa, the Sahel, East Africa, and southern Africa. Viable populations may be found in less than half of the countries where cheetahs still exist. These declining populations mean that those cheetahs which do survive come from a smaller, less diverse gene pool.
Populations continue to decline from loss of habitat, decline of prey species, and conflict with livestock farming. Throughout Africa, cheetahs are not doing well in protected wildlife reserves due to increased competition from other, larger predators, such as lions and hyenas, and most protected areas are unable to maintain viable cheetah populations. Therefore, a large percentage of the remaining cheetah populations are outside of protected reserves, placing them in greater conflict with humans. There are now only two remaining population strongholds: Namibia/Botswana in southern Africa, and Kenya/Tanzania in East Africa. The cheetah's greatest hope for survival lies in the relatively pristine countryside of Namibia, which is home to the world's largest remaining population of cheetah. However, even in Namibia, the cheetah's numbers drastically declined by half in the 1980s, leaving an estimated population of less than 2,500 animals. Since CCF began its work with the farming community at the beginning of the 1990s, a gradual change has occurred within Namibia, and over the last couple of years the population has stabilized. CCF's research has shown that farmers have more tolerance for cheetahs and are killing fewer; those that are killed are linked to livestock losses, and farmers increasingly call CCF for help. The cheetah is generally considered to be an animal of open country and grasslands. This impression is probably due to the ease of sighting the cheetah in the shorter grass. However, cheetahs use a wider variety of habitats, and are often found in dense vegetation and even mountainous terrain. Since cheetahs rely on sight for hunting, they are diurnal: more active in the day than at night. In warm weather, they move around mostly during the early morning and late in the afternoon when the temperatures are cooler. Cheetahs prey on a variety of species, from rabbits to small antelope, and the young of larger antelope.
Their hunting technique is to stalk as close as possible to the prey, burst into full speed, trip the prey with a front paw and, as the prey falls, bite it by the throat in a strangulation hold. Cheetahs are more social in their behavior than once thought. They will live singly or in small groups. Female cheetahs are sexually mature at 20 to 24 months. The mating period lasts from one day up to a week. The female's gestation period is 90 to 95 days, after which she will give birth to a litter of up to 6 cubs. She will find a quiet, hidden spot in the tall grass, under a low tree, in thick underbrush, or in a clump of rocks. Cheetah cubs weigh between 9 and 15 ounces when born. Although cheetah cubs are blind and completely helpless at birth, they develop rapidly. At 4 to 10 days of age, their eyes open and they begin to crawl around the nest area; at 3 weeks their teeth break through their gums. Because of the risk of predation, the female moves her cubs from den to den every few days. For the first 6 weeks, the female has to leave the cubs alone most of the time in order to hunt, and she may have to travel fairly long distances in search of food. During this time, cub mortality in the wild is as high as 90 percent, due to predation. The cubs begin to follow their mother at 6 weeks old, and begin to eat meat from her kills. From this time onward, mother and cubs remain inseparable until weaning age. The cubs grow rapidly and are half their adult size at 6 months old; at 8 months, they have lost the last of their deciduous teeth. About this time, the cubs begin to make clumsy attempts at stalking and catching. Much of the learning process takes the form of play behavior. The cubs stalk, chase and wrestle with each other, and even chase prey that they know they cannot catch, or prey that is too large. The cubs learn to hunt many different species, including guinea fowl, francolins, springhares, and small antelope.
They are still not very adept hunters by the time they separate from their mothers. The female leaves her cubs when they are between 16 and 18 months old to rebreed, starting the cycle over again. The cubs stay together for several more months, usually until the female cubs reach sexual maturity. At this time, the male cubs are chased away by dominant breeding males. Male cubs stay together for the rest of their lives, forming a coalition. A male coalition is beneficial in helping to acquire and hold territories against rival male cheetahs. Males become reproductively active between 2 and 3 years of age. Cheetahs & Humans The cheetah's long association with humans dates back to the Sumerians, about 3,000 BC, where a leashed cheetah, with a hood on its head, is depicted on an official seal. In early Lower Egypt, it was known as the MAFDET cat-goddess and was revered as a symbol of royalty. Tame cheetahs were kept as close companions to pharaohs, as a symbolic protection to the throne. Many statues and paintings of cheetahs have been found in royal tombs, and it was believed that the cheetah would quickly carry away the pharaoh's spirit to the afterlife. By the 18th and 19th centuries, paintings indicated that the cheetah rivaled dogs in popularity as hunting companions. The best records of cheetahs having been kept by royalty, from Europe to China, are from the 14th, 15th and 16th centuries. Hunting with cheetahs was done not to obtain food, which royalty did not need, but for the challenge of sport. This sport is known as coursing. Adult wild cheetahs were caught, as they already had well-developed hunting skills, and were tamed and trained within a few weeks. The cheetahs were equipped with a hood, so they could not see the game they were to hunt, and were taken near the prey either on a leash, on a cart, or on the back of a horse, sitting on a pillow behind the rider.
The hood was then removed and the cheetah dashed after the prey, catching it, after which the trainer would reward it with a piece of meat and take the cheetah back to the stable where it was kept. Many emperors kept hundreds of cheetahs at any given time in their stables. With this great number of cheetahs in captivity, it was recorded only once, by Emperor Jahangir, the son of Akbar the Great, an Indian Mogul of the 16th century, that a litter of cubs was born. During his 49-year reign, Akbar the Great had over 9,000 cheetahs in total, which were called Khasa or the "Imperial Cheetahs," and he kept detailed records on them. All of the cheetahs kept as "hunting leopards" were taken from the wild. Because of this continuous drain on the world populations, the numbers of cheetahs declined throughout Asia. In the early 1900s, India and Iran began to import cheetahs from Africa for hunting purposes. Other Survival Challenges Molecular genetic studies on free-ranging and captive cheetahs have shown that the species lacks genetic variation, probably due to past inbreeding as long as ten thousand years ago. The consequences of such genetic uniformity have led to reproductive abnormalities, high infant mortality, and greater susceptibility to disease, causing the species to be less adaptable and more vulnerable to ecological and environmental changes. Unfortunately, captive breeding efforts have not proven to be meaningful to the cheetah's hope for survival. The similar experiences of the world's zoos have reaffirmed the traditional difficulties of breeding cheetahs in captivity. Despite the capturing, rearing, and public display of cheetahs for thousands of years, the next reproductive success after the single litter recorded by Akbar the Great's son in the 16th century occurred only in 1956, at the Philadelphia Zoo.
Unlike the other 'big cats', which breed readily in captivity, the captive population of cheetahs is not self-sustaining and, thus, is maintained through the import of wild-caught animals, a practice which goes against the goals of today's zoological institutions. Although reproduction has occurred at many facilities in the world, only a very small percentage of cheetahs have ever reproduced, and cub mortality is high. In the absence of further importations of wild-caught animals, the size of the captive population can be expected to decline, a trend which, coupled with the continuing decline of the wild population, leaves the species extremely vulnerable. We founded the Cheetah Conservation Fund (CCF) in 1990 to directly confront the above issues and to implement techniques for cheetah conservation in their natural habitat. The CCF is the only fully established, on-site, international conservation effort for the wild cheetah. A permanent base for this long-term effort was established in 1991 in Namibia, Africa, home to the largest remaining viable population of cheetah. CCF's primary mission is to focus on conservation and management strategies outside of protected parks and reserves. It conducts research, disseminates information, and implements conservation management techniques that will lead to the long-term survival of free-ranging cheetah. The project is directed by Laurie Marker. The over-all objective of CCF is to secure the survival of free-ranging cheetahs in suitable African habitats. The CCF's long-term program focuses on: 1) cheetah research and conservation education; and 2) livestock and wildlife management, education, and training. In Namibia, programs are being developed that can be adapted for use in other African countries.
The goal is to develop workable strategies for promoting sustainable cheetah populations, a goal which, in the end, is largely dependent on the willingness and the capacity of individuals and local communities where the cheetahs live. As a part of the long-term program, conservation efforts are being developed through the knowledge gained from the collection of base-line data, including: - the distribution and movements of cheetahs through the Namibian farmlands; - the problems leading to the continued elimination of the cheetah; - the assessment of the over-all health of the free-ranging cheetah population; - the development of livestock farm management practices to reduce conflict with cheetahs; - the development of livestock/wildlife management and education to sustain a balanced ecosystem that supports wildlife and cheetah; and - the adaptation of successful programs to other countries where cheetah are in need. The knowledge gained from this program will reveal the necessary information to employ strategies for the long-term survival of the species in Namibia, will be significant to the conservation of cheetahs elsewhere in their native range, and will contribute to the maintenance of captive cheetahs, which are 99% from Namibian stock. Extinction is forever and survival is up to you and me, every last one of us! The Cheetah Conservation Fund is the conduit through which everyone can become involved. Cheetah – the only large wild mammalian species that India has lost – will now be reintroduced in the country's three identified grasslands.
The move will help in the restoration of grasslands and in protecting many other endangered animals there. The cheetah (Acinonyx jubatus venaticus) was last spotted [in India] in Chhattisgarh in 1967. Cheetahs will be obtained from the Middle East, where North African cheetahs are bred, and from Iran, Namibia and South Africa. Initially, 18 cheetahs will be brought to the three sites proposed in the report, “Assessing the Potential for Reintroducing the Cheetah in India”, brought out by the Wildlife Trust of India and the Wildlife Institute of India. The report, presented to the Ministry of Environment and Forests here on Wednesday, has identified the Kuno-Palpur and Nauradehi Wildlife Sanctuaries in Madhya Pradesh and the Shahgarh Landscape in Jaisalmer in Rajasthan. All three sites require an initial investment of Rs 100 crores each before the animals are imported in the next two to three years. Accepting the report, Jairam Ramesh, Minister of State for Environment and Forests, said: “It is important to bring the cheetah back to our country. This is perhaps the only mammal whose name has been derived from the Sanskrit language. It comes from the word chitraku which means spots. The way the tiger restores the forest ecosystem, the snow leopard the mountain ecosystem, and the Gangetic dolphin the waters of the rivers, the cheetah will restore the grasslands of the country.” Among the threatened species on the brink of extinction are the caracal (Caracal caracal), the Indian wolf (Canis lupus pallipes) and three endangered species of the bustard family – the Houbara (Chlamydotis undulata macqueenii), the lesser florican (Sypheotides indica) and the most endangered of them all, the great Indian bustard (Ardeotis nigriceps). Like the tiger and the elephant, the cheetah will also need a distinctive status, Mr. Ramesh said, adding that he would now take up the matter with the State governments to bring them on board before actually starting the project, which will be totally funded by the Centre.
This would involve the relocation of some families living in the core areas. He said initial negotiations for the reintroduction of the cheetah had started with Africa, Iran and the Middle East. He said that Kuno-Palpur [in the Sheopur district of north-western Madhya Pradesh] could become the only place in the world where the tiger, lion and cheetah could survive together. The government had proposed to relocate the Gir lion from Gujarat to this place, but the project had to be shelved following opposition by the Gujarat government. This wildlife sanctuary was home to tigers until some years ago. Among the large carnivores, cheetahs are likely to present the lowest level of conflict with human interests, as they are not a threat to human life and are most unlikely to prey on large livestock. The cheetah reintroduction would greatly enhance tourism prospects, especially at the sites, the cascading effects of which would benefit the local communities. The cheetah as a flagship species would evoke a greater focus on the predicament of the much-abused dry-land ecosystems and the need to manage them, which would benefit pastoralism in India, home to the largest livestock population in the world, the report said. Cheetahs are being reintroduced in India, but I am very sorry to ask: where are the legal provisions, security cover and protection for this exotic, imported animal? The present Wild Life Protection Act, 1972 needs amendment, and the Prevention of Cruelty to Animals Act, 1960 is toothless. Naresh Kadyan, founder of People for Animals (PFA) Haryana and representative of the International Organisation for Animal Protection - OIPA in India, has already spoken and raised this issue before the Wildlife Trust of India during the survey and identification of space for the reintroduction of the cheetah in India.
So introduce strong legislation and protection cover for the cheetah before its import from abroad: as the Ministry of Environment and Forest, New Delhi has sanctioned and disbursed the funds for this project, Her Excellency the President of India is hereby requested to issue an ordinance as protection cover, and the concerned Ministry of Environment and Forest may kindly move draft legislation for public comments without any further delay. Abhishek Kadyan, Media Adviser to OIPA in India: grievance registration number PRSEC/E/2011/05611 with the President of India's office. Sukanya Kadyan, Director of PFA Haryana: registration number MOEAF/E/2011/00153 with the Department of Administrative Reforms & Public Grievances. Supreme Court stays Cheetah reintroduction project: The Supreme Court stayed the implementation of the Cheetah Reintroduction Programme, by which the Ministry of Environment and Forests (MoEF) had proposed to import the African large-sized feline to India. A forest bench comprising Justices K.S. Radhakrishnan and C.K. Prasad restrained the government from going ahead with the Rs. 300 crore project in the wake of questions being raised that a “totally misconceived” venture was pushed without consulting the National Board for Wildlife (NBW), which is a statutory body for the enforcement of the wildlife law. The issue of relocating cheetahs from Namibia was raised during the hearing of the matter on the reintroduction of Asiatic lions from Gujarat’s Gir National Park and Sanctuary and surrounding areas to the Palpur-Kuno Sanctuary in Madhya Pradesh, pursuant to a decision taken by the NBW. During its hearings, the bench was informed that the MoEF had decided to introduce African cheetahs from Namibia into the same proposed habitat, prompting senior advocate P.S. Narasimha, the amicus curiae in the case, to file an application seeking a stay on the implementation of the same. Mr.
Narasimha said the proposal for the reintroduction of the cheetah “has not been either placed before the Standing Committee of the National Board for Wildlife, nor has there been a considered decision taken in this regard”. He stated in an application that “scientific studies show that the African Cheetahs and Asian Cheetahs are completely different, both genetically and also in their characteristics” and that the reintroduction of the cheetah was also against the International Union for the Conservation of Nature (IUCN) guidelines on the translocation of wildlife species. “In fact, the (IUCN) guidelines categorically warn against the introduction of alien or exotic species. The African Cheetah obviously never existed in India. Therefore, it is not a case of intentional movement of an organism into a part of its native range,” the application stated. RTI petition moved as below: The Secretary to the Ministry of Environment and Forest, Subject: Application under the Right to Information Act, 2005. India's Supreme Court has ordered the central government to suspend a move to reintroduce the cheetah, which has been hunted to extinction in the sub-continent, local media reported. A two-judge bench, in fact, stayed the government's ambitious plan to import the big cats from Africa after a senior lawyer told the apex court that the plan had not been discussed with the National Board for Wildlife, a statutory body for the enforcement of wildlife laws in India. He told the court that scientific studies showed that Asian cheetahs and African cheetahs are completely different, both genetically and also in their characteristics. Earlier the government had approved two wildlife reserves, in the central state of Madhya Pradesh and the northern state of Rajasthan, as homes for the imported cheetahs. Though the big cats vanished from India decades ago, conservationists say that fewer than 100 cheetahs remain in Iran, while the vast majority of the 10,000 cheetahs left in the world are in Africa.
The UN-affiliated OIPA chapter in India managed an online petition demanding the introduction of strong legislation. The petition was internationally supported because there is no protection cover for exotic species in the Wild Life Protection Act, 1972: a blind hippo was abused by the Jumbo circus until his death, and many exotic birds are traded and abused openly in India, whereas the Prevention of Cruelty to Animals Act, 1960 is toothless and its offences are non-cognizable in legal terms. Hence, under the RTI Act, 2005, the following information may kindly be provided: 1. All relevant documents related to the Cheetah project, from the survey to the Supreme Court of India stay, including the disbursement of funds. 2. Was the Cheetah project placed before the National Board of Wildlife for its approval? If yes, please send copies of all relevant documents; if not, why was it not placed? 3. Under which legislation will the exotic cheetah species be protected in India, given that it is not covered under the Wild Life Protection Act, 1972, and offences under the Prevention of Cruelty to Animals Act, 1960 are non-cognizable in legal terms, with minor punishments? 4. How much money has been sanctioned and disbursed for the Cheetah project by the Ministry of Environment and Forest? Please supply copies of all relevant documents. 5. Does the Central Zoo Authority of India have any jurisdiction over the Cheetah project, and is approval from the CZA required? 6. Are these cheetahs being imported to India from rescue centres situated in foreign countries, with possible side effects due to inbreeding, and under which circumstances was this proposal approved, sanctioned and funded? 7. Please supply a copy of the stay orders passed by the Supreme Court of India, along with the orders passed to the concerned officials for strict compliance by the Ministry of Environment and Forest. 8.
Please let us know the present status of the draft Animal Welfare Act, 2011; the draft Dog Breeders Rules; the draft Pet Shop Rules; the draft rules on fish aquariums; the ban on peacock-feather trade; and other animal-related drafts and bills. Abhishek Kadyan started this petition with a single signature, and it now has 2,144 supporters.
Shortcuts to Common Abbreviations and Acronyms:
- Cable Modem - A device that connects a computer to the Internet via existing broadband cable networks.
- cache - A special area of memory, managed by a cache controller, that improves performance by storing the contents of frequently accessed memory locations and their addresses. When the processor references a memory address, the cache checks to see if it holds that address. If it does, the information is passed directly to the processor; if not, a normal memory access takes place instead. A cache can speed up operations in a computer whose RAM access is slow compared with its processor speed, because the cache memory is always faster than normal RAM.
- cache controller - A special-purpose processor, such as the Intel 82385, whose sole task is to manage cache memory. On newer processors, such as the Intel Pentium, cache management is integrated directly into the processor.
- cache memory - A relatively small section of very fast memory (often called static RAM) reserved for the temporary storage of the data or instructions likely to be needed next by the processor.
- cache (browser) - A feature found on many browsers that stores a copy of visited Web pages on the user's hard disk. The next time you visit the site, the page is retrieved from your computer rather than through the network.
- CAP (Carrierless Amplitude/Phase) - A version of QAM in which incoming data modulates a single carrier that is then transmitted down a telephone line. The carrier itself is suppressed before transmission (it contains no information, and can be reconstructed at the receiver), hence the adjective "carrierless."
- CAPTURE - The NetWare command used to redirect printer output to a network printer. It is usually run in a batch file or login script.
- card - A printed circuit board or adapter that you plug into your computer to add support for a specific piece of hardware not normally present on the computer.
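The cache entry above describes a concrete lookup rule: on each memory reference, check whether the cache holds the address; on a hit, pass the stored value straight through, and on a miss, fall back to a normal (slow) memory access and remember the result. A minimal sketch of that behaviour in Python; the names (`Cache`, `backing_store`) are illustrative, not any real controller's interface:

```python
class Cache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for slow main RAM
        self.lines = {}                     # address -> cached value
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.lines:
            # Cache hit: the value is passed directly to the caller.
            self.hits += 1
            return self.lines[address]
        # Cache miss: a normal memory access takes place instead,
        # and the result is kept for the next reference.
        self.misses += 1
        value = self.backing_store[address]
        self.lines[address] = value
        return value

ram = {0x10: 42, 0x20: 7}
cache = Cache(ram)
cache.read(0x10)                  # miss: fetched from RAM, now cached
cache.read(0x10)                  # hit: served from the cache
print(cache.hits, cache.misses)   # 1 1
```

A real controller also has limited capacity, an eviction policy, and write handling, none of which this toy model attempts.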
- card services - Part of the software support needed for PCMCIA hardware devices in a portable computer, controlling the use of system interrupts, memory, or power management. When an application wants to access a PC Card, it always goes through the card services software and never communicates directly with the underlying hardware.
- Card Services - Under Windows, a protected-mode system component that is a VxD linked with the PC Card bus driver. Card Services passes the event notification from socket services to the PC Card bus driver, provides information from the computer's cards to the PC Card bus driver, and sets up the configuration for cards in the adapter sockets.
- Card Select Number - The handle created by the system BIOS or the operating system through the isolation process and assigned as a unique identifier to each Plug and Play card on the ISA bus.
- Carrier Sense Multiple Access with Collision Detection (CSMA/CD) - Traffic management technique used by Ethernet.
- carrier signal - In communications, a signal of chosen frequency generated to carry data, often used for long-distance transmissions. The data is added to this carrier signal by modulation, and decoded on the receiving end by demodulation.
- Cathode-Ray Tube (CRT) - A display device used in computer monitors and television sets.
- CBIOS (Compatibility Basic Input/Output System) - Firmware service routines built into the IBM PS/2 series of computers with Micro Channel Architecture (MCA), generally considered to be a super-set of the original IBM PC BIOS.
- CCITT (Comité Consultatif International Téléphonique et Télégraphique) - An organization, based in Geneva, that develops world-wide data communications standards. CCITT is part of the ITU (International Telecommunications Union).
Three main sets of standards have been established: CCITT Groups 1-4 standards apply to facsimile transmissions; the CCITT V series of standards apply to modems and error detection and correction methods; and the CCITT X series standards apply to local area networks.
- CCITT Groups 1-4 - A set of four CCITT recommended standards for facsimile transmissions. Groups 1 and 2 defined analog facsimile transmissions, and are no longer used. Groups 3 and 4 describe digital systems, as follows:
- Group 3 specifies a 9600 bps modem to transmit standard images of 203 dots per inch (dpi) horizontally by 98 dpi vertically in standard mode, and 203 dpi by 198 dpi in fine mode.
- Group 4 supports images up to 400 dpi for high-speed transmission over a digital data network like ISDN, rather than a dial-up telephone line.
- CCITT V Series - A set of recommended standards for data communications over a telephone line, including transmission speeds and operational modes, issued by CCITT.
- CCITT X Series - A set of recommended standards issued by CCITT to standardize protocols and equipment used in public and private computer networks, including the transmission speeds, the interfaces to and between networks, and the operation of user hardware.
- CDFS (Compact disc file system) - Controls access to the contents of CD-ROM drives.
- CD-I - Acronym for Compact Disk-Interactive. A hardware and software standard disk format that encompasses data, text, audio, still video images, and animated graphics. The standard also defines methods of encoding and decoding compressed data, as well as displaying data.
- CD-R - Abbreviation for CD Recordable. A type of CD device that brings CD-ROM publishing into the realm of the small business or home office. From a functional point of view, a CD-R and a CD-ROM are identical; you can read CD-R disks using almost any CD-ROM drive, although the processes that create the disks are slightly different.
Low-cost CD-R drives are available from many manufacturers, including Kao, Kodak, Mitsui, Philips, Ricoh, Sony, TDK, 3M, and Verbatim.
- CD-ROM - Acronym for Compact Disk-Read-Only Memory. A high-capacity, optical storage device that uses compact disk technology to store large amounts of information, up to 650 MB (the equivalent of approx. 300,000 pages of text), on a single 4.72" disk. A CD-ROM uses the constant linear velocity encoding scheme to store information in a single, spiral track, divided into many equal-length segments. To read data, the CD-ROM disk drive must increase the rotational speed as the read head gets closer to the center of the disk, and decrease it as the head moves back out. Typical CD-ROM data access times are in the range of 0.3 to 1.5 seconds; much slower than a hard disk.
- CD-ROM disk drive - A disk device that uses compact disk technology for information storage. They are available with several different data transfer rates: single-speed, double-speed, etc., all the way up to 16x.
- CD-ROM Extended Architecture (CD-ROM/XA) - An extension to the CD-ROM format, developed by Microsoft, Philips and Sony, that allows for the storage of audio and visual information on compact disk, so that you can play the audio at the same time you view the visual data. CD-ROM/XA is compatible with the High Sierra specification, also known as ISO standard 9660.
- CDSL (Consumer Digital Subscriber Line) - A proprietary technology trademarked by Rockwell International. EtherLoop is currently a proprietary technology from Nortel, short for Ethernet Local Loop. EtherLoop uses the advanced signal modulation techniques of DSL and combines them with the half-duplex "burst" packet nature of Ethernet. EtherLoop modems will only generate high-frequency signals when there is something to send. The rest of the time, they will use only a low-frequency (ISDN-speed) management signal. EtherLoop can measure the ambient noise between packets.
This will allow the ability to avoid interference on a packet-by-packet basis by shifting frequencies as necessary. Since EtherLoop will be half-duplex, it is capable of generating the same bandwidth rate in either the upstream or downstream direction, but not simultaneously. Nortel is initially planning for speeds ranging between 1.5Mbps and 10Mbps depending on line quality and distance limitations.
- Central Office - A circuit switch that terminates all the local access lines in a particular geographic servicing area; a physical building where the local switching equipment is found. xDSL lines running from a subscriber's home connect at their serving central office.
- central processing unit (CPU) - The computing and control part of the computer. The CPU in a mainframe computer may be contained on many printed circuit boards; the CPU in a minicomputer may be contained on several boards; and the CPU in a PC is contained in a single extremely powerful microprocessor.
- Centronics parallel interface - A standard 36-pin interface in the PC world for the exchange of information between the PC and a peripheral such as a printer, originally developed by the printer manufacturer Centronics, Inc. The standard defines 8 parallel data lines, plus additional lines for status and control information.
- Certified NetWare Engineer (CNE) - Someone who has passed the official exam offered by Novell.
- Old code name for the PC-based hardware development platform.
- CGA (Color/Graphics Adapter) - A video adapter introduced by IBM in 1981 that provided low-resolution text and graphics. CGA provided several different text and graphics modes, including 40- or 80-column by 25-line 16-color text mode, and graphics modes of 640 horizontal pixels by 200 vertical pixels with 2 colors, or 320 horizontal pixels by 200 vertical pixels with 4 colors. See also EGA, VGA, SuperVGA, and XGA.
- charge-coupled device (CCD) - A special type of memory that can store patterns of charges in a sequential manner. The light-detecting circuitry contained in many still and video cameras is a CCD.
- checksum - A method of providing information for error detection, usually calculated by summing a set of values. The checksum is usually appended to the end of the data that it is calculated from, so that data and checksum can be compared.
- CHKDSK - A DOS command that checks the record-keeping structures of a DOS disk for errors.
- circuit - A communications channel or path between two devices capable of carrying electrical current. Also used to describe a set of components connected together to perform a specific task.
- Circuit Switching - Refers to a characteristic common to most telephone networks where a single path or line must remain open between sender and receiver to enable transmission.
- class - For hardware, the manner in which devices and buses are grouped for purposes of installing and managing device drivers and allocating resources. The hardware tree is organized by device class, and the operating system uses class installers to install drivers for all hardware classes.
- Class A certification - An FCC certification for computer equipment, including mainframe and minicomputers destined for use in an industrial, commercial, or office setting, rather than for personal use at home. The Class A commercial certification is less restrictive than the Class B certification.
- class driver - A driver that provides system-required, hardware-independent support for a given class of physical devices. Such a driver communicates with a corresponding hardware-dependent port driver, using a set of system-defined device control requests, possibly with additional driver-defined device control requests. Under WDM, the class driver creates a device object to represent each adapter registered by minidrivers. The class driver is responsible for multiprocessor and interrupt synchronization.
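The additive checksum entry above can be sketched in a few lines of Python. The function and payload names are illustrative only, not from any particular protocol; the sum is kept to the low 8 bits, one common convention:

```python
def checksum(data: bytes) -> int:
    # Sum all byte values and keep the low 8 bits, as in a
    # simple additive checksum.
    return sum(data) % 256

# Sender: append the checksum to the end of the data.
payload = b"hello"
frame = payload + bytes([checksum(payload)])

# Receiver: recompute over the data portion and compare
# against the appended byte to detect corruption.
data, received = frame[:-1], frame[-1]
assert checksum(data) == received
```

A single flipped byte in transit would (usually) change the recomputed sum, which is exactly the comparison the definition describes; CRCs, covered later in this glossary, catch more error patterns than this simple sum.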
- client - A computer that has access to the network but doesn't share any of its own resources with the network.
- client/server - Form of networking in which the workload is split between a client and the server computer.
- clock/calendar board - An internal time-of-day and month-year calendar that is kept up-to-date by a small battery-backup system. This allows the computer to update the time even when turned off.
- clock doubling - A mechanism used in certain chips that allows the chip to process data and instructions internally at a different speed from that used for external operations.
- clock speed - Also known as clock rate. The internal speed of a computer processor, normally expressed in MHz. The faster the clock speed, the faster the computer will perform a specific operation, assuming the other components in the system, such as disk drives, can keep up with the increased speed.
- clone - Hardware that is identical in function to an original.
- cluster - The smallest unit of hard disk space that DOS can allocate to a file, consisting of one or more contiguous sectors. The number of sectors contained in a cluster depends on the hard disk type.
- CMOS (Complementary Metal-Oxide Semiconductor) - A type of integrated circuit used in processors and for memory. CMOS devices operate at very high speeds and use very little power, so they generate very little heat. In the PC, battery-backed CMOS is used to store operating parameters such as hard disk type when the computer is switched off.
- coaxial cable - A high-capacity cable used in networking. It contains an inner copper conductor surrounded by plastic insulation, and an outer braided copper or foil shield. Coax is used for broadband and baseband communications networks, and is usually free from external interference, and it permits very high transmission rates over long distances.
- codec - The compression/decompression system used to reduce media or transmission data volume for digitized audio or video data.
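The cluster entry above has a practical consequence: since DOS allocates whole clusters, part of the last cluster of every file is wasted ("slack"). A small worked example, assuming an 8-sectors-per-cluster layout (the real figure varies by disk type, as the entry notes):

```python
import math

SECTOR_SIZE = 512            # bytes per sector
sectors_per_cluster = 8      # assumed example value; depends on the disk
cluster_size = sectors_per_cluster * SECTOR_SIZE   # 4096 bytes

# DOS allocates whole clusters, so even a partially filled last
# cluster counts in full; the unused remainder is slack space.
file_size = 10_000                                   # bytes
clusters_needed = math.ceil(file_size / cluster_size)
slack = clusters_needed * cluster_size - file_size

print(clusters_needed, slack)   # 3 clusters, 2288 bytes of slack
```

Larger clusters mean faster allocation bookkeeping but more slack per file, which is why the sectors-per-cluster figure grows with disk size.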
- cold boot - The computer startup process that begins when you turn on power to the computer. A cold boot might be needed if a program or the operating system crashes in such a way that you cannot continue. If operations are interrupted in a minor way, a warm boot may suffice.
- COMMAND.COM - The command processor for MS-DOS-based systems. It provides the C> prompt, and it interprets the user's English commands and performs the operation requested.
- command processor - Also called command interpreter. The command processor is that part of the operating system that displays the command prompt on the screen, interprets and executes all the commands and file names that you enter, and displays error messages when appropriate. It also contains the environment, a memory area that holds values for important system definitions or defaults that are used by the system, and which can be changed by the user.
- Commission Internationale de l'Eclairage (CIE) - The international commission on illumination. Developer of color matching systems.
- Common Information Model - Describes the WBEM data representation schema that is now a DMTF-sponsored industry standard. CIM evolved from HMMS (HyperMedia Management Schema).
- CIM Object Manager - A key component of the WBEM architecture. A central message of WBEM is uniform data representation encapsulated in object-oriented fashion in the CIM. CIMOM provides a collection point and manipulation point for these objects. Formerly HMOM.
- compact disk - CD. A non-magnetic, polished, optical disk used to store large amounts of digital information. Digital information is stored on the CD as a series of microscopic pits and smooth areas that have different reflective properties. A beam of laser light shines on the disk so that the reflections can be detected and converted into digital data.
- compatibility mode - An asynchronous, host-to-peripheral parallel port channel defined in the IEEE 1284-1994 standard.
Compatible with existing peripherals that attach to the Centronics-style PC parallel port.
- compatible ID - An ID used by the Plug and Play Manager to locate an INF to install a device if there was no match on the hardware IDs for the device.
- complex instruction set computing (CISC) - A processor that can recognize and execute well over 100 different assembly-language instructions. See also RISC.
- Component Instrumentation - A specification for DMI related to the service layer.
- COM port - In DOS, the device name used to denote a serial communications port. In versions of DOS after 3.3, four COM ports are supported: COM1, COM2, COM3, and COM4. COM also refers to the Component Object Model, the core of OLE, which defines how OLE objects and their clients interact within processes or across process boundaries.
- composite video - A signal that combines the luminance, chrominance, and synchronized video information onto a single line. This has been the most prevalent NTSC video format.
- compression - The translation of data (video, audio, digital, or a combination) to a more compact form for storage or transmission.
- compression ratio - A comparison of the amount of space saved by data compression. A compression ratio of 2:1 ("two to one") results in a doubling of the storage capacity.
- compressed video - A digital video image or segment that has been processed using a variety of computer algorithms and other techniques to reduce the amount of data required to accurately represent the content, and thus the space required to store the content.
- computation bound - A condition where the speed of operation of the processor actually limits the speed of program execution. The processor is limited by the number of arithmetic operations it must perform.
- concatenate - To join sequentially.
- CONFIG.SYS - In DOS and OS/2, a special text file containing settings that control the way that the operating system works.
It must be located in the root directory of the default boot disk, normally drive C, and is read by the operating system only once as the system starts running.
- configuration - The process of establishing your own preferred setup for an application program or computer system. Configuration information is usually stored in a configuration file so that it can be loaded automatically the next time you start your computer.
- configuration file - A file, created by an application program or by the operating system, containing configuration information specific to your own computing environment. Application program config. files may have a file-name extension of CPG or SET; Windows config. files use the INI file-name extension.
- Configuration Manager - The Windows Plug and Play system component that drives the process of locating devices, setting up their nodes in the hardware tree, and running the resource allocation process. Each of the three phases of configuration management--boot time, real mode, and protected mode--has its own configuration manager.
- connection - A negotiated method of communication between devices, whether implemented in hardware or software.
- Connection and Streaming Architecture - Kernel-mode streaming in WDM.
- connectivity - In networking, the degree to which any given computer or application program can cooperate with other network components, either hardware or software, purchased from other vendors.
- console - In NetWare, the file server's keyboard and monitor.
- console operator - In NetWare, the user working at the file server's console.
- constant angular velocity (CAV) - An unchanging speed of rotation. Hard disks use a constant angular velocity encoding scheme, where the disk rotates at a constant rate. This means that sectors on the disk are at the maximum density along the inside track of the disk; as the read/write heads move outward, the sectors must spread out to cover the increased track circumference, and therefore the data transfer rate falls off.
- constant linear velocity (CLV) - A changing speed of rotation. CD-ROM disk drives use a CLV encoding scheme to make sure that the data density remains constant.
- controllerless modem - Also host-based controller. A modem that consists of a DSP without the usual microcontroller. The host CPU provides the AT command interpreter, modem-control functions, and V.42 bis implementation. Compare with software modem.
- control method - A definition of how an ACPI-compatible operating system can perform a simple hardware task. For example, the operating system invokes control methods to read the temperature of a thermal zone. Control methods are written in an encoded language called AML.
- conventional memory - The amount of memory accessible by DOS in PCs using an Intel processor operating in real mode, normally the first 640K.
- convergence - The alignment of the three electron guns (one each for red, blue, and green) in a monitor that create the colors you see on the screen.
- cooperative multitasking - A form of multitasking in which all running applications must work together to share system resources.
- coprocessor - A secondary processor used to speed up operations by taking over a specific part of the main processor's work. See floating-point coprocessor.
- crash - An unexpected program halt, sometimes due to hardware failure, but most often due to a software error, from which there is no recovery.
- crimp tool - A special tool used to attach connectors to cables.
- cyclical redundancy check (CRC) - Error control protocol that has replaced the old checksum method of the Xmodem protocol.
- Customer Premise Equipment (CPE) - Simply put, customer premise equipment is the equipment that big companies that provide broadband services use; i.e., voice ports, channel banks, PBXs, Integrated Access Multiplexers, etc. This is as opposed to subscriber premise equipment.
- cylinder - A hard disk consists of two or more platters, each with two sides.
Each side is further divided into concentric circles known as tracks, and all the tracks at the same concentric position on a disk are known collectively as a cylinder.
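Putting the cylinder, track, and sector terms together: total disk capacity is cylinders × heads (one per platter side) × sectors per track × bytes per sector. The geometry below is the classic 1024/16/63 example, assumed here purely for illustration:

```python
cylinders = 1024          # concentric track positions across all platters
heads = 16                # one read/write head per platter side
sectors_per_track = 63
bytes_per_sector = 512

# Total capacity under classic CHS (cylinder/head/sector) addressing.
capacity = cylinders * heads * sectors_per_track * bytes_per_sector
print(capacity)   # 528482304 bytes, i.e. 504 MB
```

This particular geometry is the well-known 504 MB CHS addressing ceiling of early PC BIOSes; larger drives required translation schemes or LBA addressing.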
(Editor’s note: Today is the birthday of Dr B. R. Ambedkar. CRI is proud to present a 3-part series titled Bodhi Sattva’s Hindutva. In this part, Aravindan Neelakandan explores Ambedkar’s views on cultural unity, his analysis of the Savarkarite strand of Hindu Nationalism, and his admiration for the Hindu reformer Swami Shradhanand.) Terming Baba Saheb Ambedkar a Hindu nationalist would be the ultimate blasphemy in ‘secular’ India. But if there is an ideology that can resonate with Dr. Ambedkar’s mindscape it is Hindutva, the much-maligned Hindu nationalism. Dr. Ambedkar always struggled for justice and liberty. He naturally knew that the caste system was inherently unjust and anti-democratic. He wanted Hindu society to be free of this malaise. But to remove it one should understand the problem in its socio-historic context. In his quest for such an understanding the good doctor arrived at a cardinal truth, one that remained fundamental to him all his life: the ‘indubitable cultural unity’ of India. As early as 1916, in his famed paper presented at an anthropology seminar of Columbia University, Dr. Ambedkar made an observation that may well serve as the definition of what is today called ‘cultural nationalism’ in the Indian context: It may be granted that there has not been a thorough amalgamation of the various stocks that make up the peoples of India, and to a traveller from within the boundaries of India the East presents a marked contrast in physique and even in colour to the West, as does the South to the North. But amalgamation can never be the sole criterion of homogeneity as predicated of any people. Ethnically all people are heterogeneous. It is the unity of culture that is the basis of homogeneity. Taking this for granted, I venture to say that there is no country that can rival the Indian Peninsula with respect to the unity of its culture.
It has not only a geographic unity, but it has over and above all a deeper and a much more fundamental unity—the indubitable cultural unity that covers the land from end to end. Caste then becomes a problem for Dr. Ambedkar not because of this ‘homogeneity’ but because it ‘is a parceling of an already homogeneous unit’. In other words it fragments the cultural unity of Indian society and thus inhibits the development of national feeling among Indians. Yet he was a pragmatist and a visionary. Dr. Ambedkar would return to the same topic in 1940. While discussing the problem of partition, he became, as he labeled himself, ‘the philosopher of partition’. And here, rejecting the idea of territorial nationalism, he would emphasize a qualitatively different type of nationalism: If unity is to be of an abiding character it must be founded on a sense of kinship, in the feeling of being kindred. In short it must be spiritual. Judged in the light of these considerations, the unity between Pakistan and Hindustan is a myth. Indeed there is more spiritual unity between Hindustan and Burma than there is between Pakistan and Hindustan. The idea of Hindus and Buddhists belonging to a larger single spiritual culture is something axiomatic to Ambedkar. He justified partition because even the Sikh axe could not resist the Islamist imperialism which was preventing the return of ‘Northern India to that spiritual and cultural unity by which it was bound to the rest of India before Hwen Thasang’. Dr. Ambedkar also cautioned Hindus that in the coming battles they would be a disunited force, and that their unity, even if achieved, would be unsustainable if Hindu society remained casteist. In 1933 Mahatma Gandhi asked Dr. Ambedkar to give a message for his magazine ‘Harijan’. And Baba Saheb gave a statement which was crisp, blunt and, more importantly, prophetic: The Out-caste is a bye-product of the Caste system. There will be outcastes as long as there are castes.
Nothing can emancipate the Out-caste except the destruction of the Caste system. Nothing can help to save Hindus and ensure their survival in the coming struggle except the purging of the Hindu Faith of this odious and vicious dogma. The ‘coming struggle’ Ambedkar had visualized was the partition and the pre-partition riots, which were actually a series of well-planned riots unleashed on a population of disunited Hindus. It was his quest for justice and his constant worry about the survival of Hindus which led him on a quest for an alternative that would bring unity among the Hindus of India. In his classic work ‘Annihilation of Caste’ (1944), Dr. Ambedkar makes it clear that it was caste which made conversion of other religionists to Hinduism impossible. His vision of Hinduism is of a united, strong Hinduism, battle-ready and prepared to take on the Abrahamic religions. To realize this battle-ready Hinduism and a united Hindu society, there is only one major crucial obstacle and that is caste. So it has to go, not only for Hinduism to survive but for it to prosper: So long as caste remains, there will be no Sanghatan and so long as there is no Sanghatan the Hindu will remain weak and meek. …Indifferentism is the worst kind of disease that can infect a people. Why is the Hindu so indifferent? In my opinion this indifferentism is the result of Caste System which has made Sanghatan and co-operation even for a good cause impossible. In this context it should be noted that Baba Saheb Ambedkar was extremely appreciative of genuine reform works that were taken up by Hindu nationalists. This was true of Veer Savarkar and Swami Shradhanand, both Hindu Maha Sabha leaders. Veer Savarkar diagnosed without mincing words that the scripture-based caste system is a mental illness, and he offered a cure to this social-psychological disease plaguing the Hindu psyche: “the disease gets cured instantly when the mind refuses to accept it.”
While the whole orthodoxy of traditional Hindu leadership was making a fetish out of the Varna system as the basis of Hindu Dharma, Veer Savarkar boldly declared: Both chaturvarnya and caste divisions are but practices. They are not coterminous with Sanatana Dharma … Sanatana Dharma will not die if the present-day distortion that is caste division is destroyed. With regard to untouchability his clarion call to Hindu society was a heart-breaking cry, a lone voice in the wilderness: To regard our millions of co-religionists as ‘untouchables’ and worse than animals is an insult not only to humanity but also to the sanctity of our soul. It is my firm conviction that this is why untouchability should be principally eradicated. Untouchability should go also because its eradication is in the interests of our Hindu society. But even if the Hindu society were to partially benefit from that custom, I would have opposed it with equal vehemence. When I refuse to touch someone because he was born in a particular community but play with cats and dogs, I am committing a most heinous crime against humanity. Untouchability should be eradicated not only because it is incumbent on us but because it is impossible to justify this inhuman custom when we consider any aspect of dharma. Hence this custom should be eradicated as a command of dharma. From the point of view of justice, dharma and humanism, fighting untouchability is a duty and we Hindus should completely eradicate it. In the present circumstances, how we will benefit by fighting it is a secondary consideration. This question of benefit is an aapaddharma (duty to be done in certain exceptional circumstances) and eradication of untouchability is the foremost and absolute dharma. When Savarkar was at Ratnagiri, his movements as well as his participation in political activities were restricted. Yet he championed the cause of the Dalits and presided over the Mahar conference held at Ratnagiri district.
In his letter to Savarkar, expressing his inability to visit him owing to previous engagements, Dr. Ambedkar wrote: I however wish to take this opportunity of conveying to you my appreciation of the work you are doing in the field of social reform. If the Untouchables are to be part of the Hindu society, then it is not enough to remove untouchability; for that matter you should destroy ‘Chaturvarna’. I am glad that you are one of the very few leaders who have realised this. In 1933, Dr. Ambedkar’s Janata magazine in a special issue paid a tribute to Veer Savarkar to the effect that his contribution to the cause of the Dalits was as decisive and great as that of Gautama Buddha himself. Later Baba Saheb Ambedkar would come to the rescue of Veer Savarkar, when Savarkar was arrested for the Gandhi murder. The most authoritative historian on the Gandhi murder, Manohar Malgonkar, the author of the definitive volume on the subject ‘The Men Who Killed Gandhi’ (1978), revealed in 2008 that it became ‘incumbent upon him to omit certain vital facts such as, for instance, Dr Bhimrao Ambedkar’s secret assurance to Mr. L B Bhopatkar, that his client, Mr. V D Savarkar, had been implicated as a murder suspect on the flimsiest ground.’ Another person held in high esteem by Dr. Ambedkar was Swami Shradhanand. Swami was at the forefront of the Hindu Sanghatan movement. He was one Hindu leader who fully realized that to achieve Sanghatan in the truest sense casteism had to die. Swami Shradhanand, a fearless patriot, was one of the foremost leaders of the Gandhian movement during the Khilafat agitation. Just after the Amritsar massacre, when no one in Congress was ready to preside over the Congress session in Punjab, he came forward and bravely presided over the Congress Committee session at Amritsar. He repeatedly attacked casteism and upheld the rights of Dalits. He went on to establish ‘Dalit Uddhar Sabha’ in Delhi.
He worked ceaselessly for the upliftment and liberation of Dalits till his life was cut short tragically by the bullets of an Islamic fanatic in 1926. He was also initially an active supporter of the Gandhian movement to win Dalits their rights. However he soon found that the Gandhian leadership was not as committed to Dalit liberation as the Swami expected it to be. In frustration Swami wrote to Mahatma Gandhi in 1921: The Delhi and Agra Chamars simply demand that they be allowed to draw water from wells used by the Hindus and Mohammedans and that water be not served to them (from Hindu water booths) through bamboos or leaves. Even that appears impossible for the Congress Committee to accomplish…. At Nagpur you laid down that one of the conditions for obtaining Swarajya within 12 months was to give their rights to the depressed classes and without waiting for the accomplishment of their uplift, you have decreed that if there is a complete boycott of foreign cloth up till the 30th September, Swarajya will be an accomplished fact on the 1st of October…I want to engage my limited energy in the uplift of the depressed classes. I do not understand whether the Swarajya obtained without the so-called Untouchable brethren of ours joining us will prove salutary for the Indian nation. In 1922 he had to resign his position on the Depressed Classes Sub-Committee of Congress. Subsequently, on 19th August 1923, at the Benares Hindu Maha Sabha annual session, Swami unveiled a grand action plan to remove the stigma of untouchability from Hindu society forever. He brought a resolution which was attacked by the wolves of orthodoxy with such venom that the session almost went to the brink of collapse.
The resolution Swami brought was for the basic dignity and fundamental human rights of Dalits: With a view to do justice to the so-called Depressed Classes in the Hindu Community and to assimilate them as parts of an organic whole, in the great body of the Aryan fraternity, this conference of Hindus of all sects holds: a. That the lowest among the depressed classes be allowed to draw water from common public wells, b. That water be served to them at drinking posts freely like that as is done to the highest among other Hindus, c. That all members of the said classes be allowed to sit on the same carpet in public meetings and their ceremonies with higher classes and, d. That their children (male and female) be allowed to enter freely and at teaching time to sit on the same form with other Hindu and non-Hindu children in Government, National and Denominational education institutions. He also formed ‘Dalit Uddhar Sabha’ to work for Dalit liberation. The ailing Swami was murdered treacherously by a Muslim fanatic on 23rd December 1926. Till the end of his life Swami fought for Hindu solidarity through the abolition of social stagnation. Dr. Ambedkar admired Swami Shradhanand very much. Though critical of the Hindu Maha Sabha as a political party (for there were many prominent Hindu Maha Sabha leaders who were very orthodox and socially stagnant), he found the Swami a very sincere fighter for the Dalit cause. In his highly critical book ‘What Congress and Gandhi Have Done to the Untouchables’ Dr. Ambedkar examines the hasty way in which the Congress leadership abandoned their Dalit upliftment programme: Was it because the Congress intended that the scheme should be a modest one not costing more than two to five lakhs of rupees but felt that from that point of view they had made a mistake in including Swami Shradhanand in the Committee and rather than allow the Swami to confront them with a huge scheme which the Congress could neither accept nor reject?
The Congress thought it better in the first instance to refuse to make him the convener and subsequently to dissolve the Committee and hand over the work to the Hindu Mahasabha. Circumstances are not quite against such a conclusion. The Swami was the greatest and the most sincere champion of the Untouchables. There is not the slightest doubt that if he had worked on the Committee he would have produced a very big scheme. That the Congress did not want him in the Committee and was afraid that he would make big demand on Congress funds for the cause of the Untouchables is clear from the correspondence that passed between him and Pandit Motilal Nehru, the then General Secretary of the Congress… That Ambedkar found the Swami ‘the greatest and most sincere champion of the Untouchables’ is very interesting, for this is a title which Baba Saheb, though deserving, never claimed for himself. This also exposes as myth the Gandhian propaganda that the Ambedkar-Gandhi conflict arose because Ambedkar did not want someone else to be called the leader of the Untouchables. Dr. Ambedkar was able to see, beyond empty words and party identities, the hearts of those who really wanted to stand by the Dalits in their quest for liberation. This holistic vision of understanding Dalit liberation as crucial for Hindu Sanghatan, in the largest sense of the term, always shaped Dr. Ambedkar’s attitudes and actions. His statement issued on the temple entry rights for Dalits in 1927 approaches the issue from a cultural-historical point of view and rejects any theistic need from his side: The most important point we want to emphasize is not the satisfaction you get from the worship of the image of God… Hindutva belongs as much to the untouchable Hindus as to the touchable Hindus.
To the growth and glory of this Hindutva contributions have been made by Untouchables like Valmiki, the seer of Vyadhageeta, Chokhamela and Rohidas as much as by Brahmins like Vashishta, Kshatriyas like Krishna, Vaishyas like Harsha and Shudras like Tukaram. The heroes like Sidnak Mahar who fought for the protection of the Hindus were innumerable. The temple built in the name of Hindutva the growth and prosperity of which was achieved gradually with the sacrifice of touchable and untouchable Hindus, must be open to all the Hindus irrespective of caste. The important element of the statement is that Dr. Ambedkar replaces the term ‘Hinduism’ with Hindutva. In doing this he attempts to make the Hindus realize that the issue of Dalit liberation should be at the core of Hindu nationalist politics, for that would be the logical development of the larger historical processes shaping Indian history. It was an appeal to do away with obscurantist traditional casteism and embrace a dynamic Hindu nationalism. Unfortunately Hindu orthodoxy and the Hindu leadership failed him. So on 13th October 1935 Dr. Ambedkar made his famous declaration that while it was beyond his power to have been born an untouchable, it was within his power to make sure that he would not die a Hindu, and he resolved that he would not die a Hindu. This was indeed a well-calculated and well-deserved blow to Hindu orthodoxy. But only the Hindu nationalists actually understood both the seriousness of the situation and the just nature of Dr. Ambedkar’s reaction. Despite the despicable treatment of Dalits by Hindu orthodoxy, Dr. Ambedkar still respected the monument of Hindutva and took national interest as paramount in his choice of an alternative religion. He had detailed discussions with Dr. B. S. Moonje, the mentor of Dr. K. B. Hedgewar. What the consequences of conversion will be to the country as a whole is well worth bearing in mind. Conversion to Islam or Christianity will denationalize the Depressed Classes.
If they go over to Islam the number of Muslims would be doubled; and the danger of Muslim domination also becomes real. If they go over to Christianity, the numerical strength of the Christians becomes five to six crores. It will help to strengthen the hold of Britain on the country. On the other hand if they embrace Sikhism they will not only not harm the destiny of the country but they will help the destiny of the country. They will not be denationalized. Dr. Ambedkar always took care that he should never allow his people to get denationalized in their quest for justice and liberation. Closely related to this is the definition of the term ‘Hindu’. He wanted the Dalits to go out of the oppressive, orthodoxy-infested ‘Hindu religion’ but remain within ‘Hindu culture’. In discussing the problem of partition, Dr. Ambedkar makes a careful study of Savarkar’s definition of Hindus: According to Mr. Savarkar a Hindu is a person: “. . . .who regards and owns this Bharat Bhumi, this land from the Indus to the Seas, as his Fatherland as well as his Holy Land;—i.e., the land of the origin of his religion, the cradle of his faith. The followers therefore of Vaidicism, Sanatanism, Jainism, Buddhism, Lingaitism, Sikhism, the Arya Samaj, the Brahmosamaj, the Devasamaj, the Prarthana Samaj and such other religions of Indian origin are Hindus and constitute Hindudom, i.e., Hindu people as a whole.”… This definition of the term Hindu has been framed with great care and caution. It is designed to serve two purposes which Mr. Savarkar has in view. First, to exclude from it Muslims, Christians, Parsis and Jews by prescribing the recognition of India as a Holy Land as a qualification for being a Hindu. Secondly, to include Buddhists, Jains, Sikhs, etc., by not insisting upon belief in the sanctity of the Vedas as an element in the qualifications.
Consequently the so-called aboriginal or hill-tribes also are Hindus: because India is their Fatherland as well as their Holy Land whatever form of religion or worship they follow. However, Dr.Ambedkar is not satisfied. Though culturally homogeneous through historical processes, in his opinion the Hindus had not yet made themselves a nation in the modern sense of the term. They are fragmented. Hindus are a potential nation, favoured by cultural unity but disunited politically; they need more modern homogenizing factors. Later, in formulating those to whom the Hindu Code Bill would apply, Dr.Ambedkar used the same frame of definition Veer Savarkar had used in his definition of Hindu: This Code applies, (a) to all Hindus, that is to say, to all persons professing the Hindu religion in any of its forms or developments, including Virashaivas or Lingayatas and members of the Brahmo, the Prarthana or the Arya Samaj; (b) to any person who is a Buddhist, Jaina or Sikh by religion; (c) (i) to any child, legitimate or illegitimate, both of whose parents are Hindus within the meaning of this section; (ii) to any child, legitimate or illegitimate, one of whose parents is a Hindu within the meaning of this section, provided that such child is brought up as a member of the community, group or family to which such parent belongs or belonged; and (d) to a convert to the Hindu religion. This Code also applies to any other person, who is not a Muslim, Christian, Parsi or Jew by religion. When sectarians complained about Buddhists, Jains and Sikhs being grouped together with Hindus in his Bill, he replied: Application of the Hindu code to the Sikhs, Buddhists and Jains was a historical development and it would be too late sociologically to object to it. When the Buddha differed from the Vedic Brahmins, he did so only in matters of creed and left the Hindu legal framework intact. He did not propound a separate law for his followers. The same was the case with Mahavir and the ten Sikh Gurus.
Why should Dr.Ambedkar, who found Smriti-based Hinduism and its stranglehold of orthodoxy so despicable, love Hindu culture and Hindustan so dearly? And how did this reflect in his actions throughout his life? That is what we shall see in the next two parts of this series.
References:
- Dr.Bhimrao Ramji Ambedkar, Castes in India: Their Mechanism, Genesis and Development (originally a paper presented at an Anthropology Seminar at Columbia University on 9th May 1916), Siddharth Books, 1945:2009, p.7
- Dr.Bhimrao Ramji Ambedkar, Thoughts on Pakistan, Thacker & Co., 1941, p.60
- Dr.Bhimrao Ramji Ambedkar, ibid., p.59
- Dr.Bhimrao Ramji Ambedkar, message published in Harijan dated 11-Feb-1933
- Dr.Bhimrao Ramji Ambedkar, Annihilation of Caste: With Reply to Mahatma Gandhi, 1944: pdf document, p.30
- V.D.Savarkar, Samagra Savarkar Vangmaya, Vol-3, ed. SR Date, Maharashtra Prantik Hindu Sabha, Pune, pp. 497-9
- V.D.Savarkar, SSV, Vol-3, 1930: Essays on the abolition of caste, p.444
- V.D.Savarkar, SSV, Vol-3, 1927, p.483
- Dr.Bhimrao Ramji Ambedkar’s letter quoted by Dhananjay Keer, Veer Savarkar, Popular Prakashan, 1950:1966, p.190
- Janata special number, April 1933, p.2 (quoted in Dhananjay Keer, 1950:1966, p.195)
- Manohar Malgonkar, The Men Who Killed Gandhi, ‘Introduction’ to the 2008 edition, Roli Books, 2008
- Swami Shradhaanand, letter to Mahatma Gandhi dated 9-Sep-1921
- Amrita Bazar Patrika report, 17-Aug-1923
- Dr.Bhimrao Ramji Ambedkar, What Congress and Gandhi Have Done to the Untouchables, Gautam Book Center, 1945:2009, p.23
- Dr.Bhimrao Ramji Ambedkar, Bahiskrit Bharat, 27-Nov-1927: quoted in Dhananjay Keer, Dr.Ambedkar: Life and Mission, Popular Prakashan, 1990, p.96
- Dr.Bhimrao Ramji Ambedkar, Times of India, 24-July-1936: quoted in Dhananjay Keer, Dr.Ambedkar: Life and Mission, Popular Prakashan, 1990, p.280
- Dr.Bhimrao Ramji Ambedkar, Thoughts on Pakistan, Thacker & Co., 1941, p.136
- The Draft of the Hindu Code Bill 1950, by Dr.B.R.Ambedkar: Part-I Preliminary: 2. Application of Code
- Dr.Ambedkar in The Times of India, 7 February 1951: quoted in Dhananjay Keer, Dr.Ambedkar: Life and Mission, Popular Prakashan, 1990, p.427
Aravindan Neelakandan
The German Invasion of Crete "The German Army has been ordered to take the island. It will carry out this order" Possession of Crete was of great strategic importance. For the British it was essential to maintaining naval supremacy in the eastern Mediterranean: Suda provided the Mediterranean Fleet with a forward base 420 miles in advance of Alexandria. For the Germans, the island of Crete would provide an ideal forward base from which to conduct offensive air and naval operations and to support the ground offensive in Egypt. Its capture would also deny Allied aircraft potential bases for striking at Germany's Ploesti oil fields in Romania. The staff of Luftflotte 4 - which had been committed to the Balkans under the command of Alexander Löhr - conceived the idea of capturing the island and forwarded the plan to Göring at the time of the invasion of Greece. He thought highly of it, but the Oberkommando der Wehrmacht would have preferred action against Malta. On April 20, after a conference with Generalleutnant Kurt Student (Commander, XI. Fliegerkorps), Hitler decided in favour of invading Crete rather than Malta; and five days later Directive No. 28 (Operation 'Merkur') was issued. This was to be a Luftwaffe operation under the executive responsibility of General Löhr. All units to take part in Operation Merkur were assembled within two weeks, but because of logistical problems the operation was postponed for a few days and the date of the attack was put back from May 16 to May 20. The XI. Fliegerkorps of General Student was to be responsible for the actual assault on the island. It had ten air transport wings with a total of approximately 500 Ju 52 transports and 80 DFS 230 gliders available to airlift the attacking forces from the airfields in Greece. The assault troops consisted of: the Luftlande-Sturmregiment (Generalmajor Meindl); the 7. Flieger-Division (Generalleutnant Süssmann); and the 5. Gebirgs-Division (Generalmajor Ringel), which had been brought in to replace the 22.
Infanterie-Division which could not be transferred in time from Romania, where it guarded the Ploesti oil fields. The absence of these specially trained troops (22. Infanterie-Division) was all the more regrettable because the division taking their place - the 5. Gebirgs-Division - had no practical experience in airborne operations. Initially, the Luftwaffe had two invasion plans under consideration. The first - submitted by Luftflotte 4 - called for airborne landings in the western part of the island between Maleme and Canea, and the subsequent seizure of the remaining territory by an eastward thrust of all airlanded troops. This plan had the advantage of enabling the invader to concentrate his forces within a small area and achieve local air and ground superiority. However, its execution might have led to extensive mountain fighting, during which the enemy would remain in possession of the Heraklion and Retimo airfields in the east. The second plan - submitted by XI. Fliegerkorps - envisaged the simultaneous air-drop of parachute troops at seven points, the most important of which were Maleme, Canea, Retimo, and Heraklion. This plan had the advantage of putting the Germans in possession of all strategic points on the island in one fell swoop; a mopping-up operation would do the rest. However, the plan incurred great risk because the weak forces dropped at individual points would be dispersed over a wide area, and the tactical air units would be unable to lend support at all points at the same time. Konteradmiral Karlgeorge Schüster had no German naval units under his command; he was responsible for the organisation of convoys for landing further troops and the heavy equipment that could not be airlifted (field guns, anti-tank guns and panzers of Panzer-Regiment 31), as well as ammunition, rations and other supplies. The transport vessels (small caiques) had been captured during the Greek campaign and were assembled in the port of Piraeus.
The island of Crete is approximately 160 miles long and varies in width from 8 to 35 miles. The interior of the island is barren and covered by eroded mountains which, in the western part, rise to an elevation of 8,100 feet. There are few roads and water is scarce. The south coast descends abruptly toward the sea; the only usable port along this part of the coast is the small harbour of Sphakia. There are almost no north-south communications, and the only road to Sphakia which can be used for motor transportation ends abruptly, 1,300 feet above the town. The sole major traffic artery runs close to the north coast and connects Suda Bay with the towns of Maleme, Canea, Retimo, and Heraklion. Possession of the north coast was vital for an invader approaching from Greece - if only because of terrain conditions. The British - whose supply bases were situated in Egypt - were greatly handicapped by the fact that the only efficient port was in Suda Bay. The topography of the island therefore favoured the invader, particularly since the mountainous terrain left the British no alternative but to construct their airfields close to the exposed north coast at Maleme, Retimo and Heraklion. The plan of attack which was finally adopted by Göring was a compromise solution. Some 15,000 combat troops were to be air-landed, and 7,000 men landed by sea. The first wave was to strike at H-Hour against two objectives: troops of the Luftlande-Sturmregiment were to be landed by parachute drop at Maleme airfield, in the wake of 3. and 4. Kompanie landing in gliders; Fallschirm-Jäger-Regiment 3 was to drop near Canea, in the wake of the gliderborne 1. and 2. Kompanie of the Luftlande-Sturmregiment. The second wave was to descend at H plus 8 hours on two other objectives: paratroops of Fallschirm-Jäger-Regiment 2 dropping at Retimo, and those of Fallschirm-Jäger-Regiment 1 at Heraklion.
These forces, separated by distances of about ten to eighty miles, were to link up as soon as possible. On the second day, follow-up mountain troops were to be airlifted to the three airfields which were to have been taken by the first wave, while Admiral Schüster's convoys landed the bulk of them (the greater part of the 5. Gebirgs-Division, the panzer battalion and a motorcycle battalion) - plus heavy equipment and supplies - mostly at Heraklion and Suda Bay, but also at any minor ports open to shipping. Overhead, providing strong tactical support, would be the fighters and bombers of VIII. Fliegerkorps. At the beginning of the German invasion of Crete, the island garrison consisted of about 27,500 British and Imperial troops and 14,000 Greeks under the command of Major-General Bernard C. Freyberg, the commanding general of the New Zealand Division. The original garrison, numbering approximately 5,000 men, was fully equipped, whereas the troops evacuated from Greece were tired, disorganized, and equipped only with the small arms they had saved during the withdrawal. The Cretans offered their assistance to the defenders of their island, even though they had suffered heavily from air raids and most of their young men had been taken prisoner during the Greek campaign. The Greek and Cretan soldiers were mostly inadequately armed recruits. There was a general shortage of heavy equipment, transportation and supplies. The armour available to the defenders consisted of eight medium and sixteen light tanks, and a few personnel carriers, which were divided equally among the four groups formed in the vicinity of the airfields and near Canea. The artillery was composed of some captured Italian guns with a limited supply of ammunition, ten 3.7-inch howitzers, and a few antiaircraft batteries. The construction of fortifications had not been intensified until the Greek campaign had taken a turn for the worse.
General Freyberg disposed his ground forces with a view to preventing airborne landings on the three airfields at Maleme, Retimo, and Heraklion, and seaborne landings in Suda Bay and along the adjacent beaches. He divided his forces into four self-supporting groups, the strongest of which was assigned to the defence of the vital Maleme airfield. Lack of transportation made it impossible to organize a mobile reserve force. During May 1941 the British air strength on Crete never exceeded thirty-six planes, less than half of which were operational. When the German preparatory attacks from the air grew in intensity and the British were unable to operate from their airfields, they decided to withdraw their last few planes the day before the invasion began. The British naval forces defending Crete were based on Suda Bay, where the port installations were under constant German air observation. During the period immediately preceding the invasion, intensive air attacks restricted the unloading of supplies to the hours from 2300 to 0330. The British fleet was split into two forces: a light one, consisting of two cruisers and four destroyers, was to intercept a seaborne invader north of Crete; and a strong one, composed of two battleships and eight destroyers, was to screen the island against a possible intervention of the Italian fleet northwest of Crete. The only aircraft carrier in the eastern Mediterranean was unable to provide fighter cover for the forces at sea or the island defenders because it had suffered heavy fighter losses during the evacuation of Greece. As is now known through the monitoring and decoding of German Enigma traffic, the British forces were well aware from Ultra intelligence intercepts of the German intentions against Crete. Their counter-measures were based on the assumption that an airborne invasion could not succeed without the landing of heavy weapons, reinforcements, and supplies by sea.
By intercepting these with their Navy, they hoped to be able to decide the issue in their favour. "......I do not wish to sound overconfident, but I feel that at least we will give an excellent account. With the help of the Royal Navy, I trust Crete will be held." Major-General Bernard C. Freyberg. The First Wave: Elements of the I. Battalion landed their DFS 230 gliders west and south of the airfield at 7:15 am. The 3. Kompanie landed as planned at the mouth of the dried-up river Tavronitis and secured the area. The 4. Kompanie and the battalion staff landed south of the airfield and between them suffered heavy casualties from the 22nd New Zealand Battalion on Hill 107. Stosstrupp Braun landed nine gliders in loose formation within a few hundred metres of the Tavronitis bridge; under heavy fire and with many casualties, the air-landing troops assaulted and secured the bridge. The III. Battalion became badly dispersed and dropped into the middle of the 5th New Zealand Brigade, where it was destroyed as a fighting force within minutes. The IV. Battalion dropped without too much difficulty just west of Tavronitis; its 16. Kompanie had dropped further south to gain control of the Tavronitis valley, coming up against bands of armed civilians. The II. Battalion - which was intended as the regiment's reserve - parachuted as planned into the area east of Spilia and encountered no opposition. One reinforcement platoon had been dropped further west near Kastelli, and came down amongst two battalions of Greek troops and large bands of armed civilians - the parachutists were almost annihilated; the odds were too great for the thirteen survivors, who had to surrender. Their lives were saved by the intervention of a New Zealand officer in charge of the Kastelli sector. The bodies of the missing were found later. Generalmajor Meindl had parachuted in with his regimental staff in the IV.
Battalion sector at 7:15 am, but he was seriously wounded by automatic fire and command of the regiment passed to Major Stentzler (Commander of II. Battalion). A gliderborne assault by Kampfgruppe Altmann (1. and 2. Kompanie of the Luftlande-Sturmregiment) was to secure vital objectives near Canea, while the paratroops of Fallschirm-Jäger-Regiment 3 dropped to the south-west of the town. The reinforced 2. Kompanie (under command of Altmann) landed on the southern part of the Akrotiri Peninsula north-east of Canea in fifteen gliders, but losses were heavy; of the 136 men who landed, 108 became casualties. The 1. Kompanie (under command of Oberleutnant Alfred Genz) - less one platoon - landed in nine gliders south-east of Canea and captured the AA batteries. The group then withdrew southwards to join the other paratroops who had dropped there, as they were unable to link up with Kampfgruppe Altmann. Fallschirm-Jäger-Regiment 3 landed too widely scattered to form effective battle groups and were nearly all killed before they reached the ground as they dropped into positions held by the 10th New Zealand Brigade; they had come down in an area of countryside containing as many as 15,000 men. The Fallschirm-Pionier-Battalion dropped just north of Alikianou without too much difficulty. The combined efforts of the I. and II. Battalion succeeded in securing Agia, and the prison there was used as headquarters by Oberst Heidrich and his regimental staff, who had dropped to the south-west of the village. Generalleutnant Wilhelm Süssmann was to meet up with the staff of 7. Flieger-Division, who had landed in four gliders nearby, but the towing line of his glider broke shortly after take-off, and he and the four-man crew were killed when the glider crashed on the island of Aegina. During the day Fallschirm-Jäger-Regiment 3 was unable to make progress in the direction of Canea and the situation looked critical.
None of the prime objectives assigned to the first wave had been secured by midday on May 20. Hill 107 and Maleme airfield had not been taken by the Luftlande-Sturmregiment, and Fallschirm-Jäger-Regiment 3 was hemmed in around Agia in what was termed 'Prison Valley', with high casualties and numerous commanders dead. Communications with headquarters on the Greek mainland had been practically non-existent, and headquarters remained under the impression that the operation was going to plan. Problems with refuelling the Ju 52s, and the dust on the Greek airfields, disrupted the second wave's timetable. This forced the second drop to fly in small groups instead of en masse. The Second Wave: At 3:00 p.m. Oberst Sturm's Fallschirm-Jäger-Regiment 2 - minus most of its II. Battalion, which had been assigned to the attack at Heraklion - landed at Retimo in a sector held by elements of the 19th Australian Brigade. Many were scattered and some troops were dropped in the wrong place, with many injured after landing on rocky ground. The I. Battalion (Major Kroh) landed just east of the airfield and captured the vineyard-covered hill which overlooked it. The III. Battalion (Hauptmann Wiedemann) found itself in a similar situation at the eastern edge of the airfield, and both groups decided to dig in. It was decided that the foothold perimeter near Maleme was the one position that could be exploited. Student decided to concentrate on Maleme and employ the 5. Gebirgs-Division there instead of in the Heraklion sector. The new plan was to roll up the British and Dominion positions from the west. This was a very risky decision, as a counter-attack by Freyberg could have cost the Germans the gamble. Fortunately for them, no counter-attack was forthcoming, and the losses of men and aircraft were deemed acceptable. Oberst Utz (Commander, Gebirgs-Jäger-Regiment 100) was in Maleme with his staff by early evening. 650 mountain troops had reinforced the Maleme sector and at 6:00 p.m.
Oberst Ramcke had landed to take command of Gruppe West and begin reorganisation. The situation still remained serious for Gruppe Mitte near Retimo and Gruppe Ost near Heraklion, as they faced some 7,000 Allied soldiers. To the west of the airfield Hauptmann Wiedemann's III. Battalion had to dig in near Perivolia just east of the town. The Germans managed to hold out for several days against determined counter-attacks supported by heavy artillery and armour.
The Failure of the Seaborne Reinforcement
A flotilla of 63 requisitioned vessels was put together to carry part of the 5. Gebirgs-Division to the island, since there were not enough aircraft to carry out both the initial assault and a rapid build-up. Most of these commandeered vessels were caiques (fishing boats dependent on a sail and a small auxiliary engine). There were to be two flotillas - one to carry 2,250 mountain troops to Maleme, the other to carry 4,000 men to Heraklion. On the night of May 19th the first flotilla arrived at the island of Milos and anchored there. A change of plan occurred on May 20th and both convoys were ordered to sail to Maleme. Making only 7 knots, the first convoy was attacked by the British naval task force led by Admiral Rawlings at around 11:00 p.m. For two and a half hours the British hunted the caiques down, sinking a large number of them. The second flotilla had set sail southwards from Milos on May 22nd when, at about 9:30 a.m., it came within range of another naval task force; but due to Luftwaffe activity overhead, Admiral King broke off the attack for fear of mounting air attacks. This second flotilla was recalled to spare it the same fate as the first, and no further seaborne landings were attempted until the island was in German hands. Reinforcements and supplies were continually arriving on Crete, and by 12:00 p.m. the whole of the I. Battalion of Gebirgs-Jäger-Regiment 100 had been brought in, followed by the II. Battalion, the I.
Battalion of Gebirgs-Jäger-Regiment 85, and then Gebirgs-Pionier-Battalion 95 under Major Schatte. The divisional commander, Generalmajor Julius Ringel, flew in and assumed command of all forces in the Maleme area, organising them into three battle groups: Kampfgruppe Schatte was to protect the Maleme area from any western threat and push westwards to capture Kastelli; the second group, made up of paratroops under command of Oberst Ramcke, was to strike northwards to the sea to protect the airfield and then extend eastwards along the coast; and the third, under command of Oberst Utz, was to move eastwards inland, partly with a flanking movement across the mountains. As the three battle groups moved forward, the I. Battalion (Gebirgs-Jäger-Regiment 85) of Kampfgruppe Utz headed eastwards and reached the village of Modi in the afternoon - but encountered heavy defensive action by the New Zealanders. The I. Battalion (Gebirgs-Jäger-Regiment 100) outflanked the position by marching across the mountains to the south, and after a very determined defence the village soon fell. Gebirgs-Pionier-Battalion 95 had come under attack from armed civilians (including women and children) in the west of the island. These bands had carried out atrocities on the dead and wounded - some suffering appalling torture before dying. East of Maleme the III. Battalion of the Luftlande-Sturmregiment had suffered badly from such incidents, especially during the first night on the ground, when Cretan partisans had mutilated all the dead and wounded they could find - about 135 men in total. After this the Germans announced that for every soldier killed in this fashion ten Cretans would be shot in reprisal, the Luftwaffe dropping leaflets warning the population of the measures that would be taken against partisan activity. During the day supplies were brought forward, and men landed at Maleme.
About twenty aircraft were landing every hour - some carrying artillery, anti-tank guns and various heavy equipment. The II. Battalion of Gebirgs-Jäger-Regiment 100 - the remainder of which had landed in the morning - was sent eastwards to support Kampfgruppe Utz. Generalmajor Ringel was able to regroup as more reinforcements landed on the island. During the night of the 24th-25th Gebirgs-Jäger-Regiment 100 gained contact with Oberst Heidrich's paratroops, surrounded in Prison Valley since the 20th. Gebirgs-Pionier-Battalion 95 had entered Kastelli in the west after air support from Stukas. On the 25th, south-west of Canea, the German troops comprised Oberst Ramcke's paratroops on the left flank along the coastline; in the centre, Kampfgruppe Utz with two battalions of Gebirgs-Jäger-Regiment 100; and on the right, the paratroops of Oberst Heidrich's regiment. Around Galatas in the afternoon fierce hand-to-hand combat raged between the mountain troops and the New Zealanders of the 10th New Zealand Brigade. The Germans succeeded in forcing their way into the village, but after a counter-attack by two companies of the 23rd Battalion and the 5th New Zealand Brigade they were forced to give ground. The next morning the mountain troops re-entered the village after the New Zealanders had withdrawn during the night. More troops were thrown against Canea by Generalmajor Ringel as he deployed a battle group comprising two battalions of Gebirgs-Jäger-Regiment 141 (one of which had arrived on the 25th and the other on the 26th) under command of Oberst Jais on the right of Gebirgs-Jäger-Regiment 100. In front of Canea the mountain troops overcame a determined defence by British forces, and by the afternoon Gebirgs-Jäger-Regiment 100 had penetrated the town. Gebirgs-Jäger-Regiment 141 held fast against counter-attacks by New Zealand and Australian troops to the south-west of Suda.
This was actually a rearguard action aimed at holding the Germans back while the main body withdrew southwards towards Sphakia and the ships of the Royal Navy. When the Germans finally entered Canea and Suda Bay, they found them deserted. Kampfgruppe Krakau had toiled through the mountains further south, flanked the opposition and occupied the heights above Stilos on the 27th. A blocking position of artillery and tanks pinned down the mountain troops as they approached Stilos at about 6:30 a.m. The Suda-Sphakia road was vital for the push eastwards; its defence was vital for the British. With the arrival of anti-tank grenade riflemen, artillery and mortars, the situation was turned in the Germans' favour. Late on May 27th Gebirgs-Artillerie-Regiment 95 was ordered by Generalmajor Ringel to advance east in pursuit of the retreating enemy and move as quickly as possible toward Retimo and Heraklion, to relieve the paratroopers cut off there. Kampfgruppe Wittmann advanced at 3:50 a.m. - unchecked until just outside Suda, where the road was cut by craters; a British commando unit had landed at Suda and had blocked the road. A flanking attack was mounted while mortars, anti-tank and mountain guns opened up against the defenders. At midday resistance was overcome and contact with elements of Kampfgruppe Krakau was gained - the pursuit continued without interference as far as Kaina. Here resistance was met from the main body of Layforce, who staunchly held their ground. Kampfgruppe Wittmann lacked good observation points for its artillery, so had to wait for Kampfgruppe Krakau for support; the odds were turned in the Germans' favour by last light. The pursuit continued on the 29th, and Retimo was entered at 1:00 p.m. - contact with the III. Battalion of Fallschirm-Jäger-Regiment 2 was established. The 29th was spent clearing up Retimo, and several hundred prisoners were taken.
The evacuation order of May 27th had not reached the Allied troops in Retimo, but Brigadier Chappel had received it at Heraklion and - except for the wounded - 4,000 men were embarked during the night of the 28th-29th aboard ships under the command of Admiral Rawlings. When the paratroops closed in on the British positions at Heraklion early on the 29th, the airfield and town were taken without a shot being fired. At Retimo 700 prisoners were taken after surrendering under the bombardment of German artillery. After a detachment was left to guard the prisoners, Kampfgruppe Wittmann resumed the march east at 7:30 a.m. An hour later contact was made with the eastern group of Fallschirm-Jäger-Regiment 2. At 11:45 a.m. contact was gained with a reconnaissance patrol from Fallschirm-Jäger-Regiment 1, which had been holding out in the Heraklion area since the afternoon of the first day. The advance continued with a couple of tanks (which had been landed by sea) leading the way for safety.
MAY 29-JUNE 1
The German command had failed to realise that the British evacuation was taking place to the south, at the fishing village of Sphakia. Large forces had not been sent south towards the port, and this was not rectified until May 31. At 8:50 a.m. on May 29th the I. Battalion of Gebirgs-Jäger-Regiment 100 of Kampfgruppe Utz was sent southwards, and that afternoon the II. Battalion also moved, advancing until 6:00 p.m., when determined rearguard action was encountered just north of Kares. The attack was resumed by the mountain troops in the morning and further progress was made, to reach a point about two and a half miles from the coast. By the evening of May 30th the whole of Crete, except the Loutro-Sphakia area, was in German hands. General Freyberg left the island that evening in a flying boat sent to Sphakia to take him off the island. The Royal Navy evacuated almost 15,000 men to Egypt, and as a result of this naval activity several ships were damaged and sunk.
The Germans were unable to push down to the coast until 9:00 a.m. on June 1st, when the British rearguard forces surrendered; the war diary of the 5. Gebirgs-Division recorded that final resistance was overcome at 4:00 p.m. in the mountains north of Sphakia. The cost to the Germans of Operation Merkur was high. Of the 22,000 men committed to the operation, approximately 6,000 were casualties. Key figures killed during the battle included Generalleutnant Süssmann, Major Braun, Major Scherber, and Oberleutnant von Plessen. The mountain troops lost 20 officers and 305 other ranks killed in action; the missing - most of them drowned when the Royal Navy sank the boats transporting them - numbered 18 officers and 488 other ranks. Of the nearly 500 transport aircraft involved, 271 had been lost. The British and Dominion casualties were 1,742 killed, 1,737 wounded and 11,835 taken prisoner. Special thanks to Patrick Kiser for the original period Postkarte. Site created, maintained and copyrighted 2001 by Peter Denniston and Patrick Kiser.
If you are looking to purchase your first American Paint Horse Association (APHA) or Pinto Horse Association (PtHA) horse, or just want to read up on some coat color material, I hope this page is of help to you. I have included information on coat color and coat patterns, a small amount of information on coat genetics, and how to get your horse tested. I have also included a link to a great color calculator that I often use. The cost for most color tests is about $25, and testing requires only a sample of pulled mane or tail hair with intact roots. Coat color genetics is very interesting, but remember, you don't ride color! However, some knowledge of pattern genetics is necessary when breeding Paints to avoid situations like the production of a Lethal White Overo (LWO/OLWS), which is discussed towards the bottom of the page. Jump to what you are looking for:
-Coat color bases (black and red) (black/red factor)
-Black and red Punnett squares
-Agouti and how to figure out your horse's black/red factor and agouti genetics
-Coat patterns overview (tobiano, overo, tovero)
-Pattern Punnett squares, information on lethal whites (LWO, OLWS)
-How and where to get them tested
-Color chart (genotypes)
***I want to note here, before we get started, that in order for a horse to have any modifier like gray, or dilution like cream or dun, or other color factor like tobiano, overo, or roan, one of the parents MUST carry that gene. Genes like these do not pop out of nowhere; at least one of the parents MUST have given the offspring the gene, no exceptions. For example, two sorrel horses will not magically produce a gray, palomino, dun, etc. (Actually, two sorrel horses can only produce a sorrel horse.) Now, let's get started... Horses can be broadly classified into two base pigments: Black and Red. →Horses with a black base pigment mostly have black points. Points are the lower legs, tail, mane, tips of ears, and nose.
Examples of these coat colors include black, brown, bay, blue roan, buckskin, grullo, and perlino. The allele (E) represents the black factor and is dominant. Therefore, horses that have black points can be represented by (EE) or (Ee). Examples of horses with a black base pigment include Black Jack (EE), Dees Miss Peppi (Ee), Miss Poco Buck Swen (Ee), Star, Angel Joe Star Buck (EE), Spark of Faith (Ee), Blondys Blue Bonanza (Ee), and Swen Sparks Fly. Buckskin (E_A_Crcr) Bay (E_A_Tt) Buckskin (E_AaTt) Black (EEaaTt) →Horses with a red base pigment have red points. Examples of these coat colors include sorrel (or chestnut), red roan, red dun, palomino, and cremello. The allele (e) represents the red factor and is recessive. Therefore, horses that have red points can be represented only by (ee). Examples of horses with a red base pigment include Doc Alena Ember (ee), Faiths Calico (ee), Sallys Miss Ziggy (ee), Diamond (ee), Ima Sonny Lil Doc (ee), and Sebastian (ee). Sorrel (eeA_Tt) Palomino (ee _ _Crcr nO) Sorrel (ee _ _) »Horses homozygous for the black gene, like Black Jack, are represented by (EE) and will always pass one (E) to offspring. In other words, they will never have red-based offspring. »Horses homozygous for the red gene, like Roxie, are represented by (ee) and will always pass one (e) to offspring. Roxie's offspring can be (ee), red, or (Ee), black, depending on the stud. This recessive allele, (e), can be overridden by the dominant allele, (E), if she is mated to a horse carrying an (E): (EE) or (Ee). →These possibilities can be expressed in the form of a simple Punnett square: Black Jack (EE) x (EE) Mare Black Jack (EE) x (Ee) Mare Black Jack (EE) x (ee) Mare All crosses produce horses with black points. Mare (ee) x (ee) Max Mare (ee) x (Ee) Stallion Mare (ee) x (EE) Black Jack Agouti is often listed just after the red/black factor alleles, as in these examples: EeAA, EEAA, eeAa, eeaa, EEaa, etc.
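The Punnett-square crosses above can be worked out mechanically. Here is a minimal sketch in Python (the helper name `punnett` and the two-character genotype-string convention are my own, not from any genetics library) that computes offspring genotype odds for a single locus such as black/red factor:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def punnett(parent1, parent2):
    """Offspring genotype probabilities for one locus.

    Each parent is a 2-character genotype string, e.g. 'Ee'.
    Genotypes are normalized with the dominant (uppercase)
    allele first, so 'eE' is reported as 'Ee'.
    """
    # Every allele from parent 1 pairs with every allele from parent 2,
    # exactly as in the four cells of a Punnett square.
    counts = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: Fraction(n, total) for genotype, n in counts.items()}

# A homozygous black stud (EE) over a sorrel mare (ee): every foal is Ee.
print(punnett("EE", "ee"))  # {'Ee': Fraction(1, 1)}

# Two heterozygous blacks (Ee x Ee): 1/4 EE, 1/2 Ee, 1/4 ee.
print(punnett("Ee", "Ee"))
```

The same function handles any one-locus cross in the squares above, including the (ee) mare pairings.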
This is because all horses will have genes representing agouti just as they will all have genes representing red/black factor. Agouti is represented by the alleles (A) and (a). (A) is dominant and (a) is recessive. A horse can be (AA), (Aa), or (aa). On black horses (EE) or (Ee), the dominant agouti (A) limits the expression of black to the horse's points. Black points are seen on buckskins and bays and are those black tips on the ears, black mane, black tail, and black lower legs. You need recessive agouti genes (aa) to get a true black horse. On red horses (ee), the agouti gene is hidden. Your horse may be (AA), (Aa), or (aa), but you wouldn't be able to tell physically on a red-based (ee) horse. To figure out your red-based horse's agouti genetics, you will need information on the sire and dam, information on its offspring, or a genetic test. True black horses are those with (EE) or (Ee) and (aa). My stallion, Sparks Black Jack, is a true black horse and is homozygous recessive for agouti (aa). Here are some examples: Black stallion expressing the recessive agouti gene (aa). Note the stallion's body is all black and there is no brown shading in the coat. Black Jack is called a 'true black' horse. Furthermore, the two base coats can be affected by dilutions and modifiers like gray (G). Gray is a dominant gene, which means all coats modified by gray (G) will lose pigmentation and eventually turn white in 5 to 10 years. Homozygous (GG) horses will show more rapid graying and a larger distribution of gray than heterozygous (Gg) horses, but both will "gray out". (g) is the absence of gray. (gg) horses will not gray out. Badger, my gray cow horse, is a great example of a gray horse. Both gray horses and paint horses with pink pigmented skin have a tendency to develop skin cancer around or after about 10 years old. A common area subject to skin cancer is the vulvar or anal area under the horse's tail.
A lot of paints, especially toveros and tobianos, will have pink pigmented skin in this area. Many gray horses will have pink skin, but a lot will have dark pigmented skin with a few lighter pigmented patches. Test for Gray is ~ $25. Cream is commonly represented by (Cr). Normal is represented by 'n' or lower case 'cr'. A horse carrying (CrCr), two cream genes, will be more diluted than a horse carrying (nCr) or (Crcr), only one cream gene. (nCr) and (Crcr) are the SAME and are used based on personal preference. (nn) is normal, not dilute. (CrCr) horses carrying 2 cream genes are represented by cremellos, perlinos, and smoky creams. (nCr) horses carrying only 1 cream gene are represented by palominos, buckskins, and smoky blacks. (nn) horses can be represented by sorrel, black, or bay. (CrCr) horses (like Swen, the perlino mare pictured below) will always pass one cream gene on to their offspring. For instance, two (CrCr) horses would produce a (CrCr) baby. A (nCr) horse like Max, my '06 APHA palomino overo stallion, or Rio, an AQHA buckskin mare, has a 50% chance to pass a cream gene on to offspring. Perlinos, cremellos, and smoky creams oftentimes look similar. They all have 2 cream genes but different red/black factor and agouti genetics. People like to call them 'white' horses. They have a very light body color that is almost white, with mostly pink skin, blue eyes, and a light colored mane and tail. (ee _ _ nCr nO) Palomino (E_ A_ nCr) Buckskin (ee _ _ nCr) Palomino Palominos have a golden color body and a white mane and tail with black skin. Buckskins have a tan color body and a black mane and tail and black points (ear tips, lower legs, nose, mane, and tail). Buckskins can have countershading and primitive markings like leg barring and partial lines down the croup and back. They also can have frosting (white coloring) on the mane and tail. Smoky blacks look very similar to black horses and oftentimes cannot be differentiated from black horses.
Cremellos, perlinos, and smoky creams sometimes vary only genetically and cannot be sorted out by physical appearance. In this case, you would test for red (ee) or black (E_) factors and agouti (A_). The palomino has (ee). The buckskin has (E_) and (A_). Her agouti (A) gene causes black points to show. Cremellos, perlinos, and smoky creams (CrCr) are oftentimes referred to as pink horses. More examples of perlino horses (carrying 2 cream genes): Angel Joe Star Buck (EEAaCrCr), Miss Poco Buck Swen (EeAaCrCr) More examples of buckskin horses (carrying 1 cream gene): Dees Miss Peppi (EeA_CrcrDd) and Swen Sparks Fly (E_AaCrcr) More examples of palomino horses (carrying 1 cream gene): Ima Sonny Lil Doc (ee__Crcr) Test for Cream is ~ $25. Dun is commonly represented by (D). A horse carrying (DD) or (Dd) will exhibit similar or the same amounts of dun factor. (dd) is normal, not dun. (DD) and (Dd) cause the body to be diluted; the points are not affected. (DD) and (Dd) horses are represented by buckskin duns, red duns, or grullas. (DD) and (Dd) horses have a dorsal stripe and usually leg barring and/or a shoulder patch. Red Dun (eeaaDd) (Dd) Red dun Kizzy happens to be genetically tested. Her genotype is ee aa Dd. Note that her sorrel coat is diluted to a pinkish red and that she has a dorsal stripe; her points are not affected - they are still sorrel. Also note that since she is aa, recessive agouti, she will produce 100% true black foals with a homozygous black, true black stallion like Black Jack. If these true black foals receive her dun gene (50% chance), they will become grulla. Examples of horses carrying dun: Sallys Miss Ziggy (eeaaDd), Dees Miss Peppi (EeAaDdCrcr) Dunskin Mare (E_A_CrcrDd) Carries both 1 cream gene and 1 dun gene. She has a dorsal stripe, frosting on the mane and tail, a light body color, and leg barring can easily be seen on her front left leg just above the knee. She is Crcr (nCr) and Dd.
Painted horses can be broadly classified into 3 basic color patterns: Tobiano, Overo, and Tovero. →Tobianos usually have distinct characteristics: Sorrel Tobiano (eeA_Tt) Bay Tobiano (E_A_Tt) Black Tobiano (EEaaTT) The tobiano trait is dominant and is represented by (T). →Overos usually have distinct characteristics: Examples of the overo pattern can be seen on Faiths Calico (eeA_nO). Palomino Overo (ee_ _Crcr nO) Sorrel Overo (eeA_nO) »The overo pattern actually consists of 3 separate patterns, which will be discussed further in the genetics section. →Toveros, as the name implies, are a mixture of the two other patterns: An example of the tovero pattern can be seen on Spark of Faith (EeAaTtnO). Black Tovero (E_aaTtnO) Bay Tovero (EeAaTtnO) If you want to breed for a unique pattern or try to avoid a particular outcome, you need to understand pattern genetics. »The tobiano pattern is dominant (T) while non-tobiano is recessive (t). This means that if (T) is present in a foal, (TT) or (Tt), the foal's pattern will be tobiano. If a cross yields a (tt) foal, the foal is absent of the tobiano trait and pattern. An example of a (tt) horse is Star. Most tobianos are heterozygous (Tt), but some, like Black Jack, are homozygous dominant (TT) and will produce painted offspring 100% of the time. Testing for these genes will be covered further down the page. (tt) is said to be homozygous recessive and describes no expression of the tobiano gene or pattern. Black Jack (TT) x (TT) Mare Black Jack (TT) x (Tt) Mare Black Jack (TT) x (tt) Skip 100% Chance of Tobiano Foal Heterozygous Stud (Tt) x (Tt) Heterozygous Mare 75% Chance of Tobiano Foal Heterozygous Stud (Tt) x (tt) Skip Homozygous Recessive Stud (tt) x (tt) Skip 50% Chance of Tobiano Foal 0% Chance of Tobiano Foal »The term overo describes 3 genetically distinct color patterns that have been lumped together in 1 description through habit and convenience.
The 3 overo color patterns include the frame overo, the splashed white overo, and the sabino pattern. →The frame overo usually has white patches centered in the body and neck with coloring around them. An example of this can be seen on Faith. The frame overo pattern acts as a dominant gene (nO). Commonly, a frame overo mated to a solid horse will produce 50% painted (nO) offspring. Some frame overos can be almost solid, lacking in spots, and will still produce painted offspring 50% of the time. Frame overos can also produce lethal white foals (OO). By mating two frame overos, (nO) x (nO), there is a 25% likelihood that a foal will receive both copies of the (O) gene. If a foal receives two copies of the (O) gene, the foal is born white and will die of neural-related gut abnormalities. Blood typing and DNA testing can eliminate the possibility of lethal whites by providing the opportunity for responsible breeding plans. Max (nO) x (nn) Mare Max (nO) x (nO) Missy 50% Chance of Frame Overo Foal 50% Chance of Frame Overo Foal 25% Chance of Lethal White Foal Since the frame overo pattern (nO) does exhibit dominance, it is helpful in the production of the overo pattern in offspring. The frame overo pattern is very desirable. Breeding between (nO) and (nn) horses is 100% safe. The only reason you need to worry about lethal whites is if one horse you are breeding is (nO) and the other horse hasn't been tested. In that case, testing the other horse with a simple genetic test can determine if the cross may produce a lethal white. Production of lethal whites is 100% avoidable. →The sabino pattern describes horses with flecks, patches, and roan areas. These horses usually have blue or partially blue eyes, four white feet and legs, a mostly white head, and a speckled coat. A lot of deviation from these general guidelines occurs. Sabinos can be mostly white and survive, unlike lethal whites, and are not currently associated with lethal whites.
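The frame-overo arithmetic above can be checked with the same Punnett-square logic. This is a hypothetical helper (not part of any testing service's API), using 'n' for no frame gene and 'O' for the frame allele; the point is that (nO) x (nO) yields a 1-in-4 lethal (OO) outcome, while (nO) x (nn) can never produce one:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def overo_cross(parent1, parent2):
    """Outcome odds on the frame-overo locus ('n' = none, 'O' = frame).

    'OO' is the lethal white genotype; 'nO' is a frame carrier.
    """
    counts = Counter(
        # sort 'n' before 'O' so the carrier genotype reads 'nO', as in the text
        "".join(sorted(a + b, key=str.isupper))
        for a, b in product(parent1, parent2)
    )
    total = sum(counts.values())
    return {g: Fraction(n, total) for g, n in counts.items()}

risky = overo_cross("nO", "nO")  # 1/4 nn, 1/2 nO, 1/4 OO (lethal)
safe = overo_cross("nO", "nn")   # 1/2 nn, 1/2 nO - no OO possible
print(risky["OO"])   # 1/4
print("OO" in safe)  # False
```

This mirrors the statement in the text: a frame carrier over a tested (nn) partner is 100% safe, and the 25% lethal risk only arises from a carrier-to-carrier mating.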
→Splashed white is the least common coat pattern. Genetic study of the splashed white is even newer than that of the frame overo. Some evidence of deafness seems to be linked to splashed white horses. No homozygous splashed whites have been documented, and genetic defects have yet to be linked to this scarcity. Splashed white horses often look like they've been dipped in white paint. You may choose to have many different DNA tests performed on your horse. DNA testing is simple and cost effective. Several tests include Red/Black Factor (EE or ee), Tobiano Homozygosity (TT), Lethal White Overo (LWO/OLWS), Hyperkalemic Periodic Paralysis (HYPP), and Hereditary Equine Regional Dermal Asthenia (HERDA). In most cases, testing is $25 to $40 a test and requires only a sample of the horse's mane or tail including the hair roots. - H/H - homozygous dominant HYPP affected horse showing severe symptoms - N/H - heterozygous HYPP carrier showing minimal symptoms - N/N - homozygous recessive normal horse with normal genes - N/N - homozygous dominant normal horse with normal genes - N/H - heterozygous HERDA carrier with no symptoms - H/H - homozygous recessive HERDA affected horse showing symptoms You can visit sites like Pet DNA Services of AZ and Animal Genetics Inc. or shop around for less expensive testing. Click here for a great color calculator from Animal Genetics Inc.! Now that you know your EE's, TT's, and LWO's, you'll be better able to use the information! If you are thinking about breeding your mare to a (nO) frame overo, you should have your mare tested for the (O) gene. Just because she is an overo does not mean that she will be a carrier, but you must remember that even solid horses can be carriers for the gene. Even though mating (nO) with (nO) has only a 25% chance of producing a lethal white, anything more than 0% should be enough to sway that decision. You should never mate two carriers together. Why not? Well, what is a lethal white?
A lethal white foal is a foal whose intestinal system has not developed proper neural pathways. The foal will be born almost pure white and will die at about 72 hours, after its first meal has not been properly digested. That said, it is not a horrible thing to have a carrier. It can be a positive thing, as the offspring will be 50% overo and most likely painted. If the dam or sire is (nO), that does not mean that the offspring will be (nO), even if they are painted, especially since tobianos can be (nO). Frame overo is a desirable coat pattern, especially in riding horses, show horses, and event horses, because it is flashy. Careful breeding will create beautiful horses and will avoid lethal whites. Another important thing to remember is that sometimes your tobiano mare or stallion may be a carrier. Not all (nO) horses are frame overos, much less overos at all. They can be tobianos and, rarely, solid AQHA quarter horses, mini horses, and even thoroughbreds. The overo gene is independent of the tobiano gene. The tobiano gene may show in your tobiano horse while the overo, (nO), remains hidden phenotypically, or visually. The two different traits, tobiano and overo, may appear at different loci on the same chromosome or on two different chromosomes. So the offspring could end up with both (T) and (O). If you tested the horse only for the tobiano allele, this might deter you from testing for (O). Tobiano horses and solid horses can still be (nO) and therefore potentially produce lethal whites, and they should be tested for the LWO trait if breeding with a (nO) stud or mare. I hope this has helped to either better inform you or to clear up a few things that may have been a little fuzzy. As more research is done on coat color and coat pattern genetics, I am sure things will get much more complicated. What? ...Did you think it would get any easier?
The American Paint Horse Association has some great information on their website in the form of PDF files and can provide you with some great brochure information as well. In addition, genetic testing web sites have very accurate information. Q: What color will two sorrels make? A: Sorrel (or chestnut). Two red-based (ee) parents can only produce red-based offspring. Q: Can you get a foal of a certain color dilution or modifier without the sire or dam carrying it? E.g., can I get a dun foal without the sire or dam carrying dun? A: No. Color genetics do not just pop out of nowhere. It is a predictable, factual process. Dun horses are made by at least one parent carrying dun, and it is not guaranteed unless a homozygous dun parent is used. Cream gene horses are made by at least one parent carrying cream, and it is not guaranteed unless a homozygous cream parent is used. Gray horses are made by at least one parent carrying gray, and it is not guaranteed unless a homozygous gray parent is used. Q: Can two grullas, two blacks, two bays, etc. produce a SORREL COLORED foal? A: YES, they can produce a sorrel colored foal (technically a chestnut if out of true black or grulla parents), depending on each parent's color genetics. For example: Grulla Ee aa Dd x Grulla Ee aa Dd can sometimes equal ee aa dd = chestnut (looks like sorrel); Black Ee aa x Black Ee aa can sometimes equal ee aa = chestnut (looks like sorrel); Bay Ee Aa x Bay Ee Aa can sometimes equal ee Aa or ee aa = sorrel or chestnut. The ONLY WAY TO GUARANTEE NO SORREL IS WITH BLACK HOMOZYGOSITY, DILUTION HOMOZYGOSITY, ETC. My stallion, Black Jack, is homozygous black, so he will never produce a sorrel. An EE parent can never make an ee offspring. Q: What is the difference between sorrel and chestnut? A: Sorrel is eeA_ and chestnut is eeaa; this can be distinguished only by color genetics information from the parents or by testing. It cannot be seen phenotypically. Q: What is phenotype and genotype? A: Phenotype describes physical characteristics. This horse looks sorrel.
That horse looks cremello. Genotype describes the actual genetics. This horse is genetically a chestnut, not a sorrel, although they look the same (eeaa vs. eeA_). That horse is genetically a perlino, not a cremello, although they look the same or similar (E_A_CrCr vs. ee _ _ CrCr). Q: What is homozygous and how do I know if my horse is homozygous? A: Homozygous means the horse has two of the same exact type of alleles. For example, my stallion, Sparks Black Jack, is EEaaTT and is therefore triple homozygous for EE, aa, and TT (black, recessive agouti, and tobiano). These are examples of homozygosity: aa, AA, EE, ee, TT, CrCr, DD, GG. If both parents are homozygous, then the offspring is homozygous. It cannot be seen phenotypically and must be determined by information from parents or genetic testing. Q: What stallion can I use to make a tobiano? A: You will need to use a homozygous tobiano stallion to guarantee that your foal is tobiano. Otherwise, if your mare is not homozygous (she is heterozygous tobiano or solid), then it is possible to get a solid foal. Q: Will the paint pattern or 'spots' develop on my foal? A: No; paint patterns do not develop on foals. They are born with their paint patterns, and new 'spots' do not show up later. The only way a small white spot would possibly show up would be because of scarring. Paints do not act like roans or grays, which roan out or gray out. There is no such thing as 'painting out'!! Q: What crosses produce a palomino horse? A: Sorrel x Palomino = 50% sorrel, 50% palomino Sorrel x Cremello = 100% palomino Palomino x Palomino = 25% sorrel, 50% palomino, 25% cremello A heterozygous black horse with cream, such as a buckskin, perlino, smoky black, or smoky cream, can also be used. Please use a color calculator to test your own crosses. Q: What offspring result from mating to a gray stallion? A: If the stallion is homozygous gray, the offspring will be gray.
If the stallion is heterozygous gray, then there is only a 50% chance his offspring will be gray, given that the mare does not carry the gray gene. Q: What stallion will always pass dun? A: A stallion that is homozygous dun. This can be a dunskin, bay dun, grullo, or red dun with DD, homozygous dun. Q: Can my frame overo horse be homozygous for frame overo? A: No. Homozygous frame overo, or OO, is lethal. Only nO can exist in a viable foal. Q: Can a paint horse produce a solid? A: Yes! A heterozygous paint horse can produce a solid when bred to a solid or ANOTHER heterozygous paint horse. The ONLY way you are guaranteed a paint foal is if ONE or both of the parents is homozygous, like my stallion, Sparks Black Jack, who will always produce painted babies. A heterozygous paint horse is a heterozygous tobiano paint (Tt), a frame overo horse (nO), etc. Just because your horse is painted does not mean that all of its babies will be, too. Q: If you breed a solid horse to a homozygous tobiano horse, will you get a paint foal? A: Yes, the offspring is guaranteed to inherit a tobiano gene and will be Tt. Q: What are the color genetics for _____________?
A: Sorrel is ee _ _; Bay is E_A_; Black is E_aa. Palomino is ee _ _ nCr; Buckskin is E_A_nCr; Smoky Black is E_aanCr. Cremello is ee _ _ CrCr; Perlino is E_A_CrCr; Smoky Cream is E_aaCrCr. Red Dun is ee _ _ D_; Bay Dun is E_A_D_; Grullo is E_aaD_. Dunalino is ee _ _ nCrD_; Dunskin is E_A_nCrD_; Smoky Grullo is E_aanCrD_. Gray is _ _ _ _ G_.

GENETIC COLOR CHART

| Gene(s) added | Sorrel Base (ee _ _) | Bay Base (E_A_) | Black Base (E_aa) |
| --- | --- | --- | --- |
| 1 cream gene | Palomino (ee _ _ nCr) | Buckskin (E_A_nCr) | Smoky Black (E_aanCr) |
| 2 cream genes | Cremello (ee _ _ CrCr) | Perlino (E_A_CrCr) | Smoky Cream (E_aaCrCr) |
| Dun gene | Red Dun (ee _ _ D_) | Bay Dun (E_A_D_) | Grullo (E_aaD_) |
| Dun gene and 1 cream | Dunalino (ee _ _ nCrD_) | Dunskin (E_A_nCrD_) | Smoky Grullo (E_aanCrD_) |
| Gray gene | Gray (ee _ _ G_) | Gray (E_A_G_) | Gray (E_aaG_) |
| Tobiano gene (T_) | Any color above + tobiano pattern | Any color above + tobiano pattern | Any color above + tobiano pattern |
| Frame overo gene (nO) | Any color above + frame overo pattern | Any color above + frame overo pattern | Any color above + frame overo pattern |
| Tobiano (T_) and frame overo (nO) | Any color above + tovero pattern | Any color above + tovero pattern | Any color above + tovero pattern |

*nCr and Crcr both mean carrying only 1 cream gene. They are interchangeable. *A (_) denotes that the space can be filled with either a dominant or recessive gene with no change in color.
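The color chart above lends itself to a lookup function. Below is a small sketch (the function name and argument convention are my own; gray is treated as masking everything, per the chart, and pattern genes are left out since they overlay any color) that names a coat from black/red factor, agouti, cream count, and dun:

```python
def color_name(black_red, agouti, cream=0, dun=False, gray=False):
    """Name a coat per the genetic color chart.

    black_red: 'EE', 'Ee', or 'ee'; agouti: 'AA', 'Aa', or 'aa';
    cream: number of cream genes (0-2); dun/gray: True if carried.
    """
    if gray:
        return "Gray"  # gray eventually masks any base color
    # Base color from black/red factor and agouti.
    if black_red == "ee":
        base = "Sorrel"
    elif "A" in agouti:
        base = "Bay"
    else:
        base = "Black"
    # Apply cream dilution per the chart's rows.
    diluted = {
        ("Sorrel", 0): "Sorrel",   ("Bay", 0): "Bay",      ("Black", 0): "Black",
        ("Sorrel", 1): "Palomino", ("Bay", 1): "Buckskin", ("Black", 1): "Smoky Black",
        ("Sorrel", 2): "Cremello", ("Bay", 2): "Perlino",  ("Black", 2): "Smoky Cream",
    }[(base, cream)]
    if dun:
        return {
            "Sorrel": "Red Dun", "Bay": "Bay Dun", "Black": "Grullo",
            "Palomino": "Dunalino", "Buckskin": "Dunskin", "Smoky Black": "Smoky Grullo",
        }.get(diluted, diluted)
    return diluted

print(color_name("Ee", "Aa", cream=1))            # Buckskin
print(color_name("EE", "aa", dun=True))           # Grullo
print(color_name("ee", "Aa", cream=1, dun=True))  # Dunalino
```

Any tobiano, frame overo, or tovero pattern would simply be appended to the name, since the chart applies the patterns on top of any color.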
World Education Market Commissioner, Responsible for Education and Culture There is a commitment to learning by the European Commission, but a lack of access to learning. That is why the private sector will have to contribute more. There is a lack of investment - only 2.3 percent of total salary. There is a deficit in the output of scientific specialists; the total number of graduates should increase by 15 percent by 2010 and the gender imbalance should decrease. By 2010 the target is for 80 percent of 22-year-olds to have completed PSE. By 2010 EU participation in lifelong learning should be 12.5 percent of adults. Universities are faced with challenges; they are not competitive. They need to put forward concrete ideas for optimising. A new flagship project - the Erasmus Mundus project - demonstrates the openness of European higher education by helping students around the world to engage in post-graduate studies at European universities. "Quality should join with quality in order to create real centers of excellence." The project generates friendship by promoting dialogue and understanding. It also promotes the European ideals of freedom and democracy. It promotes learning not just about scientific excellence but also about human values. Competitiveness also means putting people where jobs are, which is why it is important to make people mobile. Part of the goal is to encourage training outside one's own country. Also, we encourage the learning of at least two languages other than one's own. Reforms will not be successful if we cannot break down traditional barriers and end the artificial separation between education and professional training. New technologies can help us achieve this aim through access and new partnerships (e.g. museums, industry). We are making great strides. 93 percent of European schools are connected, with 17 students per computer and dropping.
To take advantage of the added value of e-learning we need a new organizational model, for example, school twinning, where we match schools across borders with common lessons - to learn from each other. Remember the Bologna process, of which Erasmus is a consequence. It was launched in 1999 to create a single higher education area. Because it worked so well, ministers of professional training want to adapt it to professional training. "Step by step we go further in order to do things together." We are building tools to help people be mobile in their labour markets: - We must create a framework for the transparency of qualifications and competencies, single entrance requirements - We have to have common criteria for quality management, quality indicators, practical - We must create a credit transfer system for vocational training - Common European principles for validation of non-formal or informal learning. "A silent revolution has been underway in Europe, not widely known because the ministers that work together don't often speak in front of the press... but the progress has been extraordinarily quick... because the time was right to do it..." We want to harmonize just the result of our systems, not the process. We want different ways of doing things, but to have the results be comparable. We have no intention of changing education systems that have grown through centuries. The commission will not undermine traditional academic values - standards, academic freedom, etc. - the commission is sympathetic to the view that transnational education should be done according to the values of the educational sphere. Regarding enlargement: we have already completed enlargement in education and training - our programs are already open - they have already had first-hand experience of European education. They are very positive and bring in something.
They are the solution to the problem and not the problem. Multimedia means new markets for educational suppliers - there are opportunities to be seized. This conference is about networking. We will provide information on our initiatives - da Vinci and Socrates - and the new portals - and more. The European commission is actively working toward the modernization of the education system, but we can only achieve this through partnerships, because you cannot achieve success by top-down reform; you need the support of the teachers and the sector. "Education for all, ladies and gentlemen, is more than just an ambition for all. It is a vision... it is not a utopia, it is based in European values... of freedom. We promote a society of inclusion, of participation, where education is more than just preparing for a job, it means preparing for life." The message speaks for itself, I think. As I was listening to the talk it seemed to me that my own nation - Canada - has more in common with the European approach to education than with the American. Canadian industry minister John Manley suggested this weekend that post-secondary education should be free for all Canadians. While there were those who dismissed the idea, it was on the basis of cost, not on the belief that there is something wrong with the idea in principle. We need to convince people, especially those in the private sector, that there are many more opportunities to be had when the people are educated and have a yearning for education. The vision outlined by the EU commissioner is very much what I would hope would be adopted not only in Canada but around the world. Panel Discussion: Waiting for Godot - The State of Digital Content Development CEO, Giunti Interactive Labs The next big thing is narrowcasting, which is defined as "delivering content just in time to communities of practice." The first generation of e-learning was vendor based. We are now approaching the third generation, which is services based...
At the core of this approach is standardization. Content will be in repositories, delivered as needed. "You must not tie your content to a specific platform and/or a specific medium." "We as publishers are quite fed up with reinventing content every time a new medium comes out." The principle of RIAD content: Reusable, Interoperable, Accessible, Durable... There are three major features of the (new) Copernican revolution: - Content objects (learning objects a subtype) - chunking of information to extract and place content, structure and semantics into separate boxes. - XML and the semantic web - the boxes are standardized in terms of their labeling. You create a standard format and then create thousands of instances - then just change your style sheet - there is widespread agreement on this - convergent standards. But "the great difficulty is that governments and institutions that should actually pay for the understanding of standards aren't committed yet." The move toward standards goes through three levels: Level 1 - pioneering R&D - for example, ARIADNE, Learning on demand or bilearn. Level 2 - early adoption - go through specification bodies to aggregate people. Level 3 - mass market adoption - go through (government) sector profiling bodies. Examples include SCORM, eGIF (UK government specification), OKI. After level 3, standardization bodies adopt the specification and formalise it. Examples include IEEE LTSC and ISO-JTC1 - ISO / CEN. So we have a new publishing scenario, a new publishing ecosystem: the right time, the right information, delivered in the right place. There is no interoperability yet at the content level. Providers are concentrating on content hubs and digital repositories. But, "Don't buy single-vendor turn-key solutions." Cardinali's comments are accurate so far as they go, but there is a lot unsaid in his message.
If we look at standards strictly from a production point of view, then there is nothing to it beyond coding content to meet the standards agreed upon by various vendors of viewing applications. And these vendors might even let producers view the standards! But of course we could do that simply enough with HTML and style sheets. Where standards are supposed to take us is into the realm of 'the right information, to the right person...' It's not so clear that this is simply a matter of getting the standards right. But there is more to this panel, and this topic, so.... CEO, International Business Law Services The main message: not all content should be free, and advertising revenue will not suffice. Content is the lifeblood of the publishing industry. But there are objections to this view. Why wait for e-content? Why spend money when it's for free? Online content has little value. Here are some answers: - it's not all free, some is subscription based - there are legal issues concerning the use of free content for commercial purposes The main question is, how do you determine the cost of online content? The user determines the value, and so the cost depends on individual perceptions. What counts as a reasonable price will vary. Also, there is a difference between searching the web and searching indices and abstracts in subscription-based services; the cost of the free content in this case is the time spent looking around the web for the same information. People who value content the most are end-users who get content integrated into their business. So we have focused on high-end content instead of consumer-oriented content. We also focus on the quality that a well recognized brand name brings. We must also ask how to market e-content. To begin, distinguish between free and paid. Content that is of higher value is paid content. This ensures that there is plenty of free information, but this model holds back the most desirable content, which becomes premium information.
We need to understand content as merchandise. We want to add value. "To simply throw raw content onto a website is insufficient." For example, we add value when we index or summarize the content. As we were building this global database we were building online learning which we were marketing. While it is true info is everywhere on the internet, it is hard to find. People will pay for quick access to quality niche information. There is a market for 'must-have' information. When will all this come together? When there is attractive content at an affordable price. What this talk amounts to is a recognition that there will always be some free content on the web, and that the publishers no longer have a monopoly. This, in itself, is an important step forward. Penn also recognizes that publishers must "add value" if they are to be relevant. So far so good. But she is mistaken if she believes that providing summaries and indices will do the trick. Consider Edu_RSS, for example. It already provides summaries and an index. And it is free. New technologies, such as RSS, are automating this process. Publishers are going to have to look more deeply to find a model that will save their businesses. Acting Director General, Canadian Culture Online Outline of 'The Canadian Model.' 3 major objectives: - create a critical mass of quality digital content - increase the visibility of and build audiences for this content - we are building a cultural portal where all Canadian citizens and the world will have access - create the right environment for content creation through (1) DRM - to access content through credit cards - the first of these clearance systems will be online in the fall - and (2) to encourage R&D, e.g. into First Nations, French and English speaking Canadians - half our funds are for the creation of French language content on the internet; youth, authors and artists.
The approach to e-learning is this: every client is asked not only to digitize content but also to create learning resources, such as teachers' guides, games, and lesson plans. These resources are offered free of charge to teachers through the internet. We have also launched specific programs (in partnership with Telus) to make $200,000 available to private sector companies for content development. There are challenges. In Canada education is a provincial responsibility, which means 13 systems. The market is therefore very small and quite fragmented. Schools are very well equipped, but sometimes outdated. The school system is going through major changes. Teachers don't have the time. They need tools. E-learning is underfunded; most money goes to school books. In response, we first promote accessibility. Canadian teachers are not aware of the availability of the materials. We need metadata standards to facilitate accessibility. But organizations are not committed - we have developed standardization guidelines and are trying to encourage them. It is difficult for the private sector. There is no solid business model. We need to address copyright issues and awareness - for example, the copyright act prohibits a teacher from making a presentation using a single computer - we need to work on this with Industry Canada. We are also trying to enable e-learning growth. To do this, we need government - private sector partnerships. For example, we ask cultural institutions to work with new media producers to add value. Museums don't necessarily have knowledge of the latest multimedia technology. We want to create an environment for the creation and use of new content through such partnerships. But... the private sector needs to understand users' needs and demands.
To advance, we need: - broadband access - metadata and common tech standards - resolution of digital copyright issues - a way to integrate e-learning culture and tools into teacher training Looking at revenue models, it is still difficult to identify models that work. Some models are emerging (in order of decreasing viability): - partnerships between public and private sectors - licensing of online products for use in class - sales of CD-ROMs to specific school boards - access / subscription fees - corporate and private sponsorships Who will own what, who will author what? Copyright should remain with creators. We want to allow creators to negotiate within an access framework where content is to be made available for 5 years and authors are compensated. Measures need to be put in place to prevent people from reproducing content. And information about copyright clearance must be made available. Access to content is facilitated through copyright clearance - this is the purpose of the copyright fund. Generally speaking, digitized content should be free for content produced by institutions and not-for-profit organizations. This content should be free because taxpayers have already invested in these products. Canadian Heritage is contributing up to 75% of total project costs, and we help find partners for remaining needs. Digital content created by the private sector follows different rules. The exposure of the private sector is greater. The level of risk is important. The sector needs revenues for core business purposes. But the price to buy content sometimes undermines the profitability of commercial ventures. Commercial content producers will like the government's partnership program; it puts some real cash into their pockets. However, they may view Heritage Canada's program as a whole less enthusiastically, since it essentially involves the creation of a large and very useful library of content available for free to Canadians and the world. Tough.
As Boies said, we - the taxpayers - paid for this content, and it is only right that we are able to view it for free. If anything, we should be looking at ways to offer access to even more taxpayer-supported content. More on that later. Director, Higher Education "We're interested in empowering everybody to create great content" 250 MB of content is being produced per person per year - doubling every 14-18 months. Without the right content at the right time delivered the right way on the right devices to the right people, strategies stall, opportunities are lost. The challenge for all of us is creating value. Learning content is a means of achieving ends - the value of learning content comes in its application. Learning content value is defined by context and intention. Value is increased by tagging content assets with information to determine what they are worth. Distribution is key to value realization. As with any capital asset, learning content must be managed if value is to be maintained - and sustained - over time. There are multiple contexts: data layer, business logic layer, presentation layer. These lead to multiple uses: users, devices, etc. MOTO gives value to learning content (from David Moschella): - 'any time any place' to 'just the right stuff' - 'searching' to 'finding' - 'instructor-centric' to 'learner-centric' - 'time to learn' to 'time to performance' - 'occasional' to 'continuous' - 'consistency' to 'diversity' - 'mass production' to 'mass customization' - Metadata - subjective or objective data about people, places and things - Objects - smallest unit of text, image; conceptually includes people's skills, etc. - Taxonomies - general principles of classification - Ontologies - relationships between items classified in taxonomies In the emerging e-business of e-learning content, the audiences remain the same (education, training, certification, personal development, professional development).
But the businesses change: content production, aggregation, management, (self-)publishing. The commercial product in this model: MOTO. I think Wegner's statistics are compelling. With that much content being produced daily - most of it free - it's hard to see where the value for commercial content comes in. But I think Wegner is mistaken in thinking that there's a good business model in MOTO, at least in the long term. We are rapidly approaching the age when metadata is produced automatically by our authoring tools. Why, then, would we pay people for it? Policy Analyst, IET The question for commercial content producers is how to stay in a market that is characterized by subsidy, by people giving things away. The challenge is to find ways to blend social learning and lifelong learning with e-learning. The response is simple: faster, better, and cheaper e-learning. Every year the barriers are lower for people who want to create free content. The commercial industry is looking to create a sustainable business model with digital rights management. Will this help? The creation of the metadata needed to make this work represents a cost. This must be offset against income. It makes the products more, not less, expensive. Instead: to grow, offer users more. Producers need a knowledge-based approach. Show me how what I don't know relates to what I know (if students have to repeat stuff, they're bored). Products must take cognitive load into account - for example, take into account learners' schooling. It's the equivalent of readability; it's the equivalent of ergonomics (it makes learning faster and more effective). It can be done. Products must also take a whole-life perspective, to become, in effect, personal knowledge management. Also: to grow, cut producer costs. The old model (the model of putting up money in advance and recouping through sales) is dead. The new model is to create content when needed, creating at time of use.
This leads to lower up-front costs, but it requires higher technology - we need tools for reusing and repurposing content, maintaining narrative, and personalization. Also, content producers must drive down cost by a factor of 10 now, and by another factor of 10 two years from now. And DRM must be done in an absolutely transparent fashion - get rid of those business processes - think of them as things that have to be taken care of in seconds, not weeks. And extend use: support users' third-party additions, annotations, re-structuring. Content producers may not like these remarks, but they are the closest thing to realistic this panel saw. I have long been arguing that information technology should lead to a two order-of-magnitude reduction in the price of information. Yet the cost of digital content is greater than the cost of the same content in paper form. How can this be? Publishers need to look, not at the short term, but at the long term. Look at where the trends lead. 250 megabytes of new content per person per year. Doubling every year and a half. Content production getting easier. Where do you think all this is headed?
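The claim that metadata will soon be produced automatically by our authoring tools can be made concrete with a small sketch. The function below is a hypothetical illustration (its name and fields are my own, not any real authoring tool's API): it derives a minimal metadata record - title, summary, word count, keywords - from raw text with no human tagging.

```python
import re
from collections import Counter

# A tiny stopword list; a real tool would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "it", "for", "that", "with", "this", "on", "as"}

def auto_metadata(text, n_keywords=5):
    """Derive a minimal metadata record from raw content:
    title (first non-blank line), summary (first sentence of the body),
    word count, and keywords (most frequent non-stopword terms)."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    title = lines[0]
    body = " ".join(lines[1:]) or title
    # First sentence: split after terminal punctuation.
    summary = re.split(r"(?<=[.!?])\s+", body)[0]
    words = re.findall(r"[a-z]+", body.lower())
    keywords = [w for w, _ in Counter(
        w for w in words if w not in STOPWORDS and len(w) > 3
    ).most_common(n_keywords)]
    return {"title": title, "summary": summary,
            "wordcount": len(words), "keywords": keywords}
```

A production pipeline would add richer fields (author, rights, classification against a taxonomy), but even this much replaces a fair amount of manual cataloguing, which is why metadata-as-product looks shaky in the long term.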
Soviet war in Afghanistan |Soviet war in Afghanistan| |Storm-333 - Khost - Magistral - Panjsher I-VI - Panjsher VII| |Afghan Civil War| |Soviet involvement · Civil War (1989-1992) · Civil War (1992-1996) · Civil War (1996-2001) · U.S. involvement| The Soviet war in Afghanistan was a nine-year conflict involving Soviet forces and the Mujahideen insurgents who were fighting to overthrow Afghanistan's Marxist People's Democratic Party of Afghanistan (PDPA) government. The Soviet Union supported the government while the insurgents found support from a variety of sources, including the United States and Pakistan. The initial Soviet deployment of the 40th Army into Afghanistan took place on December 25, 1979, and the final troop withdrawal took place between May 15, 1988, and February 2, 1989. On February 15, 1989, the Soviet Union announced that all of its troops had departed the country. The country's nearly impassable mountainous and desert terrain is reflected in its ethnically and linguistically mixed population. Pashtuns are the dominant ethnic group, along with Tajiks, Hazara, Aimak, Uzbeks, Turkmen and other small groups. The Saur Revolution Mohammad Zahir Shah succeeded to the throne and reigned from 1933 to 1973. Zahir's cousin, Mohammad Daoud Khan, served as Prime Minister from 1953 to 1963. The Marxist PDPA grew significantly in these years. In 1967, the PDPA split into two rival factions: the Khalq (Masses) faction headed by Nur Muhammad Taraki and Hafizullah Amin, and the Parcham (Banner) faction led by Babrak Karmal. Former Prime Minister Daoud seized power in an almost bloodless military coup on July 17, 1973, amid charges of corruption and poor economic conditions. Daoud put an end to the monarchy, but his attempts at economic and social reforms were unsuccessful.
Intense opposition from the factions of the PDPA was sparked by the repression imposed on them by Daoud's regime. With the purpose of ending Daoud's rule, the factions of the PDPA reunified. On April 27, 1978, the PDPA overthrew and executed Daoud along with members of his family. Nur Muhammad Taraki, Secretary General of the PDPA, became President of the Revolutionary Council and Prime Minister of the newly established Democratic Republic of Afghanistan. Democratic Republic of Afghanistan Factions inside the PDPA After the revolution, Taraki assumed the Presidency, the Prime Ministership, and the General Secretaryship of the PDPA. In reality, the government was divided along partisan lines, with President Taraki and Deputy Prime Minister Hafizullah Amin of the Khalq faction against Parcham leaders such as Babrak Karmal and Mohammad Najibullah. Within the PDPA, conflicts resulted in exiles, purges and executions. During its first 18 months of rule, the PDPA applied a Marxist-style program of reforms. Decrees setting forth changes in marriage customs and land reform were not received well by a population deeply immersed in tradition and Islam. Thousands of members of the traditional elite, the religious establishment and the intelligentsia were persecuted. By mid-1978, a rebellion had begun in the Nuristan region of eastern Afghanistan, and civil war spread throughout the country. In September 1979, Deputy Prime Minister Hafizullah Amin seized power after a palace shootout that resulted in the death of President Taraki. Two months of instability overwhelmed Amin's regime as he moved against his opponents in the PDPA and against the growing rebellion. The Soviet-Afghan Friendship Treaty In December 1978, Moscow and Kabul signed a bilateral treaty of friendship and cooperation that permitted Soviet deployment in case of an Afghan request. Soviet military assistance increased and the PDPA regime became increasingly dependent on Soviet military equipment and advisers.
The regimes of the Soviet Union and the Democratic Republic of Afghanistan enjoyed comfortable diplomatic relations. Prior to the Soviet deployment, up to 400 Soviet military advisers had been dispatched to Afghanistan in May 1978. On July 7, 1979, the Soviet Union sent an airborne battalion with crews in response to a request from the Afghan government for such a delivery. Subsequent requests by the Afghan government related more broadly to regiments rather than to individual crews. However, by October 1979 relations between Afghanistan and the Soviet Union had soured somewhat as Amin dismissed Soviet advice on stabilizing his government. Islamic guerrillas in the mountainous countryside harassed the Afghan army to the point where the government of President Hafizullah Amin turned to the Soviet Union for increased amounts of aid. With Afghanistan in a dire situation, under assault by an externally supported rebellion, the Soviet Union deployed the 40th Army in response to previous requests from the government of Afghanistan. The 40th Army consisted of three motorized rifle divisions, an airborne division, an assault brigade, two independent motorized rifle brigades and five separate motorized rifle regiments. U.S. aid to anti-communist factions during 1979 The former director of the CIA and current nominee for Secretary of Defense, Robert Gates, stated in his memoirs "From the Shadows" that American intelligence services began to aid the opposing factions in Afghanistan six months before the Soviet deployment. On July 3, 1979, US President Jimmy Carter signed a directive authorizing the CIA to conduct covert propaganda operations against the revolutionary regime. Carter advisor Zbigniew Brzezinski stated: "According to the official version of history, CIA aid to the Mujahadeen began during 1980, that is to say, after the Soviet army invaded Afghanistan, 24 Dec 1979. But the reality, secretly guarded until now, is completely otherwise."
Brzezinski himself played a fundamental role in crafting U.S. policy, which, unbeknownst even to the Mujahideen, was part of a larger strategy "to induce a Soviet military intervention." In a 1998 interview with Le Nouvel Observateur, Brzezinski recalled proudly: - "That secret operation was an excellent idea. It had the effect of drawing the Soviets into the Afghan trap..." [...] "The day that the Soviets officially crossed the border, I wrote to President Carter. We now have the opportunity of giving to the Soviet Union its Vietnam War." The Soviet deployment "Brotherly aid" The Soviet Union decided to provide aid to Afghanistan in order to preserve its regime's revolution, but felt that Amin, as the Afghan leader, was incapable of accomplishing this goal. Soviet leaders, based on information from the KGB, felt that Amin had destabilized the situation in Afghanistan. The KGB station in Kabul had warned following Amin's initial coup that his leadership would lead to "harsh repressions, and as a result, the activation and consolidation of the opposition." <ref>Walker, Martin (1994). The Cold War - A History. Toronto, Canada: Stoddart.</ref> The Soviets established a special commission on Afghanistan, composed of KGB chairman Yuri Andropov, Ponomaryev from the Central Committee and Dmitry Ustinov, the Minister of Defence. In late October they reported that Amin was purging his opponents, including Soviet sympathisers; that his loyalty to Moscow was false; and that he was seeking diplomatic links with Pakistan and possibly China. The clear implication was that Amin would have to be replaced with a pro-Soviet leadership capable of commanding broader popular support in Afghanistan. The final arguments for eliminating Amin were based on information obtained by the KGB from its agents in Kabul: supposedly, two of Amin's guards had killed the former president Nur Muhammad Taraki with a pillow, and Amin was suspected of being a CIA agent.
The latter, however, is still disputed: Amin always and everywhere showed official friendliness to the Soviet Union. Soviet General Vasily Zaplatin, a political advisor at the time, claimed that four of Taraki's young ministers were responsible for the destabilization, but Zaplatin failed to emphasize this enough. Even after the execution of Amin and two of his sons, his wife claimed that she and her remaining two daughters and a son only wanted to go to the Soviet Union, because her husband was its friend; she did eventually go to the Soviet Union to live. Soviet coup and invasion On December 22, the Soviet advisors to the Afghan Armed Forces advised them to undergo maintenance cycles for tanks and other crucial equipment. Meanwhile, telecommunications links to areas outside of Kabul were severed, isolating the capital. With a deteriorating security situation, large numbers of Soviet airborne forces joined stationed ground troops and began to land in Kabul. Simultaneously, Amin moved the offices of the president to the Tajbeg Palace, believing this location to be more secure from possible threats. On December 27, 1979, 700 Soviet troops dressed in Afghan uniforms, including KGB OSNAZ and GRU SPETSNAZ special forces from the Alpha Group and Zenith Group, occupied major governmental, military and media buildings in Kabul, including their primary target, the Tajbeg Presidential Palace. The operation began at 7:00 P.M., when the Soviet Zenith Group blew up Kabul's communications hub, paralyzing the Afghan military command. At 7:15, the storming of the Tajbeg Palace began, with the clear objective of deposing and killing President Hafizullah Amin. The operation lasted 45 minutes and ended with the death of Amin. Simultaneously, other objectives were occupied (e.g. the Ministry of the Interior at 7:15). The operation was fully complete by the morning of December 28.
The Soviet military command at Termez, in Soviet Uzbekistan, announced on Radio Kabul that Afghanistan had been liberated from Amin's rule. According to the Soviet Politburo, they were complying with the 1978 Treaty of Friendship, Cooperation and Good Neighborliness that former President Taraki had signed. Moscow calculated that Amin's ouster would end the factional power struggle within the PDPA and also reduce Afghan discontent. A broadcast allegedly from the Kabul radio station, but identified as actually coming from a facility in Soviet Uzbekistan, announced that the execution of Hafizullah Amin had been carried out by the Afghan Revolutionary Central Committee. That committee then elected as head of government former Deputy Prime Minister Babrak Karmal (who had been demoted to the relatively insignificant post of ambassador to Czechoslovakia following the Khalq takeover), and announced that it had requested Soviet military assistance. <ref>The Soviet Invasion of Afghanistan in 1979: Failure of Intelligence or of the Policy Process? - Page 7</ref> Soviet ground forces entered Afghanistan from the north on December 27. In the morning, the Vitebsk parachute division landed at the airport at Bagram, and the deployment of Soviet troops in Afghanistan was underway. Within two weeks, a total of five Soviet divisions had arrived in Afghanistan: the 105th Airborne Division in Kabul, the 66th Motorized Brigade in Herat, the 357th Motorized Rifle Division in Kandahar, the 16th Motorized Rifle Division based in northern Badakshan and the 306th Motorized Division in the capital. In the second week of the invasion alone, Soviet aircraft had made a total of 4,000 flights into Kabul.<ref>Fisk, Robert. The Great War for Civilisation: the Conquest of the Middle East. London: Alfred Knopf, 2005. pp.
40-41 ISBN 1-84115-007-X</ref> Soviet occupation of Afghanistan Soviet operations The initial force entering the country consisted of three motor rifle divisions (including the 201st), one separate motor rifle regiment, one airborne division, the 56th Separate Air Assault Brigade, and one separate airborne regiment.<ref>Carey Schofield, The Russian Elite, Greenhill/Stackpole, 1993, p.60-61</ref> Following the deployment, the Soviet troops were unable to establish authority outside Kabul. As much as 80% of the countryside still escaped effective government control. The initial mission, to guard cities and installations, was expanded to combat the anti-communist Mujahideen forces, primarily using Soviet reservists. Early military reports revealed the difficulty which the Soviet forces encountered in fighting in mountainous terrain. The Soviet Army was unfamiliar with such fighting, had no counter-insurgency training, and its weaponry and military equipment, particularly armored cars and tanks, were sometimes ineffective or vulnerable in the mountainous environment. Heavy artillery was extensively used when fighting rebel forces. The Soviets used helicopters (including the Mil Mi-24 Hind gunship, regarded as the most formidable helicopter in the world) as their primary air attack force, supported by fighter-bombers and bombers, ground troops and special forces. In some areas, they conducted a scorched earth campaign, destroying villages, houses, crops, and livestock. International condemnation arose due to the alleged killings of civilians in areas where Mujahideen were suspected of operating. The inability of the Soviet Union to break the military stalemate, gain a significant number of Afghan supporters and affiliates, or rebuild the Afghan Army required the increasing direct use of its own forces to fight the rebels. Soviet soldiers often found themselves fighting against civilians due to the elusive tactics of the rebels.
The Soviets repeated many of the American mistakes in Vietnam, winning almost all of the conventional battles but failing to control the countryside. World reaction U.S. President Jimmy Carter indicated that the Soviet incursion was "the most serious threat to the peace since the Second World War." Carter later placed an embargo on shipments of commodities such as grain and high technology to the Soviet Union from the US. The increased tensions, as well as anxiety in the West about masses of Soviet troops being in such proximity to the oil-rich regions of the Gulf, effectively brought about the end of detente. The international diplomatic response was severe, ranging from stern warnings to a boycott of the 1980 Summer Olympics in Moscow. The invasion, along with other events, such as the revolution in Iran and the US hostage stand-off that accompanied it, the Iran-Iraq war, the 1982 Israeli invasion of Lebanon, the escalating tensions between Pakistan and India, and the rise of Middle East-born terrorism against the West, contributed to making the Middle East an extremely violent and turbulent region during the 1980s. Babrak Karmal's government lacked international support from the beginning. The foreign ministers of the Organization of the Islamic Conference deplored the invasion and demanded Soviet withdrawal at a meeting in Islamabad in January 1980. The United Nations General Assembly voted by 104 to 18, with 18 abstentions, for a resolution which "strongly deplored" the "recent armed intervention" in Afghanistan and called for the "total withdrawal of foreign troops" from the country. Afghan resistance - See also: Mujahideen By the mid-1980s, the Afghan resistance movement, receptive to assistance from the United States, United Kingdom, China, Saudi Arabia, Pakistan, and others, had contributed to Moscow's high military costs and strained international relations. Thus, Afghan guerrillas were armed, funded, and trained mostly by the US and Pakistan.
The resistance also included contingents of so-called Afghan Arabs (hailed by US President Ronald Reagan as "freedom fighters" and funded by US intelligence services): foreign fighters recruited from the Muslim world to wage jihad against the communists. Notable among them was a young Saudi named Osama bin Laden, whose Arab group eventually evolved into Al Qaeda. Of particular significance was the donation of American-made FIM-92 Stinger anti-aircraft missile systems, which increased aircraft losses of the Soviet Air Force. However, many field commanders, including Ahmad Shah Massoud, stated that the Stingers' impact was much exaggerated. Also, while guerrillas were able to fire at aircraft landing at and taking off from airstrips and airbases, anti-missile flares limited their effectiveness. Pakistan's Inter-Services Intelligence (ISI) and Special Service Group (SSG) were actively involved in the conflict, and in cooperation with the CIA and the United States Army Special Forces supported the armed struggle against the Soviets. It is speculated that the United Kingdom's Special Air Service (SAS) also played an unpublicized role during the war. In May 1985, the seven principal rebel organizations formed the Seven Party Mujahideen Alliance to coordinate their military operations against the Soviet army. Late in 1985, the groups were active in and around Kabul, unleashing rocket attacks and conducting operations against the communist government. By mid-1987 the Soviet Union announced it was withdrawing its forces. Sibghatullah Mojaddedi was selected as the head of the Interim Islamic State of Afghanistan, in an attempt to reassert its legitimacy against the Moscow-sponsored Kabul regime. Mojaddedi, as head of the Interim Afghan Government, met with then President of the United States George H.W. Bush, achieving a critical diplomatic victory for the Afghan resistance.
Pakistani involvement and aid to the Afghan resistance The Soviet invasion of Afghanistan also posed a significant security threat to Pakistan from the north-west. United States President Jimmy Carter had accepted the view that Soviet aggression could not be viewed as an isolated event of limited geographical importance but had to be contested as a potential threat to the Persian Gulf region. The uncertain scope of Moscow's final objective in its sudden southward plunge made the American stake in an independent Pakistan all the more important. After the Soviet invasion, Pakistan's military ruler General Muhammad Zia-ul-Haq began accepting financial aid from the Western powers to aid the Mujahideen. The United States, the United Kingdom and Saudi Arabia became major financial contributors to General Zia, who, as ruler of a neighbouring country, greatly helped by ensuring that the Afghan resistance was well-trained and well-funded. Pakistan's Inter-Services Intelligence and Special Service Group now became actively involved in the conflict against the Soviets. After Ronald Reagan became the new United States President in 1981, aid for the Mujahideen through Zia's Pakistan significantly increased. In retaliation, the KHAD, under Afghan leader Mohammad Najibullah, carried out (according to the Mitrokhin archives and other sources) a large number of operations against Pakistan, which also suffered from an influx of weaponry and drugs from Afghanistan. In the 1980s, as the front-line state in the anti-Soviet struggle, Pakistan received substantial aid from the United States and took in millions of Afghan (mostly Pashtun) refugees fleeing the Soviet occupation.
Although the refugees were controlled within Pakistan's largest province, Balochistan under then-martial law ruler General Rahimuddin Khan, the influx of so many refugees - believed to be the largest refugee population in the world <ref>Amnesty International file on Afghanistan URL Accessed March 22, 2006</ref> - into several other regions had a heavy impact on Pakistan and its effects continue to this day. Despite this, Pakistan played a significant role in the eventual withdrawal of Soviet military personnel from Afghanistan. Soviet withdrawal from Afghanistan The toll in casualties, economic resources, and loss of support at home increasingly felt in the Soviet Union was causing criticism of the occupation policy. Leonid Brezhnev died in 1982, and after two short-lived successors, Mikhail Gorbachev assumed leadership in March 1985. As Gorbachev opened up the country's system, it became more clear that the Soviet Union wished to find a face-saving way to withdraw from Afghanistan. The government of President Karmal, established in 1980 and identified by many as a puppet regime, was largely ineffective. It was weakened by divisions within the PDPA and the Parcham faction, and the regime's efforts to expand its base of support proved futile. Moscow came to regard Karmal as a failure and blamed him for the problems. Years later, when Karmal’s inability to consolidate his government had become obvious, Mikhail Gorbachev, then General Secretary of the Soviet Communist Party, said: - The main reason that there has been no national consolidation so far is that Comrade Karmal is hoping to continue sitting in Kabul with our help. In November 1986, Mohammad Najibullah, former chief of the Afghan secret police (KHAD), was elected president and a new constitution was adopted. He also introduced in 1987 a policy of "national reconciliation," devised by experts of the Communist Party of the Soviet Union, and later used in other regions of the world. 
Despite high expectations, the new policy neither made the Moscow-backed Kabul regime more popular nor convinced the insurgents to negotiate with the ruling government. Informal negotiations for a Soviet withdrawal from Afghanistan had been underway since 1982. In 1988, the governments of Pakistan and Afghanistan, with the United States and Soviet Union serving as guarantors, signed an agreement settling the major differences between them, known as the Geneva accords. The United Nations set up a special Mission to oversee the process. In this way, Najibullah had stabilized his political position enough to begin matching Moscow's moves toward withdrawal. On July 20, 1987, the withdrawal of Soviet troops from the country was announced. Among other things, the Geneva accords identified U.S. and Soviet non-intervention in the internal affairs of Pakistan and Afghanistan and a timetable for full Soviet withdrawal. The agreement on withdrawal held, and on February 15, 1989, the last Soviet troops departed on schedule from Afghanistan. Their exit, however, did not bring either lasting peace or resettlement, due in part to United States and Pakistani violations of the Geneva accords. Official Soviet personnel strengths and casualties Between December 25, 1979 and February 15, 1989, a total of 620,000 soldiers served with the forces in Afghanistan (though there were only 80,000-104,000 troops in Afghanistan at any one time): 525,000 in the Army, 90,000 with border troops and other KGB sub-units, and 5,000 in independent formations of MVD Internal Troops and police. A further 21,000 personnel were with the Soviet troop contingent over the same period doing various white collar or manual jobs. The total irrecoverable personnel losses of the Soviet Armed Forces, frontier and internal security troops came to 14,453. Soviet Army formations, units and HQ elements lost 13,833, KGB sub-units lost 572, MVD formations lost 28, and other ministries and departments lost 20 men.
During this period 417 servicemen were missing in action or taken prisoner; 119 of these were later freed, of whom 97 returned to the USSR and 22 went to other countries. There were 469,685 sick and wounded, of whom 53,753 (11.44%) were wounded, injured or sustained concussion and 415,932 (88.56%) fell sick. The high proportion of casualties who fell ill was due to local climatic and sanitary conditions, which were such that acute infections spread rapidly among the troops. There were 115,308 cases of infectious hepatitis, 31,080 of typhoid fever and 140,665 of other diseases. Of the 11,654 who were discharged from the army after being wounded, maimed or contracting serious diseases, 92%, or 10,751 men, were left disabled.<ref>Krivosheev, G. F. (1993). Combat Losses and Casualties in the Twentieth Century. London, England: Greenhill Books.</ref> Material losses were as follows:
- 118 jet aircraft
- 333 helicopters
- 147 main battle tanks
- 1,314 IFV/APCs
- 433 artillery pieces and mortars
- 1,138 radio sets and command vehicles
- 510 engineering vehicles
- 11,369 trucks and petrol tankers

Afghan Civil War (1989-1992)

The civil war continued in Afghanistan after the Soviet withdrawal. The Soviet Union left Afghanistan deep in winter, with intimations of panic among Kabul officials. The Afghan Resistance was poised to attack provincial towns and cities, and eventually Kabul if necessary. Najibullah's regime, though failing to win popular support, territory, or international recognition, was able to remain in power until 1992. Kabul had achieved a stalemate which exposed the Mujahedin's political and military weaknesses. For nearly three years Najibullah's government successfully defended itself against Mujahedin attacks, while factions within the government developed connections with its opponents.
According to Russian publicist Andrey Karaulov, the main reason Najibullah lost power was that Russia refused to sell oil products to Afghanistan in 1992 for political reasons (the new Russian government did not want to support the former communists), effectively triggering a blockade. The defection of General Abdul Rashid Dostam and his Uzbek militia in March 1992 seriously undermined Najibullah's control of the state. In April, Kabul ultimately fell to the Mujahedin because the factions in the government had finally pulled it apart. Najibullah lost internal control immediately after he announced his willingness, on March 18, to resign in order to make way for a neutral interim government. Ironically, until demoralized by the defections of its senior officers, the Afghan Army had achieved a level of performance it had never reached under direct Soviet tutelage. Grain production declined an average of 3.5% per year between 1978 and 1990 due to sustained fighting, instability in rural areas, prolonged drought, and deteriorated infrastructure. Soviet efforts to disrupt production in rebel-dominated areas also contributed to this decline. Furthermore, Soviet efforts to centralize the economy through state ownership and control, and the consolidation of farmland into large collective farms, contributed to economic decline. During the withdrawal of Soviet troops, Afghanistan's natural gas fields were capped to prevent sabotage. Restoration of gas production has been hampered by internal strife and the disruption of traditional trading relationships following the dissolution of the Soviet Union.
- Rambo III was an action movie with Sylvester Stallone, set during the Soviet invasion of Afghanistan.
- The Beast is a movie released in 1988 about the crew of a Soviet T-62 tank and their attempts to escape a hostile region, set during the invasion of Afghanistan in 1981.
- Afghan Breakdown (Afganskiy Izlom), the first in-depth movie about the war, produced jointly by Italy and the Soviet Union, in full cooperation with the Red Army, in 1991.
- The 1987 James Bond movie The Living Daylights, with Timothy Dalton as Bond, was fictionally set in Soviet-occupied Afghanistan.
- 9th Company, the biggest Russian box office success to date. Based upon true events (though largely fictionalized), it depicts the 9th Company being left behind during the Soviet withdrawal from Afghanistan and slaughtered before the withdrawing Soviet forces come to the rescue. Some versions are available with subtitles.
- The Road to Kabul ("الطريق الى كابول"), an Arabic television series, explored Arab youth participation in the Afghan war.
- Charlie Wilson's War is an upcoming movie about the real-life Congressman Charlie Wilson and his relentless efforts to increase CIA support for the Afghan fighters. Tom Hanks is slated to play the role of Congressman Wilson.
- In the anime Black Lagoon, mafia boss Balalaika was formerly a Russian Airborne Troops paratrooper and Soviet Army officer who served during the Afghan war.

Further reading
- The Sword and the Shield: The Mitrokhin Archive and the Secret History of the KGB, Christopher Andrew and Vasili Mitrokhin, Basic Books, 1999, ISBN 0-465-00310-9
- Kurt Lohbeck, introduction by Dan Rather, Holy War, Unholy Victory: Eyewitness to the CIA's Secret War in Afghanistan, Regnery Publishing (November, 1993), hardcover, ISBN 0-89526-499-4
- Stephane Courtois, Le livre noir du Communisme, hardcover, ISBN 0-674-07608-7
- George Crile, Charlie Wilson's War: the extraordinary story of the largest covert operation in history, Atlantic Monthly Press 2003, ISBN 0-87113-851-4
- Robert D. Kaplan, Soldiers of God: With Islamic Warriors in Afghanistan and Pakistan, ISBN 1-4000-3025-0
- Lester W.
Grau, The Soviet-Afghan War: How a Super Power Fought and Lost, ISBN 0-7006-1186-X
- John Prados, Presidents' Secret Wars, ISBN 1-56663-108-4
- Kakar, M. Hassan, Afghanistan: The Soviet Invasion and the Afghan Response, 1979-1982, Berkeley: University of California Press, 1995. (free online access courtesy of UCP)
- Borovik, Artyom, The Hidden War: A Russian Journalist's Account of the Soviet War in Afghanistan, ISBN 0-8021-3775-X
- Vladimir Rybakov, The Afghans, Infinity Publishing, 2004. ISBN 0-7414-2296-4
- Hosseini, Khaled, The Kite Runner, Riverhead Books, 2003. ISBN 1-57322-245-3
- Tom Clancy, The Cardinal of the Kremlin, G. P. Putnam's Sons, 1988
- Ken Follett, Lie Down with Lions, Pan Publishers, 1998
- Stephen Collishaw, Amber, Sceptre, 2004

External links
- The Art of War project, dedicated to the soldiers of the recent wars, set up by the veterans of the Afghan war. Has Russian and English versions
- "Afganvet" (Russian: "Афганвет") - USSR/Afghanistan war veterans community
- Casualty figures
- U.N. resolution A/RES/37/37 on the intervention in the country
- Afghanistan Country Study (details up to 1985)
- The Take-Down of Kabul: An Effective Coup de Main
- Soviet Afghan War Documentary
The French Resistance

Charles de Gaulle

Le Struthof, or Camp du Struthof, in Alsace has a particular significance for the French because it was the main Nazi concentration camp where French resistance fighters were sent after they were captured by the Germans during World War II. The camp, also known as Natzweiler-Struthof, has become a symbol of the French resistance against the evils of Fascism during the German occupation of France. Although around 40,000 French citizens were convicted of collaborating with the Nazis, there were also thousands of brave men and women who did not accept the capitulation of France and continued to fight Fascism as civilian soldiers or partisans, in defiance of both the Geneva Convention of 1929 and the Armistice signed by France and Germany after France surrendered. The leader of the French resistance was Charles de Gaulle, shown in the photo above as he broadcasts over the British Broadcasting Company (BBC). De Gaulle made his headquarters in Great Britain, and the French resistance was aided and financed by the British. In 2005, a new museum at the Natzweiler Memorial Site was dedicated to the heroes of the French resistance, whose efforts to defeat the Nazis and liberate Europe were significant. By the time that the Allies were ready to invade Europe in June 1944, there were as many as 9 major resistance networks fighting as guerrillas against the German occupation of France. An estimated 56,000 French resistance fighters were captured and sent to concentration camps; half of them did not survive. The French resistance fighters blew up bridges, derailed trains, directed the British in the bombing of German troop trains, kidnapped and killed German army officers, and ambushed German troops. They took no prisoners, but rather killed any German soldiers who surrendered to them, sometimes mutilating their bodies for good measure. The Nazis referred to them as "terrorists."
The photo below shows a Nazi poster which depicts the heroes of the French resistance as members of an Army of Crime. World War II had started when France and Great Britain both declared war on Germany after Hitler ordered the invasion of Poland on September 1, 1939. Poland was conquered by September 28, 1939 with the help of the Soviet Union, which invaded Poland from the other side on September 17, 1939. France and Great Britain had made a pact with Poland that they would provide support in case of an attack, but only if the attack was made by the German Army; they were under no obligation to declare war on the Soviet Union. The British also made a pact with France that neither country would sign a separate peace with the Germans. After the conquest of Poland, there was a period called the "phony war," or the "Sitzkrieg," when there were no further attacks by the Germans. Months later, when the war started up again, Germany invaded France on May 10, 1940, going around the Maginot Line, which the French had thought would protect them from Nazi aggression. On June 17, 1940, Marshal Henri Philippe Pétain, the new prime minister of France, asked the Germans for surrender terms, and an Armistice was signed on June 22, 1940. The French agreed to an immediate "cessation of fighting." According to the terms of the Armistice, the French were allowed to set up a puppet government at Vichy in the southern part of the country, which the Germans did not occupy. The Vichy government openly collaborated with the Germans, even agreeing to cooperate in the sending of French Jews to Nazi concentration camps. There were no German soldiers stationed in Vichy France, and many refugees, including some Jews, flocked there. In occupied France, the German soldiers were ordered by Hitler to behave like gentlemen. They were not to rape and plunder. They were to take only photographs.
Hitler himself visited Paris and had his photo taken in front of the Eiffel Tower, as shown in the photo. On Hitler's orders, the German conquerors went out of their way to be friendly. They set up food depots and soup kitchens to feed the French people until the economy could be brought back to normal. The French soon decided that collaboration with the Germans was to their advantage. The French people had been stunned by the collapse of the French Army in only a few weeks. To many, this meant the end of France as a world power. The collaborationists felt that the German war machine was invincible and the only sensible thing to do was to become allies with the Nazis, who would soon unite Europe under their domination. But for some, centuries of hatred of the Germans prevented them from accepting their defeat. Charles de Gaulle, a tank corps officer in the French Army, refused to take part in the surrender; he fled to England where, on the eve of the French capitulation, he broadcast a message to the French people over the BBC on June 18, 1940. This historic speech rallied the French people and helped to start the resistance movement. Part of his speech follows: Is the last word said? Has all hope gone? Is the defeat definitive? No. Believe me, I tell you that nothing is lost for France. This war is not limited to the unfortunate territory of our country. This war is a world war. I invite all French officers and soldiers who are in Britain or who may find themselves there, with their arms or without, to get in touch with me. Whatever happens, the flame of French resistance must not die and will not die. Although Charles de Gaulle was not well known in France, and few people had heard his broadcast, this was the beginning of the French resistance, which slowly gained momentum. At the time of the French surrender, America was not yet involved in World War II. President Franklin D. Roosevelt had no choice but to recognize Vichy France as the legitimate government.
Winston Churchill refused to acknowledge Pétain's government and recognized de Gaulle as the leader of the "Free French." On July 4, 1940, a court-martial in Toulouse sentenced de Gaulle in absentia to four years in prison. On August 2, 1940, a second court-martial sentenced him to death. Aided by the British, de Gaulle set up the Free French movement, based in London. It was particularly galling to the French that Germany had annexed the provinces of Alsace and Lorraine to the Greater German Reich after the Armistice. The Cross of Lorraine was later adopted by Charles de Gaulle as the symbol of his Free French movement. The French resistance was in direct violation of the Armistice signed by the French, which stipulated the following: The French Government will forbid French citizens to fight against Germany in the service of States with which the German Reich is still at war. French citizens who violate this provision are to be treated by German troops accordingly. Since Great Britain was the only country still at war with the German Reich, the collaboration of the French resistance with the British was a violation of the Armistice, as was the later collaboration of the partisans with American troops after the Normandy invasion. According to the Geneva Convention of 1929, the French resistance fighters were non-combatants who did not have the rights of Prisoners of War if they were captured. In the summer of 1940, British Prime Minister Winston Churchill established an intelligence organization called the Special Operations Executive (SOE). Its purpose was to wage secret war on the continent, but with the defeat of France this intelligence network was all but destroyed. The SOE was revived, and by November 1940 it was giving aid to the French resistance. At least one American participated in the French Resistance: Lt. Rene Guiraud was a spy in the American Military Intelligence organization called the OSS. After being given intensive specialized training, Lt.
Guiraud was parachuted into Nazi-occupied France, along with a radio operator. His mission was to collect intelligence, harass German military units and occupation forces, sabotage critical war material facilities, and carry on other resistance activities. Lt. Guiraud organized 1500 guerrilla fighters and developed intelligence networks. During all this, Guiraud posed as a French citizen, wearing civilian clothing. He was captured and interrogated for two months by the German Gestapo, but revealed nothing about his mission. He was then sent to the Dachau concentration camp, where he participated in the camp resistance movement along with several captured British SOE spies. Because he was an illegal combatant, wearing civilian clothing, Lt. Guiraud did not have the rights of a POW under the Geneva Convention. At first, the French resistance was not organized; it consisted of individual acts of sabotage. Ordinary French citizens cut telephone lines so that communications were interrupted, resulting in German soldiers being killed because they had not received warning of bombing raids by the British Royal Air Force. The Germans fought back by announcing that hostages would be shot if more acts of resistance were carried out. Slowly, resistance organizations began to form. Telephone workers united in a secret organization to sabotage telephone lines and intercept military messages which they would give to British spies operating in France. Postal workers organized in order to intercept important military communications. The French railroad workers formed a resistance group called the Fer Réseau or Iron Network. They diverted freight shipments to the wrong location; they caused derailments by not operating the switches properly; they destroyed stretches of railroad tracks and blew up railroad bridges. 
Women also participated as lone fighters in the resistance, as for example Madame Lauro, who poured hydrochloric acid and nitric acid on German food supplies in freight cars on the French railroads. Hundreds of the railroad workers were shot after they were caught, but Madame Lauro continued her acts of sabotage, working alone and at night; she was never captured. Another French woman, Marie-Madeleine Fourcade, became the head of the most famous resistance network of all, the Alliance Réseau; its headquarters was at Vichy, the capital of unoccupied France. This espionage network was one of the first to be organized with the help of the British. They began by supplying the Alliance Network with short wave radios, dropped by parachute into Vichy France. Millions of francs to support the Alliance Network were dropped from the air by the British or sent by couriers. The British SOE and the French resistance worked together throughout the remainder of the war to obtain vital information about the German military and their plans. The SOE would send questions for the French resistance network to find the answers to and report back the information. The Alliance Network was originally started by Georges Loustaunau-Lacau and a group of his friends. The nickname of the Alliance was Noah's Ark because Madame Fourcade gave the members of her underground network the names of animals as their code names. She took the name Hedgehog as her own code name. Madame Fourcade was eventually captured, but she escaped by squeezing through the bars on the window of her prison cell. She then joined the Maquis and worked with the British SOE spy organization in the last days before France was liberated. Sir Claude Dansey, the head of the S.I.S., requested the Alliance Réseau to go to Alsace to give General George Patton information about the German Order of Battle in that region. The Alliance was able to help Patton with some very valuable intelligence that had been obtained by the British.
Madame Fourcade survived the war, but members of her Alliance Network were captured in Alsace and executed. As the war progressed, anti-Fascists and Communists from other countries joined the British SOE as secret agents. One of the most famous was Albert Guérisse, who headed the PAT line, which helped downed British and American flyers escape from France, going through Spain and then back to England. Guérisse was captured in 1943 and subsequently sent to Natzweiler-Struthof. Another escape line, called Comet, was also infiltrated by German agents in 1943, and Dédée de Jongh was arrested by the Gestapo. She was sent to the women's concentration camp at Ravensbrück, where she survived. Guérisse was taken to Dachau when the Natzweiler camp was evacuated, and he also survived. After the arrests of Guérisse and de Jongh, the escape lines were rebuilt and became more effective than ever in saving downed British and American fliers. Eight women SOE agents were executed, four at Dachau and four at Natzweiler, for their part in the French resistance. Three other women SOE agents were shot at Ravensbrück. But for some strange reason, Guérisse and de Jongh were not executed, despite the important part they had played in rescuing fliers so that they could live to bomb German cities again in what the Nazis called "terror bombing." Nor was Madame Fourcade, a very high-ranking resistance fighter, executed by the Gestapo. Instead, they tried to convince her to become a double agent, but she refused. Don Lawson, the author of a book entitled "The French Resistance," wrote the following with regard to the downed fliers who were saved by the resistance fighters: How many Allied military escapees and evaders were actually smuggled out of France and into Spain will never really be known. Records during the war were poorly kept and reconstruction of them has been unsatisfactory.
Combined official American and British sources indicate there were roughly 3,000 evading American fliers and several hundred escaping POWs who were processed through Spain. These same sources indicate there were roughly 2,500 evading British fliers and about 1,000 escaping POWs. (American and British escapees and evaders in all of the theaters of war totaled some 35,000, which amounts to several military divisions.) Operating these escape and evasion lines was not, of course, without cost in human lives. Here, too, records are incomplete and unsatisfactory, since many of the resistants simply disappeared without a trace. Estimates of losses vary from the official five hundred to as many as several thousand. Historians Foot and Langley estimate that for every escapee who was safely returned to England a line operator lost his or her life. According to the terms of the Armistice signed on June 22, 1940, the 1.5 million captured French soldiers, who were prisoners of war, were to be held in captivity until the end of the war. The French agreed to this because they thought that the British would surrender in a few weeks; instead, the British rejected all peace offers by the Germans, and the French POWs remained in prison for five long years. Many of them escaped and joined the Maquis, one of the most notorious resistance groups, which distinguished itself by committing atrocities against German soldiers. In 1943, the Germans started conscripting Frenchmen as workers in German factories. Many refused to go and escaped into the forests, where they joined the Maquis. In the provinces of Alsace and Lorraine, which were annexed into the Greater German Reich, former French citizens were conscripted into the German Army. Many Alsatians went into hiding and escaped into Vichy France, where they too became part of the French resistance, fighting with the Maquis.
The Maquis was independent from the other resistance groups; they operated as guerrilla fighters in rural areas and especially in mountainous regions. The name Maquis comes from a word that means bushes that grow along country roads. The Maquis literally hid in the bushes, darting out to kidnap German Army officers and execute them in a barbarous fashion. One of the most well-known victims of the Maquis was Major Helmut Kämpfe, the commander of Der Führer Battalion 3, who was kidnapped on 9 June 1944 and killed the next day. The Maquisards, as the fighters in the Maquis were called, were politically diverse. Some of them, like the "Red Spaniards" who were former soldiers in the Spanish Civil War, were Communists, but in general, the Communists had their own resistance organizations, such as the FTP. This was a resistance group, formed by the Communist party, called the Francs-Tireurs et Partisans. The Communist party also formed the Front National which fought in the resistance. After the Allied invasion at Normandy on June 6, 1944, the Maquis became particularly active. In preparation for the invasion, the British had dropped a large number of weapons and millions of francs by parachute into rural areas. The weapons were stored in farm houses and villages, ready for the resistance fighters who would play an important part in the liberation of Europe. As a result, the Maquis was very effective in preventing German troops from reaching the Normandy area to fight the invaders. The reprisals against the Maquis by German troops became more and more vicious. Innocent French civilians suffered, as for example in the village of Oradour-sur-Glane which was destroyed by Waffen-SS soldiers on June 10, 1944. Henri Rosencher was a Jewish medical student and a Communist member of the Maquis. He survived the war and wrote a book entitled Le Sel, la cendre, la flamme (Salt, Ash, Fire) in which he described his work as an explosives expert with the French resistance. 
He was captured and sent to Natzweiler-Struthof, then later to Dachau, where he was liberated by American troops on April 29, 1945. The following is a quote from Rosencher's book, which describes a typical Maquis resistance action that resulted in the death by suffocation of 500 German Wehrmacht soldiers: On the morning of the 17th of June, I arrived in the area of Lus-la-Croix-Haute, the "maquis" [zone of resistance] under the command of Commander Terrasson. They were waiting for me and took me off by car. The job at hand was mining a tunnel through which the Germans were expected to pass by train. The Rail resistance network had provided all the details. My only role was as advisor on explosives. TNT (Trinitrotoluene - a very powerful explosive) and plastic charges were going to collapse the mountain, sealing off the tunnel at both ends and its air shaft. When I got there, all the ground work was done. I only had to specify how much of the explosive was necessary, and where to put it. I checked the bickfords, primers, detonators, and crayons de mise à feu [fuse igniters]. We stationed our three teams and made sure that they could communicate with each other. I settled into the bushes with the team for the tunnel's entrance. And we waited. Toward three p.m., we could hear the train coming. At the front came a platform car, with nothing on it, to be sacrificed to any mines that might be on the tracks, then a car with tools for repairs, and then an armored fortress car. Then came the cars over-stuffed with men in verdigris uniforms, and another armored car. The train entered the tunnel and after it had fully disappeared into it, we waited another minute before setting off the charge. Boulders collapsed and cascaded in a thunderous burst; a huge mass completely covered the entrance. Right after that, we heard one, then two huge explosions. The train has been taken prisoner.
The 500 "feldgraus" inside weren't about to leave, and the railway was blocked for a long, long time. Another Jewish hero of the French resistance is Andre Scheinmann, who emigrated to the United States in 1953. Together with Diana Mara Henry, he has written a book entitled "I Am Andre: World War II Memoirs of a Spy in France." Andre is a German Jew whose family escaped to France in 1938 after the Nazi pogrom known as Kristallnacht. His parents were murdered at Auschwitz during the German occupation of France. Andre had been a soldier in the French Army and was a Prisoner of War when France surrendered, but he escaped and joined the French underground resistance movement. Pretending to be a collaborator, he became an interpreter for the Germans on the French national railroad. The Nazis never suspected that he was Jewish; he was given the job of overseeing the rail system in the Brittany region of France. As a member of the French underground, second in command of a network of 300 spies, Scheinmann's job was intelligence, but he also engaged in sabotage. His resistance network gathered information on German troop movements and reported weekly to the British. The information that they supplied was invaluable to the British Air Force in bombing German troop trains. Scheinmann and his compatriots also blew up trains, killing contingents of German soldiers. Scheinmann was eventually arrested by the German Gestapo; he endured 11 months in a Paris prison until he was sent in July 1943 to Natzweiler-Struthof as a Nacht und Nebel prisoner. He disappeared into the Night and Fog of the Nazi concentration camp system, where he was not allowed to communicate with anyone on the outside.
At Natzweiler, he was given a cushy job working in the weaving workshop, and because of his ability to speak German, he was made a Kapo with the authority to supervise other prisoners. Along with many other well-known French resistance fighters, he was evacuated from Natzweiler to Dachau and released by the American liberators. He joined the FFI and remained a soldier in the French military even after the war ended. As a hero of the resistance, he was awarded the Legion of Honor, the Medal of Resistance and the Medal of the Camps by the French government. The FFI, or Forces Françaises de l'Intérieur, also known as the "Fee Fee," was also very active after the invasion at Normandy. The British increased their arms drops after the invasion, and a vast arsenal of weapons was stored on farms and in villages, ready to be handed out to the resistance fighters. Before Hitler broke his non-aggression pact with the Soviet Union, signed in August 1939, the Communists in France had refused to join the resistance movement. When Germany invaded the Soviet Union in June 1941, the French Communists then began to organize. The Communist resistance fighters were not loyal to de Gaulle or to France; their loyalty was to international Communism and the Soviet Union, which was fighting on the side of the Allies, against Fascism. The objective of the Communist guerrilla fighters was the defeat of Fascism and the establishment of a Communist government in France, which would have direct allegiance to the Soviet Union. Because of this, they preferred to fight independently of the other resistance groups. Their specialty was capturing German army officers and executing them, which brought swift reprisals from the Germans. In October 1941, the Germans shot 50 hostages in reprisal for the assassination of a German field commander at Nantes. This did not stop the assassinations; in 1943, the Communists claimed that they were killing 500 to 600 German soldiers per month.
Following the German invasion of the Soviet Union in June 1941, a right-wing collaborationist military group, called the Service d'Ordre Legionnaire, was organized by Joseph Darnand in July 1941 in support of Marshal Henri-Philippe Pétain and his Vichy government. Darnand volunteered to help the Germans and the Vichy officials in rounding up the Jews in France and in fighting against the French resistance. In January 1943, the Service d'Ordre Legionnaire was reorganized into the Milice Française, which became the secret police of the Vichy government, working in close association with the German Gestapo in France. Darnand was accepted into the Waffen-SS and given the rank of Sturmbannführer. Like all members of the SS, Darnand took an oath of loyalty to Adolf Hitler. Another famous Milice leader was Paul Touvier. By 1944, the Milice had expanded from an initial 5,000 members to a special police force of 35,000, which greatly assisted the Gestapo in fighting against the resistance. Without the help of the French collaborationists, the job of the Gestapo would have been much more difficult. Sometimes the Milice made mistakes, giving misleading information, as when two Milice policemen told the Germans that a German army officer, who had been kidnapped by the Maquis, was being held in the village of Oradour-sur-Glane and was going to be ceremoniously burned alive; this wrong information led to the destruction of Oradour-sur-Glane and the murder of 642 innocent people, one of the worst atrocities committed by the Waffen-SS in World War II. The greatest hero of the French Resistance was a man named Jean Moulin, shown in the photo above. His great contribution was in convincing Charles de Gaulle that all the independent resistance groups and the Secret Army should be united into one central organization.
De Gaulle had not planned to use the French resistance to liberate France, but Moulin advised him that the Allies would have a much better chance of defeating the Germans with the help of a united resistance movement. Moulin was authorized by de Gaulle to establish the National Resistance Council (CNR). He contacted the leaders of the various resistance networks and got them to agree with his plan by promising them money and supplies from the British. Then he suggested that the resistance should form a military organization that would fight the Germans in open combat. He was preparing for the day when the Allies would invade Europe and a French Army would be ready to join them. De Gaulle hoped to use this army to take control of France and become its President after the country was liberated. Most of the resistance networks were against this idea because they had been successfully operating as guerrilla fighters and did not want to become part of an Army. The Communists liked the idea of a resistance Army, but they would not swear loyalty to de Gaulle, since they had their own plans to set up a Communist government in France. In the spring of 1943, the first meeting of the National Resistance Council took place in Paris, attended by the leaders of 16 different resistance organizations. The meeting lasted for several days, during which time Moulin was finally able to persuade the rival factions to unite and to swear an oath of loyalty to Charles de Gaulle as their leader. Several weeks later, on June 21, 1943, Moulin was arrested by the German Gestapo. According to Don Lawson, author of a book called "The French Resistance," Moulin had been betrayed by Jean Moulton, an agent with "Combat," one of the oldest of the resistance groups. Moulton had been captured by the Gestapo and, to avoid being shot or tortured, agreed to give the names and locations of resistance members. Hundreds of the resistance fighters were then arrested by the infamous Gestapo chief, Klaus Barbie.
Many of them wound up at the Natzweiler-Struthof concentration camp. Two of the biggest problems for the resistance fighters were transportation and food, which were also problems for the French people in general. Food was rationed and the Germans had confiscated most of the cars. The British supplied weapons and money, but there were no food drops into France. The places where food was available, mainly in rural areas, became thriving black market centers. The Maquis traveled mostly by bicycle, although some had managed to hide a few old cars. Gasoline was not available for most French civilians; only doctors and others who needed to use a car were allowed gasoline. When the Allied invasion came, the resistance fighters were cautioned to wear armbands with the Cross of Lorraine, so that they could be easily identified by the Allied soldiers. French women who were not part of the resistance were asked to volunteer to help sew the armbands. After the successful Normandy invasion, General Dwight D. Eisenhower made the decision not to take Paris immediately. De Gaulle knew that Paris was a Communist stronghold, and he knew that if Paris were liberated from within by the Communists, they would probably take control of the French government. To prevent the Communists from taking control of the capital city of Paris, de Gaulle decided that Paris must be liberated by forces loyal to him. According to Don Lawson in his book "The French Resistance," de Gaulle had taken steps, before the Normandy invasion, to strengthen his plan to become the new President of France after the liberation. He insisted that the SOE increase its weapons drops, but stipulated that these weapons should be mainly parachuted to the Maquis in the outlying areas, with only a few going to the 25,000 Communist resistance fighters holed up in Paris.
Don Lawson wrote the following, regarding the decision to drop weapons to outlying areas: "That this procedure was justified by motivations other than de Gaulle's personal ones was clearly indicated by the fact that in the peninsula of Brittany alone fewer than 100,000 FFI kept several German divisions pinned down during the Normandy campaign. In the whole of France, it was estimated by General Eisenhower, the FFI's efforts in preventing German troops from attacking the Allied invasion forces were the equivalent of some fifteen Allied divisions. As always, these resistance efforts were not without cost. The Germans were now desperate, and their reprisals were even more savage than before. In March of 1944 an entire Maquis band numbering more than a thousand resistants was wiped out in the Haute Savoie region. In July 1944 another Maquis force of similar size was destroyed." In the days immediately following the Normandy invasion, the FFI, or the French Forces of the Interior, became a French Army under the Supreme Headquarters Allied Expeditionary Forces (SHAEF) commanded by General Eisenhower, who informed the Germans that the French resistance fighters were to be regarded as legal combatants. Eisenhower authorized a French combat division to be commanded by General Jacques-Philippe Leclerc. This division was called the 2nd Armored Division, but it was more commonly known as Division Leclerc. De Gaulle contacted the Communist resistance in Paris and unilaterally informed them that Division Leclerc would be the liberators of Paris. Meanwhile, Hitler was holed up in his Berlin bunker and had seemingly gone mad; he ordered the destruction of Paris rather than surrender it to the Allies. His generals ignored this order and Paris was saved. Eisenhower had finally agreed that the 2nd Armored Division should lead the liberation of Paris, with the US Fourth Infantry Division providing backup.
Paris was liberated on August 25, 1944; Charles de Gaulle rode into Paris in triumph, his arms raised and spread wide in a V-for-victory sign.
Alcohol and HIV/AIDS: Intertwining Stories
Human immunodeficiency virus (HIV)—the pathogen responsible for the current pandemic of acquired immune deficiency syndrome (AIDS)—targets the body’s immune system. HIV infection puts a person at risk for a multitude of diseases that someone with a healthy immune system generally would fight off. When HIV was recognized in the 1980s, testing positive for HIV infection was, in fact, a death sentence. Now, however, the availability of anti-HIV medications has made living with the virus a reality. Patients who stick to a careful medication regimen (i.e., taking several medicines at specific times throughout the day) may live 20 to 40 years with HIV and do not always die of AIDS-related illnesses. People with HIV are now living longer and healthier lives. Nevertheless, many challenges remain in preventing both infection with the virus and progression of the disease. One of the many factors that thwart efforts to prevent the spread of the infection and the treatment of infected patients is the use and abuse of alcohol by those who are at risk for infection or who already are infected. Scientists are gaining a better understanding of the complex relationship between alcohol consumption and HIV infection. Abusing alcohol or other drugs can impair judgment, leading a person to engage in risky sexual behaviors. People who drink also tend to delay getting tested for HIV and, if they do test positive, tend to postpone seeking treatment. When receiving treatment, they may have difficulty following the complex medication regimen. All of these factors increase the likelihood that an infected person will infect others or will go on to develop AIDS.1 Alcohol, then, occupies a prominent place in the HIV/AIDS landscape. This Alcohol Alert outlines the role that alcohol has in HIV/AIDS prevention, transmission, and disease progression and touches on recent efforts to reduce these strong, yet preventable, effects.
Defining the Population
Each year in the United States, between 55,000 and 60,000 people become infected with HIV, for a total of more than 1.1 million now infected. The population that once was primarily made up of homosexual White men is now composed increasingly of people of color, women, and young people. Among new HIV cases, the proportion of women rose from 7 percent in 1985 to 25 percent in 2000. In that group, African American and Hispanic women were disproportionately represented compared with White women. Also, HIV/AIDS is now a leading cause of death among women in the United States, especially those of childbearing age (i.e., between 25 and 44 years).1 As more young women are becoming infected, there is growing concern that the virus will be transmitted to their children, either during pregnancy or after birth. One of the main reasons for this shift in the HIV population is that heterosexual sex is now a primary route for HIV transmission. Alcohol use is one of the factors that increases the risk of HIV transmission among heterosexuals. Particularly among women, a strong association has been seen between alcohol and other drug abuse, infection with HIV, and progression to AIDS.2 Although additional studies are needed to further define alcohol use patterns among infected and at-risk people, it is clear that alcohol use is closely intertwined with the spread of HIV.
Conceptual Model for Living With HIV Infection
This figure is based on findings from the Veterans Aging Cohort Study, a large (approximately 7,000 participants) and lengthy (currently 7 to 8 years) study exploring the effects of alcohol on HIV outcomes within the broader context of aging. The study has helped to define a VACS Risk Index to identify those individuals most at risk. The researchers hope to use the VACS Risk Index to design better interventions for helping people with HIV to live longer and healthier lives.
Alcohol and HIV: A Complex Relationship
People infected with HIV are nearly twice as likely to use alcohol as people in the general population. Moreover, up to 50 percent of adults with HIV infection have a history of alcohol problems.3,4 Understanding how alcohol influences HIV is vital, both in treating those infected with HIV and in stopping the spread of this disease. The link between alcohol use and HIV is complex. Research shows that alcohol has numerous effects, both direct and indirect, on how this virus develops and how quickly it causes disease. Alcohol can increase how fast the virus grows, leading to higher amounts of virus (i.e., the viral load) in the body. Those high concentrations, in turn, can increase the spread of the disease. In one study, women receiving antiretroviral therapy (ART) who drank moderately or heavily were more likely to have higher levels of HIV, making it easier for them to spread the virus to others.2
Framework for HIV/AIDS Risk
The socioecological framework for HIV/AIDS risk shows the factors that affect risk on a number of different levels. Risks range from “broken windows” (or the number of abandoned or vacant buildings in a neighborhood) to the individual’s use of alcohol and his or her sexual behavior.
ART itself can be problematic in people who drink. A major cause of illness and death among HIV-infected patients that has emerged since the advent of ART is liver disease. Antiretroviral medications not only are processed in the liver, they also have toxic effects on the organ, and some drug combinations can lead to severe toxicity in up to 30 percent of patients who use them. These patients are left with the grim choice of continuing ART to prevent the progression of the virus to AIDS—thereby risking further liver damage—or stopping ART to prevent liver damage and progressing to AIDS. Further, a large proportion of people with HIV also are infected with hepatitis C (HCV).
Alcohol abuse and dependence significantly increase the risk of liver damage both in people with HIV alone and in those with HCV co-infection.5 Research also suggests that alcohol may interfere directly with ART medications used for HIV, essentially blocking their effectiveness.6 Moreover, patients who drink are nine times more likely to fail to comply with their medication regimens compared with sober patients.7,8 When HIV-infected drinkers fail to take their medications or do not take them correctly, it can lead to a higher viral load and an increasing likelihood that the virus will become resistant to the therapy. ART, alcohol consumption, and HIV infection can be harmful in other ways as well. HIV patients typically experience declines in organ function earlier in life than do uninfected people. And because people with HIV tend to drink heavily well into their middle and older years, these organs are even more at risk for injury. For example, both HIV infection and certain types of ART medications increase a person’s risk for heart disease, because they change the balance of different fats—such as cholesterols—in blood, induce inflammation, and affect the blood-clotting process. Both excessive alcohol use and infection with hepatitis C virus further enhance the risk. Also, the medicines used to treat cholesterol problems can be particularly harmful when taken by patients with liver damage from alcohol abuse or hepatitis C virus. Heavy alcohol consumption (more than six drinks per day) has been linked to heart disease in HIV-infected people; thus, stopping or cutting down on their drinking may help to reduce the risk of heart disease.9 Another organ impacted by alcohol use and by HIV infection is the lung. Patients who drink or who have HIV infection are more likely to suffer from pneumonia and to have chronic conditions such as emphysema. Scientists do not yet know if alcohol and HIV together raise the risk for injury to the lung.
However, studies using animals suggest that this combination does indeed increase the risk for problems. Lung infections remain a major cause of illness and death in those with HIV, and chronic alcohol consumption has been found to increase the rate at which viruses infect lungs and aid in the emergence of opportunistic infections (i.e., infections that strike mainly people whose immune systems are weakened by a condition like HIV infection).10,11 Advances in imaging techniques have revealed another organ at risk for HIV and alcohol injury—the brain. In studies comparing patients with alcoholism, HIV infection, or both, people with alcoholism had more changes in brain structure and abnormalities in brain tissues than those with HIV alone. Patients with HIV infection and alcoholism were especially likely to have difficulty remembering and to experience problems with coordination and attention. Those with alcoholism whose HIV had progressed to AIDS had the greatest changes in brain structure.12
Preventing the Spread of HIV
In addition to these direct effects, alcohol also works indirectly to raise the risk for HIV and for the problems associated with this virus. For example, alcohol consumption often occurs in bars and clubs where people meet potential sex partners. These establishments create networks of at-risk people through which HIV can spread rapidly. In addition, alcohol abusers’ high-risk sexual behaviors make them more likely to be infected with other sexually transmitted diseases; those, in turn, increase the susceptibility to HIV infection. They also are more likely to abuse illegal substances, which can involve other risky behaviors, such as needle sharing.6 Currently, the primary HIV prevention efforts seek to change people’s risky sexual behaviors and to promote the use of barriers, such as topical microbicides and condoms, which kill the virus or reduce the spread of disease during sexual contact.
Unfortunately, alcohol use can interfere with these efforts, impairing people’s judgment and making them less likely to use protection during sex. Although people who abuse alcohol and other drugs can be a difficult population to reach, research shows that individuals in treatment programs are less likely to engage in risky sexual behavior13 or to inject drugs or share needles14—behaviors that greatly increase the spread of the infection. Thus, alcohol treatment itself can help prevent risky behaviors. Also, some research suggests that looking at the places where alcohol consumption and risky sexual behaviors take place (such as bars and clubs) can help in the development of social policy tools and successful interventions,15 including targeting such environments with prevention messages16,17 and providing HIV testing, condoms, and sexual health services at those establishments.18
The Role of Alcohol in HIV/AIDS Risk
Alcohol influences the risk for HIV/AIDS on a variety of levels, from the neighborhood (the number of bars and clubs) to the individual (his or her use of alcohol and other drugs as well as his or her sexual behavior). Policy, both formal and informal, can help to reduce these risks. For example, laws can dictate how many liquor stores can do business in a neighborhood.
Treatment—Targeting HIV and Alcohol
As noted previously, HIV-infected individuals who drink, even those who consume only low levels of alcohol, are less likely to comply with a strict ART regimen, which may increase the risk of AIDS.19 Drinking fewer than five standard drinks per day, one or more times a week, has been found to reduce survival among patients with HIV by more than 1 year. Binge drinking (defined as five or more drinks per day) produces even more pronounced effects.
Binge drinking twice a week was found to reduce survival rates by 4 years, and daily binge drinking reduced survival by 6.4 years, a 40-percent decrease in life expectancy.20 When ART fails, the patient progresses to AIDS. The significance of this problem, along with alcohol’s other negative effects on the success of ART, has led some scientists to suggest that one way to improve the care of HIV patients is to provide screening for alcohol use disorders on a regular basis. Those who screen positive could then receive treatment aimed at reducing alcohol consumption.19 Though it is clear that substance abuse treatment among HIV-infected patients can contribute greatly to their care, little research has been done in this area. The use of behavioral interventions in HIV-infected people who have a history of alcohol problems has produced only limited evidence that such interventions work.20 Some clinical trials have produced promising results, using interventions that combine one-on-one counseling with various forms of peer education, support group sessions, and telephone-based interactive methods to guide participants through stages to change their drinking behavior. In those studies, both drinking levels and risky sexual behavior were reduced in some patients.22 Interestingly, a review of studies aimed at reducing drinking in HIV-infected people found that no trials have examined the success of the four medications now available to treat alcohol dependence (i.e., disulfiram, naltrexone, acamprosate, and topiramate) in HIV patients. Significant barriers exist in addressing alcohol problems among HIV-infected patients, including the additional commitments of time, money, and effort involved in treating alcoholism. Drinkers who do not suffer from severe alcohol problems may not think treatment is worthwhile or may fear the stigma associated with alcoholism treatment.
Those patients may be more likely to receive treatment if the interventions are simple, require little effort, and take place in settings in which the patients already are receiving testing or treatment for HIV.1 Along these lines, studies using telephone-based interactive interventions show that this technology also may help to boost the effectiveness of treatment for alcohol problems. Clearly, questions remain concerning the treatment of alcoholism in HIV-infected patients. For example, is it better to treat a patient for alcoholism before starting ART therapy or concurrently? If ART regimens were simpler, would alcohol use have a reduced impact on patients’ ability to adhere to the treatments? NIAAA and other Institutes at the National Institutes of Health are sponsoring the Veterans Aging Cohort Study (VACS), which looks at the effects of alcohol on HIV patients as they age.23 One innovation in this study is the VACS Risk Index, which uses indicators of liver and kidney injury, hepatitis, immune suppression, and illnesses—such as certain forms of pneumonia—to predict alcohol’s impact on illness and death. Because it relies on biological markers, the index provides an accurate measure of how much alcohol the patients have consumed. VACS study authors hope to use the index to answer these questions and to identify behavioral and medical treatments that can help decrease patients’ alcohol use and reduce their risk of illness and death. Epidemiologic data show that HIV’s spread has not slowed in recent years and may be on the rise in certain populations.24 Alcohol problems promote the spread of HIV and increase illness and death in people with HIV. Decreasing drinking and the behaviors it encourages is one of the most promising ways to reduce these problems. Understanding the complex interplay between alcohol use and HIV will lead to better care for those already infected.
Such knowledge also will play a vital role in developing behavioral, medical, and social policy tools for reducing the spread of the disease.
1. Bryant, K.J.; Nelson, S.; Braithwaite, R.S.; and Roach, D. Integrating HIV/AIDS and alcohol research. Alcohol Research & Health 33(3):167–178, 2010.
2. NIAID. Women’s Health in the United States: Research on Health Issues Affecting Women. NIH Pub. No. 04–4697. Bethesda, MD: NIAID, 2004.
3. Lefevre, F.; O’Leary, B.; Moran, M.; et al. Alcohol consumption among HIV-infected patients. Journal of General Internal Medicine 10(8):458–460, 1995. PMID: 7472704
4. Samet, J.H.; Phillips, S.J.; Horton, N.J.; et al. Detecting alcohol problems in HIV-infected patients: Use of the CAGE questionnaire. AIDS Research and Human Retroviruses 20(2):151–155, 2004. PMID: 15018702
5. Barve, S.; Kapoor, R.; Moghe, A.; Ramirez, J.A.; Easton, J.W.; Gobejishvili, L.; Joshi-Barve, S.; and McClain, C.J. Focus on the liver: Alcohol use, highly active antiretroviral therapy, and liver disease in HIV-infected patients. Alcohol Research & Health 33(3):229–236, 2010.
6. Pandrea, I.; Happel, K.I.; Amedee, A.M.; Bagby, G.J.; and Nelson, S. Alcohol’s role in HIV transmission and disease progression. Alcohol Research & Health 33(3):203–218, 2010.
7. Palepu, A.; Tyndall, M.W.; Li, K.; et al. Alcohol use and incarceration adversely affect HIV-1 RNA suppression among injection drug users starting antiretroviral therapy. Journal of Urban Health 80(4):667–675, 2003.
8. Parsons, J.T.; Rosof, E.; and Mustanski, B. The temporal relationship between alcohol consumption and HIV-medication adherence: A multilevel model of direct and moderating effects. Health Psychology 27(5):628–637, 2008. PMID: 18823189
9. Freiburg, M.S.; and Kraemer, K.L. Focus on the heart: Alcohol consumption, HIV infection, and cardiovascular disease. Alcohol Research & Health 33(3):237–246, 2010.
10. Bagby, G.J.; Stoltz, D.A.; Zhang, P.; et al.
The effect of chronic binge ethanol consumption on the primary stage of SIV infection in rhesus macaques. Alcoholism: Clinical and Experimental Research 27:495–502, 2003. PMID: 12658116
11. Quintero, D.; and Guidot, D.M. Focus on the lung. Alcohol Research & Health 33(3):219–228, 2010.
12. Rosenbloom, M.J.; Sullivan, E.V.; and Pfefferbaum, A. Focus on the brain: HIV infection and alcoholism: Comorbidity effects on brain structure and function. Alcohol Research & Health 33(3):247–257, 2010.
13. Needle, R.H.; Coyle, S.L.; Norman, J.; et al. HIV prevention with drug-using populations: Current status and future prospects: Introduction and overview. Public Health Reports 113(Suppl 1):4–18, 1998. PMID: 9722806
14. Fuller, C.M.; Ford, C.; and Rudolph, A. Injection drug use and HIV: Past and future considerations for HIV prevention and interventions. In: Mayer, K., and Pizer, H.F., Eds. HIV Prevention: A Comprehensive Approach. London: Elsevier, 2009, pp. 305–339.
15. Scribner, R.; Theall, K.P.; Simonsen, N.; and Robinson, W. HIV risk and the alcohol environment: Advancing an ecological epidemiology for HIV/AIDS. Alcohol Research & Health 33(3):179–183, 2010.
16. Kelly, J.A.; St. Lawrence, J.S.; Stevenson, L.Y.; et al. Community AIDS/HIV risk reduction: The effects of endorsements by popular people in three cities. American Journal of Public Health 82:1483–1489, 1992. PMID: 1443297
17. Kelly, J.A.; Murphy, D.A.; Sikkema, K.J.; et al. Randomized, controlled community-level HIV-prevention intervention for sexual-risk behaviour among homosexual men in US cities. Lancet 350:1500–1505, 1997. PMID: 9388397
18. Kalichman, S.C. Social and structural HIV prevention in alcohol-serving establishments: Review of international interventions across populations. Alcohol Research & Health 33(3):184–194, 2010.
19. Braithwaite, R.S.; and Bryant, K. Influence of alcohol consumption on adherence to and toxicity of antiretroviral therapy and survival. Alcohol Research & Health 33(3):280–287, 2010.
20. Braithwaite, R.S.; McGinnis, K.A.; Conigliaro, J.; et al. A temporal and dose–response association between alcohol consumption and medication adherence among veterans in care. Alcoholism: Clinical and Experimental Research 29(7):1190–1197, 2005. PMID: 16046874
21. Carey, M.P.; Senn, T.E.; Vanable, P.A.; et al. Brief and intensive behavioral interventions to promote sexual risk reduction among STD clinic patients: Results from a randomized controlled trial. AIDS and Behavior 14(3):504–517, 2010. PMID: 19590947
22. Samet, J.H.; and Walley, A.Y. Interventions targeting HIV-infected risky drinkers: Drops in the bottle. Alcohol Research & Health 33(3):267–279, 2010.
23. Justice, A.; Sullivan, L.; and Fiellin, D.; et al. HIV/AIDS, comorbidity, and alcohol: Can we make a difference? Alcohol Research & Health 33(3):258–266, 2010.
24. Hall, H.I.; Geduld, J.; Boulos, D.; et al. Epidemiology of HIV in the United States and Canada: Current status and ongoing challenges. Journal of Acquired Immune Deficiency Syndromes 51(Suppl. 1):S13–S20, 2009. PMID: 19384096
25. Mayer, K.H.; Skeer, M.; and Mimiaga, M.J. Biomedical approaches to HIV prevention. Alcohol Research & Health 33(3):195–202, 2010.
Source material for this Alcohol Alert originally appeared in Alcohol Research & Health, 2010, Volume 33, Number 3.
- Alcohol Research & Health, 33(3) describes the complex relationship between alcohol consumption and HIV/AIDS. Articles examine the ways in which alcohol influences the risk for infection by HIV, transmission of the virus, and progression to AIDS. Other articles address alcohol’s role in the prevention and treatment of HIV/AIDS. The medical aspects of HIV/AIDS and alcohol use also are featured, including the effects on the brain, immune system, and other body systems.
- For more information on the latest advances in alcohol research, visit NIAAA’s Web site, www.niaaa.nih.gov. Full text of this publication is available on NIAAA’s World Wide Web site at www.niaaa.nih.gov.
All material contained in the Alcohol Alert is in the public domain and may be used or reproduced without permission from NIAAA. Citation of the source is appreciated. Copies of the Alcohol Alert are available free of charge from the National Institute on Alcohol Abuse and Alcoholism Publications Distribution Center, P.O. Box 10686, Rockville, MD 20849–0686.
The U.K.'s "Daily Mail" reported on 16 March 2012 that Russian scientists and engineers on the Earth's southern polar continent of Antarctica have successfully drilled through to the submerged fresh-water lake, Lake Vostok, and removed samples of the water for scientific study of any potential microbial life that may be present in the lake, which standard geography maintains has been submerged and sealed off for potentially millions of years. The study could provide insights into whether life could survive in such an icy submerged environment, similar to conditions on Mars: "Triumph! After two decades of drilling in most inhospitable place on Earth, Russian scientists return home with a barrelful of water from an 'alien' lake untouched for 20 million years". But that may not be the real story. Note how the article ends: "Earlier this week state-run news agency in Russia claimed that an extraordinary cache of Hitler's archives may be buried in a secret Nazi Ice Bunker near the spot where yesterday's breakthrough was made. "'It is thought that towards the end of the Second World War, the Nazis moved to the South Pole and started constructing a base at Lake Vostok,' claimed "RIA Novosti", the Russian state news agency. "It cited Admiral Karl Dönitz in 1943 saying 'Germany's submarine fleet is proud that it created an unassailable fortress for the Führer on the other end of the world'... in Antarctica." Adm. Dönitz is said to have made this statement at the Nuremberg Tribunal: "The German submarine fleet is proud of having built for the Führer, in another part of the world, a Shangri-La on land, an impregnable fortress—an invulnerable fortress, a paradise-like oasis in the middle of eternal ice". The Israeli writer and former Mossad agent Michael Bar-Zohar in his book "The Avengers" widely publicized this quote, writing: "In March 1945 a detailed report was circulated in the U.S.
State Department, which read: 'The Nazi regime has exact plans for the continuation of their plans and doctrine after the war. Some of these plans have already been put into effect'. After the war, however, Dönitz denied ever having made the statement in the first place. "According to German naval archives, months after the Nazis surrendered to the Allies in April 1945, a U-530 submarine arrived at the South Pole from the Port of Kiel. "The crew are rumoured to have constructed a still undiscovered ice cave and supposedly stored several boxes of relics from the Third Reich, including Hitler's secret files. "A later claim was that a U-977 submarine delivered remains of Hitler and Eva Braun to Antarctica in the hope they could be cloned from their DNA. The submariners then went to Argentina to surrender, it was claimed". The U-530 and U-977 did indeed show up in Argentina some months after the end of the war in Europe to surrender to the authorities of Argentina, which, of course, had entered the war on the Allied side in March of 1945, when Generalissimo Juan Peron declared war on his best friends, the Nazis [it will be recalled that a young Colonel Peron accompanied the Nazi delegation, including Hitler himself, when Hitler toured Paris after the Fall of France in June of 1940]. The rumor that a U-Boat secretly ferried Hitler and his wife out of Germany is an old one. A "Time Magazine" story from 23 July 1945 relays the story of U-530, which surrendered to authorities in Mar del Plata, Argentina, some two months after Germany's surrender. The story notes that an Argentine reporter cited a police report describing a submarine surfacing off Argentina's coast and dropping off two passengers, "a high-ranking officer and a civilian". The "Time" reporter speculated that the couple "might have been Adolf Hitler and his wife, Eva Braun, in man's dress".
Antarctic Haven Reported
The New York Times, 18 July 1945
BUENOS AIRES, 17 July [Reuter]—The startling theory that Adolf Hitler and Eva Braun may have landed in the Antarctic from the U-530 is advanced by the Buenos Aires newspaper "Critica" today. The newspaper mentions as the probable place of debarkation "Queen Maud Land" where "a new Berchtesgaden is likely to have been built" during a German Antarctic expedition in 1938–39. The U-Boat, "Critica" added, probably formed part of a convoy of submarines that went from Germany to the Antarctic.
There's no evidence that U-530 ever visited Antarctica, although neither the captain nor his crew explained exactly what they had been doing for the previous two months. A mystery attaches to the final voyage of U-530. When U-530 was examined by Argentine surveyors on 10 July 1945, this is what confronted them: U-530 looked as though she had survived some dreadful maritime calamity. The hull was devoid of paintwork and very rusted, the deck and structures had been damaged by the use of a ferocious corrosive cleaning material, and the upper casing appeared to have been the seat of a great fire. The turret was split apart, the interior of the boat was mouldy, and the Diesels had been damaged by sabotage. Neither the US nor the Argentine declassified documents provide any explanation for all the damage except for the Diesels. A great quantity of material had been ditched. By Wermuth's own admission under interrogation, the following had been thrown overboard:
- the war diary and other secret books
- five unused torpedoes plus the gyro and warhead of a sixth in which the battery had exploded and jammed in a tube
- the torpedo aiming equipment
- all ammunition for the 20mm and 37mm flak guns
- parts of the 37mm flak gun
- the dynamite scuttling charges
- manometer gauges
- 3 Metox anti-radars
- 1 Hohentwiel radar and antenna
Most of the crew, including the commander, lacked a Soldbuch and other identity documents.
Some papers must have been recovered aboard U-530, because they are mentioned in paragraph [J] of the Naval Attaché's report, but this section does not appear to have been declassified to date. In addition, although well stocked with provisions, the crew was starving and suffering from scurvy. This would indicate that some substance aboard U-530 made food inedible and/or destroyed Vitamin C in the food or the human body. Although the boat had been cleaned and aired at Mar del Plata for three days, there remained a vile and disgusting stench in its interior. There appears to have been an unexplained substitution of commander for a brief period. The Real Wermuth likely got off the boat with the war diary and other secret documents to report on the voyage to the Nazi espionage network in Argentina. The False Wermuth substituted for him until the Real Wermuth returned on 12 July with the permission of the Argentine Navy. According to many crew members interviewed by the Argentine press on 10 July 1945, the deck gun was a 105mm weapon weighing five tonnes which they had dismantled and manhandled overboard with great effort on the high seas. This is unlikely to have been a false memory. The False Wermuth had stated to reporters that U-530 did not have a deck gun, it having been left on the quay when sailing, while the Real Wermuth does not appear to have mentioned it at all. No mention of the deck gun appears in the USN or Argentine Navy reports. This highlights the great importance of the deck gun, which was likely the cause of the calamity aboard U-530.

The Outward Voyage

On 19 February 1945, U-530 provisioned at Kiel. Wermuth took aboard a week's supply of fresh provisions, including meat, vegetables and bread, and 17 weeks' supply of special U-Boat foodstuffs. At no time did he reprovision, he said. The voyage lasted 15 weeks, but at Mar del Plata there was so much foodstuff still aboard that the Argentines accused Wermuth of having reprovisioned at sea or elsewhere. 
The date may be significant. Following the disastrous air raid on Dresden on 16 February 1945, Hitler had ordered a reprisal using unconventional shells to be fired by a U-Boat against New York. It is possible that U-530 was chosen for this task. Much later Keitel and Jodl dissuaded Hitler from going through with the measure. [Günther Gellermann, "Der Krieg der nicht stattfand", Bernard & Graefe, Koblenz 1986]. Besides the 105mm deck gun and flak weapons, U-530 loaded 14 torpedoes. These were: 8 x T-3a LUT pi-2 and 6 x T-5 FAT pi-4c. Radio equipment receivers consisted of two main, one all-frequency, one Radione and a D/F. U-530 arrived at Christiansand [Skagerrak] and refueled. 225 tonnes of oil were shipped, 20 tonnes short of capacity, "for better stability" on the recommendation of the Chief Engineer. [The other known Argentine boat, U-977, also loaded short, in her case only 80 tonnes for a capacity of 130 tonnes, again "for better stability" on the instructions of the Chief Engineer]. Wermuth stated that he received his operating instructions direct from Berlin. This would be Dönitz's HQ "Koralle" at Bernau, north of Berlin. U-530, with a crew of 54 officers and men, left Christiansand on 3 March 1945 for Horten, Oslo Fjord, and after a two-day stop there sailed for the Atlantic on 5 March 1945, hugging the coast until well north of Bergen. Nothing abnormal occurred during this part of the voyage.

U-530 In US Waters

On 24 April Wermuth received orders to "operate against New York". His last message from Berlin was received on 26 April, advising defensive measures. There now began a series of mysterious problems of radio reception. By 30 April only the short-wave receiver was working, and when reception ceased that day the unlikely situation had arisen in which U-530 had no further wireless contact with other German transmitters. 
[Wolfgang Hirschfeld, radio operator of U-234, which was passing through the region at the relevant time, had no problems receiving short wave and local transmissions]. U-530 reached the 200-metre line on 28 April and spent the next fortnight south of Long Island, on occasions so close inshore that the crew was allowed to see "the automobiles, trains, skyscrapers and dirigibles" of New York City. On 4 May 1945 Dönitz sent his signal ordering a cessation of U-Boat attack hostilities. Because Wermuth supposedly could not receive wireless messages, he opened his attack on coastal shipping that day. He attacked a convoy of "10 to 20 ships" with the almost infallible LUTs. Two missed and one stuck in the tube. On 6 May he attacked a large convoy with two LUTs and missed, then a tanker and a straggler with one LUT each and missed with both. On 7 May he fired two FATs at a convoy and missed with both, then was forced off by bombing. No U-boat was reported near convoys on any of these three occasions. On 8 May, or it might have been 10 May, as Wermuth recollected, his radio reception mysteriously recovered and he received the order to cease hostilities. He doubted its authenticity but decided to quit his "attack zone" that day. According to his officers, it was on 12 May, and still in US waters, that he attempted to contact the BdU for permission to return to Norway and discovered the war had ended a week previously. It was now decided to go to a very deep trench 1000 miles ENE of Puerto Rico to jettison torpedoes and ammunition, gun parts, the deck gun and papers. Once that was done, the same kind of "democratic" decision was made to head for Argentina as reportedly occurred aboard U-977, if one can believe such a thing: the Equator was crossed on 17 June 1945 and U-530 arrived off Mar del Plata on 9 July. 
The main problems presented by the U-530 story, as recounted by Wermuth and his officers/senior NCOs, can be summarized as follows:
(1) The strange difficulties of radio reception reported, in which not even local shore radio stations could be heard.
(2) The three unsuccessful attacks on convoys using the almost infallible LUT and FAT torpedoes.
(3) The failure of the declassified reports to explain what operations caused the damage to the boat.

The fact that U-530 went to the Atlantic trench 1000 miles ENE of Puerto Rico to jettison the material, including the torpedoes, the ammunition for the flak guns and also the 105mm rounds for the deck gun, suggests that the rounds for the latter were unconventional and of a highly noxious character. Possibly, from early on off New York, the unconventional material in these shells was leaking, and it was of such a nature as to interfere with the electrics of the torpedoes and the radio installation; when the attempt was made to ditch the shells there was a grave spillage which endangered the boat. Perhaps even the tiniest droplets still remaining on the casing were hazardous to life. This would account for the fierce corrosive cleanser and the great fire being set on the casing. It might also account for the vile stench, the mould inside the boat indicating that the hatch had been shut for excessively long periods of time, and the deleterious effects on foodstuffs and crew health which remained until Mar del Plata. As reported in the "New York Times" of 22 October 1944, the Germans were working on a new V-weapon which could be fitted into a rocket or shell. Though not a nuclear weapon, it involved a nuclear principle. The article describes how the additive-gas would expand at detonation to change the nature of the air environment in which the normal explosion occurred, thus greatly enhancing its effects to cover a huge area when the shell impacted. 
Submarine U-977 also surrendered in Mar del Plata, on 17 August 1945, after famously spending 66 days submerged as it travelled from the North Atlantic to Argentina. The voyage of U-977 has also fueled several conspiracy theories involving Hitler and Nazi gold, but no real evidence supports them. In any case, it is highly unlikely that Nazi scientists would have even thought to attempt to preserve the Führer's DNA. The DNA molecule was first discovered in 1869, but it wasn't until 1952 that scientists confirmed that it plays a role in heredity. The first successful clone from an adult mammal didn't come until 1996, when Scottish scientists successfully cloned a sheep. But what is not known is exactly where those U-Boats went or what they were doing. Is it possible they went to Antarctica? Yes. Certainly. Is it possible that the Germans constructed some sort of base there during the war? Yes, that's also possible, but if there was a base in Antarctica, it may have been for a limited U-Boat operation and/or weather monitoring, and possibly research only. As for Grand Admiral Dönitz's remark, this has often been construed, as the state-run Russian media apparently agrees, as implying Antarctica. But the Nazi compounds in and around San Carlos di Bariloche in Rio Negro province of Argentina are no strangers to harsh winters either. What is interesting here is why Russia would be calmly implying that this Nazi survival myth, to the exclusion of recent research that maintains Hitler and Eva escaped to Argentina, is true. And that they might "uncover something" in Antarctica that would shed further light on the mystifying end of World War Two. Maybe they already have, and for the moment they're not talking, but they are letting people know that they have, in their quiet, Russian, chess-playing way. 
Tunnels as tall as the Eiffel Tower discovered under Antarctic Ice Sheets

This report ties in with the persistent postwar rumors of "high strangeness" on the southern polar continent, and of some sort of Nazi base in Antarctica.

British scientists discovered 820-foot tunnels in West Antarctica, 24 August 2014. They were detected on airborne radar imaging and satellite photos: A team of British scientists has discovered tunnels that are almost as tall as the Eiffel Tower under an ice shelf in Antarctica. Researchers from a number of UK universities and the British Antarctic Survey – a research centre based on the continent – detected the tunnels when they flew a plane over the Filchner-Ronne Ice Shelf in West Antarctica. Radar from the plane, as well as satellite photos, revealed that ridges and cavities on the surface of the ice sheet corresponded to tunnels lying at its base. The 820-foot tunnels are nearly as tall as the Eiffel Tower – which measures just over 987 feet – and nearly four times as tall as Tower Bridge – which comes in at 213 feet. Researchers concluded that the placement of the tunnels means that they were most likely formed from meltwater – water released from melting ice – that flowed underneath the ice sheet, over land, and into the ocean. Researchers used a specially-modified 'Twin Otter' aircraft, equipped with remote sensors that provided scientists with data on the land, ice and sea that it flew over, to make the discovery. The data revealed that water moved beneath the ice in concentrated channels, similar to rivers. 
Scientists had previously thought that meltwater moved in more evenly spread sheets of water Specially-designed radar equipment deciphered the tunnels under the ice – it can also be used to pick out layers within the ice itself. Longer-term monitoring from the air can be used to record the break-up of ice sheets or atmospheric changes. The British team will now use its newfound knowledge of the under-ice tunnels to predict how exactly that ice shelf will melt in response to climate change. The researchers published a paper about their work in the journal "Nature Geoscience". This is intriguing for the obvious reason that it appears to confirm in some respects those persistent stories about Nazi bases in Antarctica that have been around since the Nazi Neuschwabenland expedition to the continent in late 1938 and early 1939. These rumors have always included the idea, in some circles, that the Nazis actually built and maintained research facilities on the continent for the development of advanced and exotic technologies, a view which is unlikely when more secure, less logistically vulnerable possibilities existed for the placement of such facilities in southern Latin America. Long after World War 2, the secretive Argentine Government was compelled at congressional hearings to declassify some of its wartime dealings with Nazi Germany. One report stated that a German six-engined transport aircraft landed "at the war's end" on a German ranch at [Puntas de] Gualeguay in Paysandu province, Uruguay and had aboard items of highly secret technological equipment including a device known as "The Bell". It further stated that the latter then came into Argentina and finished up at a German lakeside laboratory near Bariloche, whose ruins are still visible and where AEG made further experiments postwar. To cover the embarrassment of the aircraft's existence in Argentina as the Third Reich collapsed, it was broken up with parts dumped into the Rio Pirana. 
-- Classified Intelligence report of Argentine Economic Ministry, 1945 [only declassified 1993]

But the presence of such tunnels, of such a large scale, hollowed out by under-ice flows of water, suggests the possibility, one which I have entertained as a distinct possibility, of hidden U-Boat bases under the ice. Such possibilities would have been within the capabilities of the German navy, and such bases would have been of value for a variety of reasons. So let’s indulge in our trademark high octane speculation once again. Why would one look for such tunnels in the first place, and why would it be the British doing it? The geological and scientific reasons are fairly obvious and do not need to be rehearsed. It’s the hidden possibilities, the historical ones, that intrigue me. Tunnels of such size could hide any number of things, large things, and thus perhaps one is looking at yet another attempt to corroborate the existence of lost secret Nazi bases, or even something more ancient. The discovery of the tunnels [or perhaps, re-discovery] places the famous expedition of Admiral Byrd, Operation Highjump, in late 1946 and early 1947, once again into a unique light, for let it be recalled that this expedition was outfitted for a stay of several months and yet stayed only a few weeks, when the Admiral called it quits and headed back to the USA, giving an interview to the "El Mercurio" newspaper of Santiago, Chile, on the way back, in which he warned that the USA would have to prepare defenses against "enemy fighters that can fly from pole to pole with tremendous speed".

-- Joseph P. Farrell, 20 January 2016

On 5 March 1947, the prestigious Chilean newspaper "El Mercurio" carried an article from its correspondent Lee van Atta aboard the support ship 'Mount Olympus'. The title of the article was: "Admiral Richard E Byrd refers to the Strategic Importance of the Poles." It has often been alleged that this item never appeared and is fiction, but now we have the cutting in question, and so it exists. In the past it has often been misquoted in translation by occult enthusiasts, the usual interpolation in the text being "flying objects" having the ability "to fly from pole to pole at incredible speeds". The article reads in true translation as follows: "Admiral Byrd declared today that it was imperative for the United States to initiate immediate defence measures against the possible invasion of the country by hostile aircraft operating from the polar regions. The Admiral stated: 'I don't want to frighten anyone unduly but it is a bitter reality that in the case of a new war, the continental United States will be attacked by aircraft flying in from one or both poles'. As regards the recently terminated expedition, Byrd said that the most important result of the observations and discoveries made is the current potential effect which they will have on the security of the United States." "El Mercurio" is a conservative Chilean newspaper with editions in Valparaiso and Santiago. No other newspaper appears to have carried this report.

English Translation of the Article:

Admiral Richard E. 
Byrd warned today that the United States should adopt measures of protection against the possibility of an invasion of the country by hostile planes coming from the polar regions. The Admiral explained that he was not trying to scare anyone, but the cruel reality is that in case of a new war, the United States could be attacked by planes flying over one or both poles. This statement was made as part of a recapitulation of his own polar experience, in an exclusive interview with International News Service. Talking about the recently completed expedition, Byrd said that the most important result of his observations and discoveries is the potential effect that they have in relation to the security of the United States. The fantastic speed with which the world is shrinking – recalled the Admiral – is one of the most important lessons learned during his recent Antarctic exploration. I have to warn my compatriots that the time has ended when we were able to take refuge in our isolation and rely on the certainty that the distances, the oceans, and the poles were a guarantee of safety.

-- Michael S. Heiser, "The Portent"

'Operation Highjump' [1946-47], which was organized to "explore" the Antarctic, was a totally military expedition decreed at the highest levels [ordered by Defense Secretary James Forrestal, planned by the Chief of Naval Operations, Fleet Admiral Chester Nimitz, and carried out by an American hero, Admiral Richard Byrd]. It involved 4,700 men, 33 aircraft, 13 ships, two seaplane groups, an icebreaker, a submarine and an aircraft carrier [the 'USS Midway'].
- The “Highjump” task force converged from three directions exactly on New Swabia, where the Reich had a base in 1938-39.
- In 1958 the U.S. military dropped three atomic bombs on the Antarctic [as part of a “physics experiment,” of course…].
- The name of the 1958 task force which returned to the scene, consisting of 1,500 military personnel and nine ships, was "Task Force 88". 
At this stage the humor in the choice of the number “88” should be obvious to everyone. [The number 88 in numerology stands for the letters HH, which means “Heil Hitler.” Lest anyone think there are no occult types at the Pentagon, D-Day occurred at 6 am on the 6th day of the 6th month of 1944, and 44 is a multiple of 11…and we had 9/11, the John Kennedy murder on 11/22, and on and on]. For those seriously interested in the momentous topic of whether the Reich survived the war as an operating force, one should consult the ten books by Joseph Farrell, Ph.D. on the massive survival of the Third Reich in the postwar period [though he focuses more on South America]. For Farrell, National Socialism now lives on as a financial, high-tech and nuclear-weapons network.

Did US Intelligence Help Smuggle Hitler to South America?

Jerome Corsi is somewhat of a "fixture" in the alternative research community, and a well-respected researcher. In his book "Hunting Hitler", concerning the survival of Adolf and Eva Hitler from the war, Corsi brings to light many troubling questions, including:
• Why were the Americans unable to obtain physical evidence of Hitler’s remains after the Russians absconded with his body?
• Why did both Stalin and Eisenhower doubt Hitler’s demise?
• Did U.S. intelligence agents in Europe, including the OSS and Allen Dulles [who later headed the CIA under President Eisenhower], aid Hitler’s escape, as they did with so many other Nazis?
• Argentinean media reported that Hitler arrived in the country and continued to report his presence. Why have the findings not made it to the US?

Argentina is the forgotten World War Two belligerent, and its perspective on postwar events is all but ignored in mainstream Western media. Its role and perspectives are signally important to a proper understanding of postwar history, including the persistent stories, since the end of the war, of Hitler's presence in and around San Carlos di Bariloche and the Rio Negro province. 
[The locals of the region preserve stories of a quiet, sudden, quick, and secret visit of President Eisenhower to the region in 1954, during the period when he famously "went missing" for a couple of days, ostensibly to have a tooth worked on, or, if one listens to the UFOlogy crowd, to have a secret meeting with extra-terrestrials at California's Muroc air base]. But the reasons for that malign presence are even more significant, and the article strongly suggests them: Corsi presents documentary evidence that Allen Dulles’ wartime mission in Switzerland included helping Martin Bormann, Hitler’s secretary, to funnel billions of dollars of Nazi ill-gotten financial gain out of Germany and invest it in the U.S. and Argentinian stock markets to provide a financial cushion for survival in hiding after the war. But Corsi has added a new factor for consideration: namely, the exchange of a postwar hideout and non-prosecution for war crimes for Herr und Frau Hitler, in return for access to that vast pile of plunder the Nazis had looted from occupied Europe. This inevitably invokes the dirty deals done between the corporate elite of the American and German "military industrial complexes", and that leads ultimately to the implied dirty deals some within the Allied [and mostly American] camp made with Martin Bormann, and his boss. The German armed forces surrendered, and that could be taken to mean Germany did. But no one was present at either the Rheims or Berlin ceremonies signing for the Reich government itself, nor for the Nazi Party. It is a curious omission which, in the wider context of secret financial and intelligence deals, makes one wonder whether it was not intentional. What was it Hitler said? "There will never again occur a November 1918 in German history," and "I have never known the word 'surrender'." 
And of all the people able to point a very aware and knowing finger at complicit American corporations and families that helped him and his regime into power, and of all the people that knew where he had instructed his lackey Bormann to "bury his treasure," it was Adolf Hitler. He and Bormann both knew of the depth of the deal struck with US General Edwin Siebert, and OSS station chief Allen Dulles, whose brother, John Foster, was Eisenhower's secretary of State. If one assumes that Hitler DID survive, and moreover, escaped Berlin and Germany successfully, where did he go? And what did he do once he arrived there? It would simply be unreasonable in the extreme to assume that he went elsewhere in Europe. After all, as history's most notorious criminal, and having just savaged Europe for five and a half years, there would have been no safe haven for him there. Only Franco's Nationalist Spain would have been relatively welcoming, and even then, Hitler would have been within easy reach of Allied or Soviet "special operations teams". The only other possibility of a relatively secure and welcoming refuge would have been Latin America. There the situation would have been a little more secure, but it would have been more or less the same story. The last two possibilities are disturbing, but must be mentioned. One place, of course, for the ex-Führer of the totally eclipsed Greater German Reich to go would have been the alleged "secret base" in Antarctica. One cannot, though, imagine Hitler, who had by this time become accustomed to living in some luxury, managing to be happy in Spartan and doubtless small living quarters surrounded by miles of cold and ice. Which leaves a final possibility...that Hitler's escape had been co-ordinated, not only with Nazis, but with other outside parties, who decided to take him in and screen him in thanks for a job well done. On this view, Hitler, in effect, went to ground with the very people who had put him into power. 
It would seem to take, at some point, the knowledge and connivance of a great power with the intelligence and security resources to keep a secret of that magnitude secret for that long, and to maintain for decades a cover story that looks increasingly to be as shaky as a pristine bullet on a stretcher in Dallas, Texas, in 1963. Of these possibilities, then, we end with two as being the most likely, if the escape scenario is true, and both of them end with "America". "After visiting these two places [Berchtesgaden and the Eagle's Nest on Obersalzberg] you can easily understand how that within a few years Hitler will emerge from the hatred that surrounds him now as one of the most significant figures who ever lived. He had boundless ambitions for his country which rendered him a menace to the peace of the world, but he had a mystery about him in the way that he lived and in the manner of his death that will live and grow after him. He had in him the stuff of which legends are made." -- John F. Kennedy, "Prelude To Leadership - The European Diary of John F. Kennedy - Summer, 1945". Regnery Publishing, Inc., Washington, DC
The Munich Agreement was a settlement permitting Nazi Germany’s annexation of portions of Czechoslovakia along the country’s borders mainly inhabited by German speakers, for which a new territorial designation, “Sudetenland”, was coined. The agreement was signed in the early hours of 30 September 1938 (but dated 29 September) after being negotiated at a conference held in Munich, Germany, among the major powers of Europe, excluding the Soviet Union. Today, it is widely regarded as a failed act of appeasement toward Germany. The purpose of the conference was to discuss the future of the Sudetenland in the face of ethnic demands made by Adolf Hitler. The agreement was signed by Germany, France, the United Kingdom, and Italy. The Sudetenland was of immense strategic importance to Czechoslovakia, as most of its border defenses and banks were situated there, as well as heavy industrial districts. Part of the borderland was invaded and annexed by Poland. Because the state of Czechoslovakia was not invited to the conference, it considered itself to have been betrayed by the United Kingdom and France, so Czechs and Slovaks call the Munich Agreement the Munich Diktat (Czech: Mnichovský diktát; Slovak: Mníchovský diktát). The phrase “Munich Betrayal” (Czech: Mnichovská zrada; Slovak: Mníchovská zrada) is also used, because the military alliance Czechoslovakia had with France and Britain proved useless, and it is also remembered by the phrase “About us, without us!” This phrase is most hurtful for the people of Czechoslovakia (the Czech Republic, Slovakia and Subcarpathian Ruthenia). Today the document is typically referred to simply as the Munich Pact (Mnichovská dohoda). In Germany, the Sudeten crisis led to the so-called Oster Conspiracy. 
General Hans Oster, deputy head of the Abwehr, and prominent figures within the German military who opposed the regime for behavior that threatened to bring Germany into a war they believed it was not ready to fight, discussed overthrowing Hitler and the Nazi regime through a planned storming of the Reich Chancellery by forces loyal to the plot.

Demands for Sudeten Autonomy

From 1918 to 1938, after the breakup of the Austro-Hungarian Empire, more than 3 million ethnic Germans were living in the Czech part of the newly created state of Czechoslovakia. Sudeten German pro-Nazi leader Konrad Henlein founded the Sudeten German Party (SdP), which served as the branch of the Nazi Party for the Sudetenland. By 1935, the SdP was the second largest political party in Czechoslovakia, as German votes concentrated on this party while Czech and Slovak votes were spread among several parties. Shortly after the Anschluss of Austria to Germany, Henlein met with Hitler in Berlin on 28 March 1938, where he was instructed to raise demands unacceptable to the Czechoslovak government led by President Edvard Beneš. On 24 April, the SdP issued a series of demands upon the government of Czechoslovakia, known as the Carlsbad Program. Among the demands, Henlein demanded autonomy for Germans living in Czechoslovakia. The Czechoslovak government responded by saying that it was willing to provide more minority rights to the German minority but refused to grant them autonomy. As the previous appeasement of Hitler had shown, the governments of both France and Britain were intent on avoiding war. The French government did not wish to face Germany alone and took its lead from the British government of Prime Minister Neville Chamberlain. Chamberlain considered the Sudeten German grievances justified and believed Hitler’s intentions were limited. Both Britain and France, therefore, advised Czechoslovakia to concede to Germany’s demands. 
Beneš resisted and on 19 May initiated a partial mobilization in response to possible German invasion. On 20 May, Hitler presented his generals with a draft plan of attack on Czechoslovakia codenamed Operation Green, insisting that he would not “smash Czechoslovakia” militarily without “provocation,” “a particularly favourable opportunity” or “adequate political justification.” On 28 May, Hitler called a meeting of his service chiefs where he ordered an acceleration of U-boat construction and brought forward the construction of his first two battleships, Bismarck and Tirpitz, to spring 1940, and demanded that the increase in the firepower of the battlecruisers Scharnhorst and Gneisenau be accelerated. While recognizing that this would still be insufficient for a full-scale naval war with Britain, Hitler hoped it would be a sufficient deterrent. Ten days later, Hitler signed a secret directive for war against Czechoslovakia, to begin not later than 1 October. On 22 May, Juliusz Łukasiewicz, the Polish ambassador to France, told the French Foreign Minister Georges Bonnet that if France moved against Germany in defense of Czechoslovakia: “We shall not move.” Łukasiewicz also told Bonnet that Poland would oppose any attempt by Soviet forces to defend Czechoslovakia from Germany. Daladier told Jakob Surits, the Soviet ambassador to France: “Not only can we not count on Polish support but we have no faith that Poland will not strike us in the back.” Hitler’s adjutant, Fritz Wiedemann, recalled after the war that he was “very shocked” by Hitler’s new plans to attack Britain and France 3–4 years after “deal[ing] with the situation” in Czechoslovakia. General Ludwig Beck, chief of the German general staff, noted that Hitler’s change of heart in favor of quick action was due to Czechoslovak defenses still being improvised, which would cease to be the case 2–3 years later, and British rearmament not coming into effect until 1941/42. 
General Alfred Jodl noted in his diary that the partial Czechoslovak mobilization of 21 May had led Hitler to issue a new order for Operation Green on 30 May, and that this was accompanied by a covering letter from Keitel stating that the plan must be implemented by 1 October at the very latest. In the meantime, the British government demanded that Beneš request a mediator. Not wishing to sever his government’s ties with Western Europe, Beneš reluctantly accepted. The British appointed Lord Runciman, the former Liberal cabinet minister, who arrived in Prague on 3 August with instructions to persuade Beneš to agree to a plan acceptable to the Sudeten Germans. On 20 July, French Foreign Minister Georges Bonnet told the Czechoslovak Ambassador in Paris that while France would declare her support in public to help the Czechoslovak negotiations, it was not prepared to go to war over the Sudetenland question. During August the German press was full of stories alleging Czechoslovak atrocities against Sudeten Germans, with the intention of forcing the Western Powers into putting pressure on the Czechoslovaks to make concessions. Hitler hoped the Czechoslovaks would refuse and that the Western Powers would then feel morally justified in leaving the Czechoslovaks to their fate. In August, Germany sent 750,000 soldiers along the border of Czechoslovakia, officially as part of army maneuvers. On 4 or 5 September, Beneš submitted the Fourth Plan, granting nearly all of the Sudeten German demands. The Sudeten Germans were under instruction from Hitler to avoid a compromise, and after the SdP held demonstrations that provoked police action in Ostrava on 7 September, in which two of their parliamentary deputies were arrested, the Sudeten Germans used this incident and false allegations of other atrocities as an excuse to break off further negotiations. 
On 12 September, Hitler made a speech at a Nazi Party rally in Nuremberg on the Sudeten crisis in which he condemned the actions of the government of Czechoslovakia. Hitler denounced Czechoslovakia as being a fraudulent state that was in violation of international law’s emphasis on national self-determination, claiming it was a Czech hegemony where neither the Germans, the Slovaks, the Hungarians, the Ukrainians, nor the Poles of the country actually wanted to be in a union with the Czechs. Hitler accused Czechoslovakia’s President Edvard Beneš of seeking to gradually exterminate the Sudeten Germans, claiming that since Czechoslovakia’s creation over 600,000 Germans had been intentionally forced out of their homes under the threat of starvation if they did not leave. He claimed that Beneš’ government was persecuting Germans along with Hungarians, Poles, and Slovaks, and accused Beneš of threatening these nationalities with being branded traitors if they were not loyal to the country. He claimed that he, as the head of state of Germany, would support the right of the self-determination of fellow Germans in the Sudetenland. He condemned Beneš for his government’s recent execution of several German protesters. He accused Beneš of belligerent and threatening behavior towards Germany which, if war broke out, would result in Beneš forcing Sudeten Germans to fight against their will against Germans from Germany. Hitler accused the government of Czechoslovakia of being a client regime of France, claiming that the French Minister of Aviation Pierre Cot had said “We need this state as a base from which to drop bombs with greater ease to destroy Germany’s economy and its industry”. On 13 September, after internal violence and disruption in Czechoslovakia ensued, Chamberlain asked Hitler for a personal meeting to find a solution to avert a war. Chamberlain arrived by plane in Germany on 15 September and then arrived at Hitler’s residence in Berchtesgaden for the meeting.
The Sudeten German leader Henlein flew to Germany on the same day. On that day, Hitler and Chamberlain held discussions in which Hitler insisted that the Sudeten Germans must be allowed to exercise the right of national self-determination and be able to join Sudetenland with Germany; Hitler also expressed concern to Chamberlain about what he perceived as British “threats”. Chamberlain responded that he had not issued “threats” and in frustration asked Hitler: “Why did I come over here to waste my time?” Hitler responded that if Chamberlain was willing to accept the self-determination of the Sudeten Germans, he would be willing to discuss the matter. Chamberlain and Hitler held discussions for three hours, after which the meeting adjourned and Chamberlain flew back to the UK and met with his cabinet to discuss the issue. After the meeting, French Prime Minister Édouard Daladier flew to London on 16 September to meet British officials to discuss a course of action. The situation in Czechoslovakia became more tense that day with the Czechoslovak government issuing an arrest warrant for the Sudeten German leader Henlein, who had arrived in Germany a day earlier to take part in the negotiations. The French proposals ranged from waging war against Germany to supporting the Sudetenland being ceded to Germany. The discussions ended with a firm British-French plan in place. Britain and France demanded that Czechoslovakia cede to Germany all those territories where Germans represented over fifty percent of the population. In exchange for this concession, Britain and France would guarantee the independence of Czechoslovakia. The proposed solution was rejected by both Czechoslovakia and opponents of it in Britain and France.
On 17 September 1938, Hitler ordered the establishment of Sudetendeutsches Freikorps, a paramilitary organization that took over the structure of Ordnersgruppe, an organization of ethnic Germans in Czechoslovakia that had been dissolved by the Czechoslovak authorities the previous day due to its implication in a large number of terrorist activities. The organization was sheltered, trained and equipped by German authorities and conducted cross-border terrorist operations into Czechoslovak territory. Relying on the Convention for the Definition of Aggression, Czechoslovak president Edvard Beneš and the government-in-exile later regarded 17 September 1938 as the beginning of the undeclared German-Czechoslovak war. This interpretation has also been adopted by the contemporary Czech Constitutional Court. On 18 September, Italy’s Duce Benito Mussolini made a speech in Trieste, Italy where he declared “If there are two camps, for and against Prague, let it be known that Italy has chosen its side,” with the clear implication being that Mussolini supported Germany in the crisis. On 20 September, German opponents to the Nazi regime within the military met to discuss the final plans of a plot they had developed to overthrow the Nazi regime. The meeting was led by General Hans Oster, the deputy head of the Abwehr (Germany’s counter-espionage agency). Other members included Captain Friedrich Wilhelm Heinz and other military officers leading the planned coup d’état. On 22 September, Chamberlain, about to board his plane to go to Germany for further talks, told the press who met him there that “My objective is peace in Europe, I trust this trip is the way to that peace.” Chamberlain arrived in Cologne, where he received a lavish welcome with a German band playing “God Save the King” and Germans giving Chamberlain flowers and gifts.
Chamberlain had calculated that fully accepting German annexation of all of the Sudetenland with no reductions would force Hitler to accept the agreement. Upon being told of this, Hitler responded “Does this mean that the Allies have agreed with Prague’s approval to the transfer of the Sudetenland to Germany?”, Chamberlain responded “Precisely”, to which Hitler responded by shaking his head, saying that the Allied offer was insufficient. He told Chamberlain that he wanted Czechoslovakia to be completely dissolved and its territories redistributed to Germany, Poland, and Hungary, and told Chamberlain to take it or leave it. Chamberlain was shaken by this statement. Hitler went on to tell Chamberlain that since their last visit on the 15th, Czechoslovakia’s actions, which Hitler claimed included killings of Germans, had made the situation unbearable for Germany. Later in the meeting, a prearranged deception was undertaken in order to influence and put pressure on Chamberlain: one of Hitler’s aides entered the room to inform Hitler of more Germans being killed in Czechoslovakia, to which Hitler screamed in response “I will avenge every one of them. The Czechs must be destroyed.” The meeting ended with Hitler refusing to make any concessions to the Allies’ demands. Later that evening, Hitler grew worried that he had gone too far in pressuring Chamberlain, and telephoned Chamberlain’s hotel suite, saying that he would accept annexing only the Sudetenland, with no designs on other territories, provided that Czechoslovakia begin the evacuation of the German majority territories by 26 September at 8:00am. After being pressed by Chamberlain, Hitler agreed to have the ultimatum set for 1 October (the same date that Operation Green was set to begin). 
Hitler then said to Chamberlain that this was one concession that he was willing to make to the Prime Minister as a “gift” out of respect for the fact that Chamberlain had been willing to back down somewhat on his earlier position. Hitler went on to say that upon annexing the Sudetenland, Germany would hold no further territorial claims upon Czechoslovakia and would enter into a collective agreement to guarantee the borders of Germany and Czechoslovakia. Meanwhile, a new Czechoslovak cabinet, under General Jan Syrový, was installed and on 23 September a decree of general mobilization was issued. The Czechoslovak army, modern and possessing an excellent system of frontier fortifications, was prepared to fight. The Soviet Union announced its willingness to come to Czechoslovakia’s assistance. Beneš, however, refused to go to war without the support of the Western powers, because he feared that democracy would end and communism would take hold in Czechoslovakia if it accepted help from the Soviet Union alone. In the early hours of 24 September, Hitler issued the Godesberg Memorandum, which demanded that Czechoslovakia cede the Sudetenland to Germany no later than 28 September, with plebiscites to be held in unspecified areas under the supervision of German and Czechoslovak forces. The memorandum also stated that if Czechoslovakia did not agree to the German demands by 2 pm on 28 September, Germany would take the Sudetenland by force. On the same day, Chamberlain returned to Britain and announced that Hitler demanded the annexation of the Sudetenland without delay. The announcement enraged those in Britain and France who wanted to confront Hitler once and for all, even if it meant war, and the supporters of confrontation gained strength.
The Czechoslovakian Ambassador to the United Kingdom, Jan Masaryk, was elated upon hearing of the support for Czechoslovakia from British and French opponents of Hitler’s plans, saying “The nation of Saint Wenceslas will never be a nation of slaves.” On 25 September, Czechoslovakia agreed to the conditions previously agreed upon by Britain, France, and Germany. The next day, however, Hitler added new demands, insisting that the claims of ethnic Germans in Poland and Hungary also be satisfied. On 26 September, Chamberlain sent Sir Horace Wilson to carry a personal letter to Hitler declaring that the Allies wanted a peaceful resolution to the Sudeten crisis. Later that evening, Hitler made his response in a speech at the Sportpalast in Berlin; he gave Czechoslovakia a deadline of 28 September at 2:00pm to cede the Sudetenland to Germany or face war. On 28 September at 10:00am, four hours before the deadline and with no agreement to Hitler’s demand by Czechoslovakia, the British ambassador to Italy, Lord Perth, called Italy’s Foreign Minister Galeazzo Ciano to request an urgent meeting. Perth informed Ciano that Chamberlain had instructed him to request that Mussolini enter the negotiations and urge Hitler to delay the ultimatum. At 11:00am, Ciano met Mussolini and informed him of Chamberlain’s proposition; Mussolini agreed with it and responded by telephoning Italy’s ambassador to Germany and telling him “Go to the Fuhrer at once, and tell him that whatever happens, I will be at his side, but that I request a twenty-four hour delay before hostilities begin. In the meantime, I will study what can be done to solve the problem.” Hitler received Mussolini’s message while in discussions with the French ambassador. Hitler told the ambassador “My good friend, Benito Mussolini, has asked me to delay for twenty-four hours the marching orders of the German army, and I agreed.” Of course, this was no concession, as the invasion date was set for 1 October 1938. Upon speaking with Chamberlain, Lord Perth gave Chamberlain’s thanks to Mussolini as well as Chamberlain’s request that Mussolini attend a four-power conference of Britain, France, Germany, and Italy in Munich on 29 September to settle the Sudeten problem prior to the deadline of 2:00pm. Mussolini agreed. Hitler’s only request was to make sure that Mussolini be involved in the negotiations at the conference. When United States President Franklin D. Roosevelt learned the conference had been scheduled, he telegraphed Chamberlain, “Good man”. A deal was reached on 29 September, and at about 1:30 am on 30 September 1938, Adolf Hitler, Neville Chamberlain, Benito Mussolini and Édouard Daladier signed the Munich Agreement. The agreement was officially introduced by Mussolini although in fact the so-called Italian plan had been prepared in the German Foreign Office. It was nearly identical to the Godesberg proposal: the German army was to complete the occupation of the Sudetenland by 10 October, and an international commission would decide the future of other disputed areas. Czechoslovakia was informed by Britain and France that it could either resist Nazi Germany alone or submit to the prescribed annexations. The Czechoslovak government, realizing the hopelessness of fighting the Nazis alone, reluctantly capitulated (30 September) and agreed to abide by the agreement. The settlement gave Germany the Sudetenland starting 10 October, and de facto control over the rest of Czechoslovakia as long as Hitler promised to go no further. On 30 September after some rest, Chamberlain went to Hitler and asked him to sign a peace treaty between the United Kingdom and Germany. After Hitler’s interpreter translated it for him, he happily agreed. On 30 September, upon his return to Britain, Chamberlain delivered his infamous “peace for our time” speech to crowds in London.
This article was published in the Royal Historical Society of Queensland Journal Volume 16 No. 12, November 1998. It explores the contact history between settlers and Aborigines in the Cairns district until 1892, when the Bellenden Ker Mission (later Yarrabah Mission) was established. To better understand the relationship it is useful to detail the pattern of contact in the district as well as the theories that influenced settlers' thoughts and actions. This will provide the basis for an understanding of the reasons for what took place and reactions to it. Wherever possible I have quoted the writers themselves so that we may read what occurred from their perspective, coloured and filtered as they are through the beliefs and attitudes of their times. This provides an atmosphere of authenticity while capturing the flavour of an era, which although only a century ago, was so markedly different to our own. The philosophical underpinning for black and white relations was different to that found today. Social Darwinism was influential from 1859 until the 1940s. This theory held that:
· Separate races are different species that have evolved through Darwinian processes of natural selection. Aboriginal People were seen as examples of the lowest rung of evolutionary development, the childhood of humanity itself. Naturally the European races were seen as the highest form of human evolution.
· Cultural differences have a biological basis which can be explained through the laws of evolution, natural selection and the survival of the fittest.
· The survival or disappearance of cultures is determined by these natural laws.
Those cultures that survive are the fittest and the strongest; those which disappear or are clearly inferior are weeded out by natural selection and doomed to extinction.[i] This theory legitimised and provided scientific support for:
· Invasion and subsequent colonisation of Aboriginal land, with no recognition of prior ownership or sense of obligation to those dispossessed.
· The apparent complacency at the appalling living conditions, health and death rates amongst Aborigines.
· Willingness to permit punitive expeditions, massacres and killing of Aborigines with no real sense of outrage.
· The concern at the dangers of interbreeding, which threatened the racial purity of the white race with inferior Aboriginal strains.
· A denial that Aborigines were genetically capable of becoming civilised (ie. adopting white values and lifestyles) and looking after their own lives.
· A denial that Aborigines were genetically capable of becoming educated.[ii]
At the time of European settlement the area was inhabited by four main Aboriginal groups.[iii] On the western side of Cairns, from Redlynch to over the ranges were the Djabugay People. On the central and northern side of Cairns, from approximately Bessie Point to Port Douglas and westward to Redlynch, were the Yirrganydji People.
On the Southern side of Cairns to Babinda, eastward to the Murray Prior Range and westward to Lake Barrine were the Yidinji and the area occupied by Yarrabah was the traditional land of the Gunngandji, who inhabited the Cape Grafton Peninsula westward to the Murray Prior Range and southward to the mouth of the Mulgrave River.[iv] The first known European to sail past was Captain James Cook who rounded Cape Grafton on 10 June 1770 and anchored in Trinity Bay.[v] His passage was observed at Brown’s Bay in the Yarrabah district and two paintings of his ship Endeavour were painted onto the rock.[vi] In the 19th Century explorers, surveyors and then beche-de-mer fishermen began sailing up the North Queensland Coast. J.S.V. Mein established a beche-de-mer station on Green Island, off the Cairns Coast, in 1858.[vii] On one trip to the mainland he and his party were tracked by a large group of Aborigines. In Trinity Inlet they came across canoes full of Aborigines but had no commercial dealings with them. However they managed to trade with people at Cape Grafton.[viii] In 1868 Philip Garland set up a beche-de-mer station on Green Island and in 1870 he was attacked at Smith’s Creek when he went up the Inlet looking for food and water.[ix] This is the earliest recorded case of conflict in the Cairns area. Unfortunately it was to be the first of many. Aborigines in the district fought the settlers wherever and whenever they could in an attempt to hold on to their land and way of life. While many details are sketchy or not recorded, I have listed some of the attacks on both sides that did occur during the early days of Cairns. These attacks resulted in the inevitable subjugation of the Aboriginal People and their eventual removal to missions and reserves. This conflict set the pattern for future race relations in the district. It is sobering that the vast majority of documented cases concerned Europeans. 
Aboriginal deaths were rarely recorded and where the accounts have survived they were rarely mentioned by name. Names were only given if an Aboriginal was caught killing a European. A series of murders took place on Green Island in 1873. William Rose and William White were killed on 12 April 1873 by three Aboriginal men picked up by these beche-de-mer fishermen off Palm Island. On 10 July 1873, John Finlay, James Mercer, Charles Reeve and a man named Towie were allegedly killed by four other Palm Island Aborigines on Green Island.[x] Sub-Inspector Johnstone was sent to the Cairns district to hunt down the perpetrators of the Green Island massacre and describes being met by a large group of Aborigines. In his own words: We did not wait for them to attack us, as directly I saw they meant (to) fight we commenced at 200 yards range, and when they saw the result of our first volley they cleared, and we, with a yell, charged, and saw no more of them that day.[xi] In 1873 Dalrymple undertook his North East Coast Expedition from Cardwell to the Endeavour River (where Cooktown is now situated). They arrived at Trinity Inlet on 16 October 1873. On 17 October they saw two parties of Aborigines in outrigger canoes and: Endeavoured to get them to fraternise: but they jumped ashore and disappeared in the mangroves and mud, abandoning their vessels.[xii] Sub-Inspector Johnstone was also a member of the expedition, being in charge of the accompanying native police. On 20 October 1873 he: Saw a mob of blacks coming towards us, yelling and brandishing spears poised on the woomera, each carrying a bundle of spears in the left hand.
I saw at once they intended attacking us and made preparations accordingly.[xiii] In 1874 a man named Old Bill Smith was killed at Green Island.[xiv] Cairns was named after William Wellington Cairns, Queensland’s first Irish-born Governor, who held office at the time.[xv] It was established in 1876 to serve the mining industry, with Trinity Inlet chosen as the first port for the Hodgkinson Goldfield, a new field west of Cairns, after William Smith blazed a track through the Great Dividing Range to the Cairns port.[xvi] Cairns was declared a port of entry in November 1876 and the site of the town surveyed.[xvii] Cairns struggled to establish itself in the early years. Smithfield, 15 kilometres to the north, overshadowed it until it was abandoned after successive floods from the adjacent Barron River from 1877 to 1879.[xviii] In June 1877 an easier route to the Hodgkinson Goldfields across the steep coastal range was discovered to Salisbury (later Port Douglas).[xix] What saved Cairns was the decision in 1884 to make it the railhead for the mineral rich interior at the expense of Port Douglas.[xx] It was also felt that Cairns had a superior harbour. Its other rival was Cooktown, but it declined due to the dwindling reserves of the Palmer River Goldfields in the 1880s and the inability to construct a viable inland railway in time. Thus Cairns was able to sustain its position as a growing settlement throughout the 1880s while its rivals weakened.[xxi] Construction of the railway to the tableland commenced in 1886, with the first section to Redlynch opened in 1887 and the second or range section to Myola in 1891.[xxii] Industries began to emerge in the 1880s that would help to sustain it and ensure its growth.
The Chinese who had abandoned the Palmer River Goldfields were instrumental in the development of Cairns.[xxiii] They established pioneer agricultural industries such as sugar plantations, rice growing and market gardens and by the mid 1880s comprised up to 33% of the population of 1,376 people.[xxiv] They were also involved in trade and commerce, establishing several stores in what became known as Chinatown, centred on Sachs Street.[xxv] Three large sugar plantations were established, a banana export trade developed and Cairns became the port for mineral and timber shipments from the Atherton and Hahn Tablelands.[xxvi] Large companies from the southern colonies invested in the sugar industry which boomed for a short period. However this ended with a price slump when markets were flooded with European sugar beet after 1883 and was compounded by disease, natural disasters and labour problems. The founding of Cairns soon caused tension. In November 1876 there was an attack on a Chinese Camp near the Three Mile.[xxvii] It appeared that from settlement onwards there were few attacks on residents in Cairns itself; those that did occur were on packers, settlers and farmers in isolated areas. It is useful to quote Collinson in this regard: But other influences were at work, and the advent of the timber-getters soon resulted in hostilities. Blacks thieved the camps when the men were absent, and in retaliation were shot on sight. Their fishing and hunting grounds were filched from them, and they were gradually driven back into the scrubs. They watched every opportunity to rob the camps, way-lay pack teams, drive off cattle and horses, and raid maize or sweet potato patches. From 1877 till well into 1884, it was unsafe for any person to travel far away from Cairns without arms.
Packers and teamsters’ outfits always included a revolver and a rifle.[xxviii] In 1878 a packer was killed west of Cairns[xxix] and in the same year an Aboriginal man named Monday was killed at Smithfield.[xxx] The first Aborigines to come into Cairns itself did so in 1882 when three of them arrived from the Cape Grafton or Yarrabah side in canoes.[xxxi] As Jones notes, it is interesting that no native police camp was ever established in Cairns and that until the mid 1880s only a handful of police were available in the town.[xxxii] By June 1886 there were about 100 Aborigines who had come into Cairns seeking work with settlers although they were too uncertain of their reception to bring women with them.[xxxiii] This may have been because of an incident in April 1885 when a so called “tame black” rushed into a house in Spence Street demanding “rations, tobacco and coin”. With the husband being absent Constable O’Brien was called and: Administered his sable opponent a good horsewhipping. Next day all the Blackfellows loitering in the town were hunted out by the police.[xxxiv] Aborigines continued to be steadily forced off their land and further into the interior as land was taken over for cultivation and settlement. They reacted by attacking isolated properties and crops whenever they could. Selectors and timber-getters were encroaching upon the rainforest and its inhabitants from the east, so denying them the fertile rivers and river flats of the Barron and Mulgrave. Miners in the west restricted access to hunting grounds and freshwater fishing. While the scrub provided refuge, it contained insufficient food. In 1878 the Police Commissioner noted that from the Mulgrave to the Mossman “the natives were literally starving”.[xxxv] By 1886 most of the available agricultural land around Cairns and the Barron River had been taken up by selectors.
It was thus inevitable that conflict would erupt.[xxxvi] Jones lists a litany of attacks and reprisals that occurred around the district from 1884 to 1890.[xxxvii] At the end of July 1884 John (Jack) Conway was murdered in the Russell River McManus selection area. It was believed that his death was in retaliation for the way he treated Aborigines.[xxxviii] On 21 December 1884 Donald McAuley, a selector on the Mulgrave River, was killed as was another selector in the same area.[xxxix] These deaths took place after selectors on the Mulgrave River and Trinity Inlet petitioned parliament for the native police to be relocated closer to them and thus provide greater protection against “The incursions of the blacks upon their crops and the consequent loss that are sustaining.”[xl] Not content to wait for assistance, the settlers in the Mulgrave Valley took retribution, which resulted in “completely breaking up the tribe.”[xli] What assistance was provided was condemned as the “uselessness of an occasional visit from the native police after these murders and depredations are committed.”[xlii] James Jameson, manager and proprietor of the Mount Buchan Estate, complained that his homestead was not safe to leave unless well protected by the occupants and he had his horse turned off and his dog speared to death.[xliii] In April 1885 a serious attack was mounted on the Mount Buchan estate homestead with 2 Aborigines wounded and a horse and several cattle speared to death and goods destroyed. The Cairns Post was forced to suggest that: The entire northern population of Blacks so far as practicable should be massed together towards the north of Double Island... The Native Police, if they are to be of any use at all beyond the ornamental, ought to be able to patrol regularly, and see that their charge is kept strictly within a certain boundary.
The present state of things is becoming intolerable.[xliv] On January 1, 1885, Inspector Carr of the Native Police arrived with his detachment and proceeded to the Mulgrave River to inspect the scene of the murder of Donald McAuley. All that “could be done by the Native Police was followed out to serve as a warning to the numerous Blacks in this district.”[xlv] In the same week Surveyor Munro’s camp was pillaged and a quantity of rations stolen.[xlvi] An altercation took place in January 1885 between a mob of 100 Aborigines and a group of Chinese at the Pyramid Plantation, but tragedy was averted by the arrival of a Mr Loridan and a number of his men. There were also reports of insecurity and unease in the Freshwater area,[xlvii] with clothes and rations being stolen from Parker’s selection,[xlviii] while on the Mulgrave River Kinsmill’s selection was cleaned out.[xlix] Attacks continued and the call for assistance reached a fever pitch, without any response from the authorities. Instances of other incidents in 1885 included the clearing out of a selector within three miles of the post office,[l] an attack on a bullock at the 4-Mile,[li] and three bailiffs on various selections being forced to vacate their posts and take refuge at the Mount Buchan estate with Mr Jamieson for mutual protection.[lii] Tom Thomas had a horse speared to death on the Mulgrave.[liii] In the same year Aborigines fired the cane several times at Pyramid Estate south of Cairns.
200 acres were lost in a fire in August 1885.[liv] Cattle were slaughtered and in February 1886, four miles from the Cairns Post Office at the Hop Wah Estate, armed Aborigines were found driving horses off after having turned them loose.[lv] There was also concern over the supply of liquor to Aborigines leading to outrages against settlers when intoxicated and the Cairns Post was moved to point out that this was a punishable offence under the law.[lvi] On January 5, 1886, Charles Henry Townsend was killed at Cape Grafton in the Yarrabah area.[lvii] In February 1886, 8 head of cattle were stolen from James Allen near Toohey’s Creek.[lviii] One has to query what the response of the selectors was to all these attacks. Given what occurred elsewhere in the district it would be reasonable to assume that they took matters into their own hands. An incident in the Herberton district, west of Cairns, is instructive in this regard. 20 Aborigines were seen raiding a potato crop and were forced to retreat through the use of firearms. This course of action was repeated the following day.[lix] I am unable to find any documented instances of White on Black massacres in the Cairns area in the Cairns Post for this period. Certainly the selectors and the paper would have been extremely circumspect about reporting any after the Post informed its readers on 8 January 1885 that Sub-Inspector Nichols was arrested in Port Douglas “on the charge of being accessory before the fact of the recent outrages on the Aboriginals near Irvinebank.”[lx] Continued calls for a Native Police presence in the area do not necessarily mean the settlers were incapable of settling any problems that may have arisen but probably suggest that they preferred the Native Police to keep the peace for them. Either way it was still to take a couple of years before the frontier was completely subdued.
The first blanket day was held in Cairns in May 1886, with between 80 and 90 Aborigines, coming from as far as the Mulgrave and Barron Rivers, receiving blankets at the Customs Office.[lxi] The Post noted that: The gathering of so many Blacks in the town caused a great deal of amusement to the spectators assembled, and an attempt was made endeavouring to get the myalls to perform a corroboree, but they seemed very reticent to give this exhibition, and dispersed carrying their blankets on their shoulders ... Each myall when presented with a blanket, as they stood ranged in single file, distinctly said Thank You and before dispersing endeavoured to give three cheers for the Queen and afterwards cheered for themselves.[lxii] Many of them would have come from the Lilly Street fringe camp as they did not bring their women with them, but were given extra blankets if requested. In September 1886 John Nairne was almost killed in an attack at Freshwater. In 1887 stock was still being speared at Mt Buchan and bush across the Trinity Inlet was being fired.[lxiii] In October 1887 selectors in the Barron Valley petitioned the Minister of Lands for permission to abandon their selections for 12 months, by which time things might have improved.[lxiv] Cane was still being set on fire at the Pyramid Estate. Aboriginal women were to be found in selectors’ camps and beche-de-mer fishermen were known to steal Aboriginal women. Aborigines started congregating in town amidst squalid conditions and abuse of liquor. The first fringe camp formed in 1886 on the banks of Lily Creek, at the turn-off of the West Cairns and Mulgrave roads. This makeshift settlement was a collection of gunyahs built of bags, old kerosene tins and bark. The residents were forced to earn a living by begging or through wood and water-carrying.[lxv] Aboriginal men roamed the streets of Cairns scavenging, frightening housewives.
After an incident in which a constable was threatened with a tomahawk, the Aboriginal inhabitants of Cairns were rounded up and forced out of town.[lxvi] Opium use also became a problem. Attacks continued throughout the district in 1888, 1889 and 1890, leading to calls for an Aboriginal Reserve along the Barron River or north of Buchans Point.[lxvii] In July 1890 George Hobson was killed on the Lower Barron.[lxviii] Reverend John Gribble arrived in the district in 1891 looking for land on which to start a mission, leading to the eventual formation of the Yarrabah Mission. In his report to the Colonial Secretary he stated that the: Barron and Kuranda Blacks were succumbing to white exploitation and that they were no longer regarded as dangerous although the settlers take every precaution.[lxix] In 15 short years the Aboriginal inhabitants of the Cairns area had been dispossessed of their land and forced to subsist in fringe camps under appalling conditions. The Cairns Post of 20 January 1892 describes conditions at one such camp. It was located on the Hop Wah road (now Mulgrave road), less than one kilometre from the post office and inhabited by 100 men, women and children. Tobacco and opium usage were rife, coupled with disease-ridden dogs, and an influenza epidemic was ravaging the community. 
This article also mentions that a group of white men had recently set fire to several gunyahs in the camp.[lxx] Shortly after this incident the camp was deserted; a disabled man who had been left behind was taken into care by the Salvation Army and sent to hospital.[lxxi] In 1892 there were further complaints about Aborigines being allowed to live in Cairns, specifically the fringe camp in the upper part of Lake Street.[lxxii] There were complaints about the possibility of the spread of contagious diseases and that the camp, which with its: Gunyahs are really picturesque; still this wide illustration of savage life is far too near to be considered wholesome by the Whites living literally in its midst.[lxxiii] This state of affairs paralleled what was going on elsewhere in Australia. With Yarrabah established and the original owners vanquished, the black problem was conveniently put out of sight and out of mind. Settlers could concentrate on taming the land, exploiting its resources and pursuing commerce to improve their living standards. But the guilt and the memory of what occurred never completely died. It remained in the psyche and collective consciousness, like a cancer in the benign Australian landscape; hidden, but vaguely sensed and deeply feared; ready at any moment to erupt into the open. Until we recognise this and attempt to acknowledge the past and reconcile it with the present, we will all be diminished. It is to be hoped that by charting what happened in the Cairns district some 100 years ago, the cause of reconciliation will in some small way be advanced. Broughton, Pat J (1984) “The Rise and Fall of Smithfield”, Establishment Trinity Bay: a Collection of Historical Episodes. Cairns, Cairns Historical Society, p. 17-18. Broughton, Pat J and Stephens, S. E (1984) “A Magnificent Achievement: The Building of the Cairns Range Railway”, Establishment Trinity Bay: a Collection of Historical Episodes. Cairns, Cairns Historical Society, p. 24-33. 
Cairns City Heritage Study: A Report for the Cairns City Council and the Department of Environment and Heritage. (1994) Allom Lovell Marquis-Kyle Pty Ltd. Collinson, J. W (1939) Early Days of Cairns. Brisbane, Smith and Paterson. Dalrymple, G. (1873) Narrative and Reports of the Queensland North-east Coast Expedition. Brisbane, Houses of Parliament. Doherty, W. J (1928) “Fragments of North Queensland History”, Cummins & Campbell’s Monthly Magazine, March 1928, p. 13 and 15. Gribble, John (1891) Summary of the Report of Rev. J.B. Gribble to the Colonial Secretary on his Missionary Visit to the Northern Districts of Cairns, Atherton, etc. Johnston, W. T (1983) “Early European Contact with Aborigines of the Present Mulgrave Shire Area Up To the End of the Year 1889”, Mulgrave Shire Historical Society Bulletin no 54. Johnstone-Need, J. W (1984) Spinifex and Wattle: Reminiscences of Pioneering in North Queensland. Being the Experiences of Robert Arthur Johnstone, Explorer and Naturalist, Sub-Inspector of Police and Police Magistrate. Cairns, The Author. Jones, Dorothy (1976) Trinity Phoenix: A History of Cairns and District. Cairns, The Author. Kelly, Kerrie and Sue Lenthall (1997) An Introduction to Recent Aboriginal and Torres Strait Islander History in Queensland. Cairns, Rural Health Training Unit. Kerr, Ruth Sadie (1984) “Packers, Speculators and Customs Collectors: The Opening of Cairns in 1876”, Establishment Trinity Bay: A Collection of Historical Episodes. Cairns, Cairns Historical Society, p. 10-12. Loos, Noel (1982) Invasion and Resistance: Aboriginal-European Relations on the North Queensland Frontier, 1861-1897. Canberra, Australian National University. Martyn, Julie (1993) The History of Green Island: The Place of Spirits. Cairns, The Author. Prideaux, P. (198-?) The Genesis of Cairns. Unpublished Paper. 
Seaton, Douglas (1952) “Rock Paintings in the Brown Bay Area, North Queensland, Irukandji People”, North Queensland Naturalist, vol 20 no 102, September 1952, p. 35-37. Viater (1929) “Trinity Bay: Genesis of the Port of Cairns”, Cummins & Campbell’s Monthly Magazine, March 1929, p. 51-53. [i] K. Kelly, An Introduction to Recent Aboriginal and Torres Strait Islander History in Queensland, p. 35 [iii] The exact boundaries and locations are unclear and indeed may overlap. Evidence of this can be seen in Native Title Claims where the same area is claimed by different groups [iv]There were also sub groups and clans. For example the Yidinji can be divided into seven sub-groups such as the Malanbarra-Yidinji of the Goldsborough Valley. There are also several spelling variations for each group. The Djabugay were also known as Tjapukai, Tja Pukai, Tja Pukanja and Tja Boga. The Yirrganydji were known as Irukandji and Yirkandji. The Yidinji were known as Indinji, Idindji, Yidindji and Yidindyi and Gunngandji were previously known as Konkandji, Kunggandyi, Kunngganji and Kungandji (Jones, Trinity Phoenix: a History of Cairns and District, 1976, p. 291-292) [v] Jones, p. 2 [vi] D. Seaton, “Rock Paintings in the Brown Bay Area, North Queensland, Irukandji People”, North Queensland Naturalist, vol 20 no 102, September 1952, p. 35-37. W. Johnston, “Early European Contact with Aborigines of the Present Mulgrave Shire Area Up To the End of the Year 1889”, Mulgrave Historical Society Bulletin no 54, p. 1, notes that Yarrabah folk lore has it that an ambush had been set to prevent Cook’s party from approaching a sacred site. Fortunately the site was not discovered by Cook [vii] Jones, p. 13 [viii] Ibid., p. 15 [ix] Jones, p. 16 The site became known as Battle camp or Battle Creek and later Smith’s landing. 
The clash set the tone for race relations in the district and it is thought that what occurred here in 1870 was the reason why Aborigines approached later contacts with settlers with wariness or hostility. According to Collinson a prominent Government official at Cooktown publicly stated that “if the people at Cairns had trouble with the natives it could be traced back to that event” (Collinson, p. 61) The fight was over the attempted theft of a canoe by Garland. This account was mentioned by Johnstone, Queenslander, 5 March 1904 (In J. Johnstone-Need, Spinifex and Wattle: Reminiscences of Pioneering in North Queensland. Being the Experiences of Robert Arthur Johnstone, Explorer and Naturalist, Sub-Inspector of Police and Police Magistrate, p. 2 and 54). The account then briefly resurfaces in 2 articles in Cummins & Campbell (W. Doherty, “Fragments of North Queensland History”, 1928, p. 13 and Viater, “Trinity Bay: Genesis of the Port of Cairns”, March 1929, p. 51) and then Collinson, p. 60 and 61 and all subsequent accounts appear to have emanated from the account in Collinson. [x] Martyn, J. The History of Green Island, p. 14-15 [xi] Queenslander 5 March 1904. In, Johnstone-Need, p. 55. Johnstone states this event occurred in 1872. However the Green Island murders took place in 1873 [xii] G. Dalrymple, Narrative and Reports of the Queensland North-east Coast Expedition, p. 17 [xiii] Johnstone in his report of the Expedition (Dalrymple, p. 44). It is not mentioned what the preparations were. However Collinson mentions the encounter and that it was “demanding (of) stern measures” (Collinson, p. 61-2). Jones, p. 29, gives a fuller account; shots were fired at 30 yards. Johnstone wrote about this incident in greater detail in the Queenslander 17/12/1904 and reprinted in, Johnstone-Need, p. 154. An unspecified number of Aborigines were shot and killed [xiv] N. 
Loos, Invasion and Resistance: Aboriginal-European Relations on the North Queensland Frontier, p.213 [xv] Jones, p. 84. Jones makes the comment that “his short term in office was in no way distinguished” [xvi] R. Kerr, “Packers, Speculators and Customs Collectors: the Opening of Cairns in 1876”, Establishment Trinity Bay: a Collection of Historical Episodes., p. 10. For an exhaustive account on the events leading to the founding of Cairns and its establishment in 1876 see Prideaux, P. The Genesis of Cairns [xvii] Cairns City Heritage Study: a Report for the Cairns City Council and the Department of Environment and Heritage, p. 12 [xviii] P. Broughton, “The Rise and Fall of Smithfield” Establishment Trinity Bay: a Collection of Historical Episodes, p. 18 [xix] Ibid., p. 18 [xx] Cairns City Heritage Study, p. 14 [xxi] Ibid., p. 14 [xxii] P. Broughton, “A Magnificent Achievement”, Establishment Trinity Bay: a Collection of Historical Episodes., p. 24 [xxiii] Ibid., p. 15 [xxv]J. Collinson, Early Days of Cairns, p. 70. Sachs Street was later renamed Grafton Street. [xxvi] Ibid., p. 15 [xxvii] Jones, p. 301 [xxviii] Collinson, p. 61-2 [xxix] Loos, p. 221 [xxx] Ibid., p. 223 [xxxi] Ibid., p. 301 [xxxiii] Ibid., p. 97 [xxxiv] Cairns Post 23 April 1885, p. 2 [xxxv] Ibid., p. 93 [xxxvi] This did not always occur. In the Cairns Post, 6 March 1884, mention is made of one property that was immune to attack and cattle spearing as the selector was supplying food to Aborigines on his selection [xxxvii] Jones, p. 302-314 [xxxviii] Jones, p. 303. Cairns Post 14 August 1884. Loos, p. 231. Collinson, p. 62 [xxxix] Loos, p. 232 [xl] Cairns Post 22 May 1884, p. 2; 3 July 1884, p. 2 and p. 3; 10 July 1884, p. 2 and Jones, p. 306. 
The need for a centralised native police camp was justified on the grounds that “By the long immunity from punishment, these blacks are now getting very bold in their deprecations, and unless the district has a native police force in a central position we may expect to hear of even more serious offences than thieving committed” (Cairns Post, 3 July 1884). Examples of the losses sustained are also given. In less than a week a calf was speared on Fallon’s selection, Jamieson lost a working bullock, Anderson had 2 cows speared and a calf was speared in the Mulgrave district. The paper points out that “Such a state of affairs in a well settled district and within a short distance of a populous town, surely points to something radically wrong in the administration of the forces appointed to keep the blacks in check and protect the settlers” (Cairns Post 10 July 1884, p. 2). However in a letter to the Cairns Post on 7 August 1884, p. 2, John Atherton wrote that Inspector Carr had told him that he was stationed in the district to “protect the blacks, not to punish them” [xliii] Cairns Post 3 July 1884, p. 2 & 8 January 1885, p. 2 [xliv] Cairns Post 16 April 1885, p. 2 [xlv] Cairns Post 8 January 1885, p. 2 [xlvii] Cairns Post 25 January 1885, p. 2 [xlviii] Cairns Post 30 April 1885, p. 2 [xlix] Cairns Post 15 May 1885, p. 2 [l] Cairns Post 4 June 1885, p. 2 [li] Cairns Post 11 June 1885, p. 2 [lii] Cairns Post 13 August 1885, p. 2 [liii] Cairns Post 24 September 1885, p. 2 [liv] Cairns Post 8 October 1885, p. 2 [lv] Cairns Post, 11 February 1886 [lvi] Cairns Post 7 August 1884, p. 2 [lvii] Loos, p. 377. Jones, p. 309 [lviii] Cairns Post 25 February 1886, p. 2 [lix] Cairns Post 9 July 1885, p. 2 [lx] Cairns Post 8 January 1885, p. 2 [lxi] Jones, p. 310 & the Cairns Post 27 May 1886, p. 2 [lxii] Cairns Post 27 May 1885, p. 2 [lxiii] Jones, p. 310 [lxv] Collinson, p. 64-5. This camp was probably established in 1885, not 1886 [lxvi] Jones, p. 311-312 [lxvii] Jones, p. 
312-314 [lxviii] Ibid., p. 241 [lxix] J. Gribble, Summary of the Report of Rev J.B. Gribble ..., p. 1 [lxx] Cairns Post 20 January 1892, p. 2 [lxxi] Cairns Post 20 February 1892, p. 2. The Blacks [lxxii] Cairns Post 5 October 1892. The Blacks in Cairns
Your Weather Definitions: Advection: the horizontal transfer of any property in the atmosphere by the movement of air (wind). examples include heat and moisture advection. bernoulli's theorem: a statement of the conservation of energy for a steady, nonviscous, incompressible level flow. it is an inverse relationship in which pressures are least where velocities are greatest. theorized by daniel bernoulli (1700-1782), a swiss mathematician and physicist. blowing sand: sand that is raised by the wind to heights of six feet or greater. it is reported as "blsa" in an observation and on the metar. centrifugal force: the apparent force in a rotating system that deflects masses radially outward from the axis of rotation. this force increases towards the equator and decreases towards the poles. chemosphere: a vaguely defined region of the upper atmosphere in which photochemical reactions take place. it includes the top of the stratosphere, all of the mesosphere, and sometimes the lower part of the thermosphere. dense fog advisory: advisory issued when fog reduces visibility to 1/8 mile or less, creating possible hazardous conditions. divergence: wind movement that results in a horizontal net outflow of air from a particular region. divergence at lower levels is associated with a downward movement of air from aloft. contrast with convergence. doldrums: located between 30 degrees north and 30 degrees south latitudes in the vicinity of the equator, this area typically has calm or light and variable winds. also a nautical term for the equatorial trough. dusk: the period of waning light from the time of sunset to dark. gale: on the beaufort wind scale, a wind with speeds from 28 to 55 knots (32 to 63 miles per hour). for marine interests, it can be categorized as a moderate gale (28 to 33 knots), a fresh gale (34 to 40 knots), a strong gale (41 to 47 knots), or a whole gale (48 to 55 knots). 
in 1964, the world meteorological organization defined the categories as near gale (28 to 33 knots), gale (34 to 40 knots), strong gale (41 to 47 knots), and storm (48 to 55 knots). geophysics: the study of the physics or nature of the earth and its environment. it deals with the composition and physical phenomena of the earth and its liquid and gaseous envelopes. areas of study include the atmospheric sciences and meteorology, geology, seismology, and volcanology, and oceanography and related marine sciences, such as hydrology. by extension, it often includes astronomy and the related astro-sciences. haboob: sudanese name for duststorm or sandstorm with strong winds that carry small particles of dirt or sand into the air, particularly severe in areas of drought. hudson bay low: an area of low pressure over or near the hudson bay area of canada that often introduces cold air to the north central and northeast united states. newhall winds: the local name for winds blowing downward from desert uplands through the newhall pass southward into the san fernando valley, north of los angeles. noctilucent clouds: rarely seen clouds of tiny ice particles that form approximately 75 to 90 kilometers above the earth's surface. they have been seen only during twilight (dusk and dawn) during the summer months in the higher latitudes. they may appear bright against a dark night sky, with a blue-silver color or orange-red. palmer drought index: a long-term meteorological drought severity index produced by the noaa/usda (department of agriculture) joint agricultural weather facility. the index depicts prolonged times, as in months or years, of abnormal dryness or wetness. it responds slowly, changing little from week to week, and reflects long-term moisture runoff, recharge, and deep percolation, as well as evapotranspiration. sounding: a plot of the atmosphere, using data from upper air or radiosonde observations. 
usually confined to a vertical profile of the temperatures, dew points, and winds above a fixed location. subtropical air: an air mass that forms over the subtropical region. the air is typically warm with a high moisture content due to the low evaporative process. tilt: the inclination to the vertical of a significant feature of the pressure pattern or of the field of moisture or temperature. for example, midlatitude troughs tend to display a westward tilt with altitude through the troposphere. trajectory: the curve that a body, such as a celestial object, describes in space. this applies to air parcel movement also. tropics/tropical: the region of the earth located between the tropic of cancer, at 23.5 degrees north latitude, and the tropic of capricorn, at 23.5 degrees south latitude. it encompasses the equatorial region, an area of high temperatures and considerable precipitation during part of the year. troposphere: the lowest layer of the atmosphere, extending from the earth's surface to approximately 11 miles (17 kilometers) into the atmosphere. characterized by clouds and weather, temperature generally decreases with increasing altitude. typhoon: the name for a tropical cyclone with sustained winds of 74 miles per hour (65 knots) or greater in the western north pacific ocean. this same tropical cyclone is known as a hurricane in the eastern north pacific and north atlantic ocean, and as a cyclone in the indian ocean. undercast: in aviation, it is an opaque cloud layer viewed from an observation point above the layer. from the ground, it would be considered an overcast. united states weather bureau: the official name of the national weather service prior to 1970. whiteout: when visibility is near zero due to blizzard conditions or occurs on sunless days when clouds and surface snow seem to blend, erasing the horizon and creating a completely white vista. 
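The gale entry above lists two sets of wind-speed bands in knots, along with their rough mile-per-hour equivalents. As an illustration only (the function names and the 1.15078 conversion factor are ours, not the glossary's), the 1964 WMO categories can be expressed as a small lookup:

```python
# Sketch of the 1964 WMO wind categories quoted in the gale entry above:
# near gale 28-33 kt, gale 34-40 kt, strong gale 41-47 kt, storm 48-55 kt.

def wmo_wind_category(knots):
    """Return the WMO category name for a wind speed in knots, or None
    if the speed falls outside the 28-55 knot range discussed here."""
    bands = [
        (28, 33, "near gale"),
        (34, 40, "gale"),
        (41, 47, "strong gale"),
        (48, 55, "storm"),
    ]
    for low, high, name in bands:
        if low <= knots <= high:
            return name
    return None

def knots_to_mph(knots):
    """One knot is one nautical mile per hour, about 1.15078 mph."""
    return knots * 1.15078

print(wmo_wind_category(35))       # "gale"
print(round(knots_to_mph(28), 1))  # about 32.2, matching the "32 mph" above
```

The 32 and 63 mph figures in the glossary are simply the 28 and 55 knot band edges rounded after this conversion.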
Adiabatic process: a thermodynamic change of state in a system in which there is no transfer of heat or mass across the boundaries of the system. in this process, compression will result in warming and expansion will result in cooling. clear ice: a glossy, clear, or translucent ice formed by the relatively slow freezing of large supercooled water droplets. the droplets spread out over an object, such as an aircraft wing's leading edge, prior to complete freezing and form a sheet of clear ice. cold high: a high pressure system that has its coldest temperatures at or near the center of circulation, and horizontally, is thermally barotropic. it is shallow in nature, as circulation decreases with height. associated with cold arctic air, it is usually stationary. also known as a cold core high. contrast with a warm high. conduction: the transfer of heat through a substance by molecular action or from one substance by being in contact with another. dog days: the name given to the very hot summer weather that may persist for four to six weeks between mid-july through early september in the united states. in western europe, this period may exist from the first week in july to mid-august and is often the period of the greatest frequency of thunder. named for sirius, the dog star, which lies in conjunction with the sun during this period, it was once believed to intensify the sun's heat during the summer months. flood stage: the level of a river or stream where overflow onto surrounding areas can occur. national hurricane center (nhc): a branch of the tropical prediction center, it is the office of the national weather service that is responsible for tracking and forecasting tropical cyclones over the north atlantic, caribbean, gulf of mexico, and the eastern pacific. for further information, contact the nhc, located in miami, florida. sea ice: ice that is formed by the freezing of sea water. 
it forms first as small crystals, thickens into sludge, and coagulates into sheet ice, pancake ice, or ice floes of various shapes and sizes. showalter stability index: a measure of the local static stability of the atmosphere. it is determined by lifting an air parcel to 500 millibars and then comparing its temperature to that of the environment. if the parcel is colder than its new environment, then the atmosphere is more stable. if the parcel is warmer than its new environment, then the atmosphere is unstable and the potential for thunderstorm development and severe weather increases. snow blindness: temporary blindness or impaired vision that results from bright sunlight reflected off the snow surface. the medical term is niphablepsia. sublimation: the process of a solid (ice) changing directly into a gas (water vapor), or water vapor changing directly into ice, at the same temperature, without ever going through the liquid state (water). the opposite of crystallization. Barometer: an instrument used to measure atmospheric pressure. two examples are the aneroid barometer and the mercurial barometer. celestial sphere: the apparent sphere of infinite radius having the earth as its center. all heavenly bodies (planets, stars, etc.) appear on the "inner surface" of this sphere and the sun moves along the ecliptic. cirrocumulus: a cirriform cloud with vertical development, appearing as a thin sheet of small white puffs which give it a rippled effect. it often creates a "mackerel sky", since the ripples may look like fish scales. sometimes it is confused with altocumulus, however, it has smaller individual masses and does not cast a shadow on other elements. it is also the least common cloud type, often forming from cirrus or cirrostratus, with which it is associated in the sky. 
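The Showalter entry above describes lifting a parcel to 500 millibars and comparing its temperature with the environment's. The sketch below illustrates only that comparison, under a simplifying assumption we are adding: the parcel is taken as unsaturated, so it cools along the dry adiabat the whole way, whereas the real index follows the moist adiabat above the condensation level. The function names are illustrative, not glossary terminology.

```python
# Showalter-style stability comparison (simplified: dry adiabat only).
# A parcel lifted from 850 hPa cools following Poisson's equation,
# T2 = T1 * (p2/p1)**(R/cp), with R/cp ~ 0.2854 for dry air.

R_OVER_CP = 0.2854  # gas constant over specific heat, dry air

def lift_dry_adiabatic(temp_k, p_start_hpa, p_end_hpa):
    """Temperature (K) of a parcel lifted dry-adiabatically between levels."""
    return temp_k * (p_end_hpa / p_start_hpa) ** R_OVER_CP

def showalter_like_index(t850_k, t500_env_k):
    """Environment minus lifted-parcel temperature at 500 hPa.
    Positive: parcel colder than environment (more stable);
    negative: parcel warmer (unstable), as the glossary entry says."""
    t_parcel = lift_dry_adiabatic(t850_k, 850.0, 500.0)
    return t500_env_k - t_parcel

# A warm 20 C parcel at 850 hPa under a cold -30 C environment at 500 hPa:
print(round(showalter_like_index(293.15, 243.15), 1))  # about -8.8 (unstable)
```

Operational values use the moist adiabat once the parcel saturates, so real Showalter indices will differ from this dry-only sketch.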
fresh water: water found in rivers, lakes, and rain, that is distinguished from salt water by its appreciable lack of salinity. funnel cloud: a violent, rotating column of air visibly extending from the base of a towering cumulus or cumulonimbus toward the ground, but not in contact with it. it is reported as "fc" in an observation and on the metar. isobar: the line drawn on a weather map connecting points of equal barometric pressure. mud slide: fast moving soil, rocks and water that flow down mountain slopes and canyons during a heavy downpour of rain. national weather service (nws): a primary branch of the national oceanic and atmospheric administration, it is responsible for all aspects of observing and forecasting atmospheric conditions and their consequences, including severe weather and flood warnings. for further information, contact the nws. oxygen (o2): a colorless, tasteless, odorless gas that is the second most abundant constituent of dry air, comprising 20.946%. parhelion: the scientific name for sun dogs. either of two colored luminous spots that appear at roughly 22 degrees on both sides of the sun at the same elevation. they are caused by the refraction of sunlight passing through ice crystals. they are most commonly seen during winter in the middle latitudes and are exclusively associated with cirriform clouds. they are also known as mock suns. prognostic chart: a chart of forecast predictions that may include pressure, fronts, precipitation, temperature, and other meteorological elements. also known as a prog. salt water: the water of the ocean, distinguished from fresh water by its appreciable salinity. skew t-log p diagram: a thermodynamic diagram, using the temperature and the logarithm of pressure as coordinates. it is used to evaluate and forecast air parcel properties. some values that can be determined are the convective condensation level (ccl), the lifting condensation level (lcl), and the level of free convection (lfc). 
thermograph: essentially, a self-recording thermometer. a thermometer that continuously records the temperature on a chart. universal time coordinate: one of several names for the twenty-four hour time which is used throughout the scientific and military communities. wave(s): in general, any pattern with some roughly identifiable periodicity in time and/or space. it is also considered as a disturbance that moves through or over the surface of the medium with speed dependent on the properties of the medium. in meteorology, this applies to atmospheric waves, such as long waves and short waves. in oceanography, this applies to waves generated by mechanical means, such as currents, turbidity, and the wind. wedge: primarily refers to an elongated area of shallow high pressure at the earth's surface. it is generally associated with cold air east of the rockies or appalachians. it is another name for a ridge, ridge line, or ridge axis. contrast with a trough. wedge is also a slang term for a large, wide tornado with a wedge-like shape. wind speed: the rate of the motion of the air in a unit of time. it can be measured in a number of ways. in observing, it is measured in knots, or nautical miles per hour. the unit most often used in the united states is miles per hour. year: the interval required for the earth to complete one revolution around the sun. a sidereal year, which is the time it takes for the earth to make one absolute revolution around the sun, is 365 days, 6 hours, 9 minutes, and 9.5 seconds. the calendar year begins at 12 o'clock midnight local time on the night of december 31st-january 1st. currently, the gregorian calendar of 365 days is used, with 366 days every four years, a leap year. the tropical year, also called the mean solar year, is dependent on the seasons. it is the interval between two consecutive returns of the sun to the vernal equinox. 
in 1900, that took 365 days, 5 hours, 48 minutes, and 46 seconds, and it is decreasing at the rate of 0.53 second per century. Air mass: an extensive body of air throughout which the horizontal temperature and moisture characteristics are similar. anemometer: an instrument that measures the speed or force of the wind. barotropy: the state of a fluid in which surfaces of constant density or temperature are coincident with surfaces of constant pressure. it is considered zero baroclinity. blizzard: a severe weather condition characterized by low temperatures, winds 35 mph or greater, and sufficient falling and/or blowing snow in the air to frequently reduce visibility to 1/4 mile or less for a duration of at least 3 hours. a severe blizzard is characterized by temperatures near or below 10°f, winds exceeding 45 mph, and visibility reduced by snow to near zero. cheyenne fog: an upslope fog formed by the westward flow of air from the missouri river valley, producing fog on the eastern slopes of the rockies. drifts: normally used when referring to snow or sand particles deposited behind obstacles or irregularities of the surface or driven into piles by the wind. dry slot: an area of dry, and usually cloud-free, air that wraps into the southern and eastern sections of a synoptic scale or mesoscale low pressure system. best seen on a satellite picture, such as a water vapor image. eye wall: an organized band of convection surrounding the eye, or center, of a tropical cyclone. it contains cumulonimbus clouds, intense rainfall and very strong winds. fogbow: a whitish semicircular arc seen opposite the sun in fog. the outer margin has a reddish tinge, its inner margin has a bluish tinge, and the middle of the band is white. an additional bow with reversed colors sometimes appears inside the first. gale warning: a warning for marine interests for impending winds from 28 to 47 knots (32 to 54 miles per hour). growing season: considered the period of the year during which the temperature of cultivated vegetation remains sufficiently high enough to allow plant growth. usually considered the time period between the last killing frost in the spring and the first killing frost of the autumn. the frost-free growing season is between the first and last occurrence of 32°f temperatures in spring and autumn. ice jam: an accumulation of broken river ice caught in a narrow channel, frequently producing local flooding. primarily occurs during a thaw in the late winter or early spring. jet streak: a region of accelerated wind speed along the axis of a jet stream. latent heat: the energy released or absorbed during a change of state. nocturnal thunderstorms: thunderstorms which develop after sunset. they are often associated with the strengthening of the low level jet and are most common over the plains states. they also occur over warm water and may be associated with the seaward extent of the overnight land breeze. prevailing visibility: it is considered representative of visibility conditions at the observation station. it is the greatest distance that can be seen throughout at least half the horizon circle, but not necessarily continuous. 
rawinsonde: an upper air observation that evaluates the winds, temperature, relative humidity, and pressure aloft by means of a balloon-attached radiosonde that is tracked by a radar or radio direction-finder. it is a radiosonde observation combined with a winds-aloft observation, called a rawin. snow banner: a plume of snow blown off a mountain crest, resembling smoke blowing from a volcano. snowpack: the amount of annual accumulation of snow at higher elevations. thaw: a warm spell of weather when ice and snow melt. to free something from the binding action of ice by warming it to a temperature above the melting point of ice. thunderstorm: produced by a cumulonimbus cloud, it is a microscale event of relatively short duration characterized by thunder, lightning, gusty surface winds, turbulence, hail, icing, precipitation, moderate to extreme up and downdrafts, and under the most severe conditions, tornadoes. triple point: the point at which any three atmospheric boundaries meet. it is most often used to refer to the point of occlusion of an extratropical cyclone where the cold, warm, and occluded fronts meet. cyclogenesis may occur at a triple point. it is also the condition of temperature and pressure under which the gaseous, liquid, and solid forms of a substance can exist in equilibrium. veering: a clockwise shift in the wind direction in the northern hemisphere at a certain location. in the southern hemisphere, it is counterclockwise. this can either happen horizontally or vertically (with height). for example, the wind shifts from the north to the northeast to the east. it is the opposite of backing. Arid: a term used for an extremely dry climate. the degree to which a climate lacks effective, life-promoting moisture. it is considered the opposite of humid when speaking of climates. convergence: wind movement that results in a horizontal net inflow of air into a particular region. convergent winds at lower levels are associated with upward motion. 
contrast with divergence. corposant: a luminous, sporadic, and often audible electric discharge. it occurs from objects, especially pointed ones, when the electrical field strength near their surfaces attains a value near 1000 volts per centimeter. it often occurs during stormy weather and might be seen on a ship's mast or yardarm, aircraft, lightning rods, and steeples. drifting snow: snow particles blown from the ground by the wind to a height of less than six feet. eclipse: the obscuring of one celestial body by another. evaporation: the physical process by which a liquid, such as water, is transformed into a gaseous state, such as water vapor. it is the opposite physical process of condensation. eye: the center of a tropical storm or hurricane, characterized by a roughly circular area of light winds and rain-free skies. an eye will usually develop when the maximum sustained wind speeds exceed 78 mph. it can range in size from as small as 5 miles to up to 60 miles, but the average size is 20 miles. in general, when the eye begins to shrink in size, the storm is intensifying. firewhirl: a tornado-like rotating column of fire and smoke created by intense heat from a forest fire or volcanic eruption. hygrometer: an instrument that measures the water vapor content of the atmosphere. intermountain high: an area of high pressure that occurs during the winter between the rocky mountains and the sierra-cascade ranges. it blocks the eastward movement of pacific cyclones. also called plateau high or great basin high. lenticular cloud: a cloud species with elements resembling smooth lenses or almonds, more or less isolated. these clouds are caused by a wave wind pattern created by mountains. they are also indicative of downstream turbulence on the leeward side of a barrier. negative vorticity advection: the advection of lower values of vorticity into an area. profiler: a type of doppler radar that typically measures both wind speed and direction from the surface to 55,000 feet in the atmosphere. reflectivity: a measure of the process by which a surface can turn back a portion of incident radiation into the medium through which the radiation approached. it also refers to the degree by which precipitation is able to reflect a radar beam. specific humidity: the ratio of the density of the water vapor to the density of the air, a mix of dry air and water vapor. it is expressed in grams per gram or in grams per kilogram. the specific humidity of an air parcel remains constant unless water vapor is added to or taken from the parcel. storm prediction center (spc): a branch of the national centers for environmental prediction, the center monitors and forecasts severe and non-severe thunderstorms, tornadoes, and other hazardous weather phenomena across the united states. formerly known as the severe local storms (sels) unit of the national severe storms forecast center. for further information, contact the spc, located in norman, oklahoma. subrefraction: less than normal bending of light or a radar beam as it passes through a zone of contrasting properties, such as atmospheric density, water vapor, or temperature. subtropical air: an air mass that forms over the subtropical region. the air is typically warm, with a high moisture content due to the evaporative process. sunset: the daily disappearance of the sun below the western horizon as a result of the earth's rotation. in the united states, it is considered that instant when the upper edge of the sun just disappears below the sea level horizon. in great britain, the center of the sun's disk is used instead.
time of sunset is calculated for mean sea level. tide: the periodic rising and falling of the earth's oceans and atmosphere. it is the result of the tide-producing forces of the moon and the sun acting on the rotating earth. this propagates a wave through the atmosphere and along the surface of the earth's waters. trade winds: two belts of prevailing winds that blow easterly from the subtropical high pressure centers towards the equatorial trough. primarily lower level winds, they are characterized by their great consistency of direction. in the northern hemisphere, the trades blow from the northeast, and in the southern hemisphere, the trades blow from the southeast. warm: to have or give out heat to a moderate or adequate degree. a subjective term for temperatures between cold and hot. in meteorology, an air parcel that is warm is only so in relation to another parcel. world meteorological organization (wmo): from weather prediction to air pollution research, climate change related activities, ozone layer depletion studies and tropical storm forecasting, the world meteorological organization coordinates global scientific activity to allow increasingly prompt and accurate weather information and other services for public, private and commercial use, including the international airline and shipping industries. established by the united nations in 1951, it is composed of 184 members. for more information, contact the wmo, located in geneva, switzerland. zenith: the point which is elevated 90 degrees from all points on a given observer's astronomical horizon; the point on any given observer's celestial sphere that lies directly above him. the opposite of nadir.
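The specific humidity entry above defines a ratio: vapor density over the total density of the moist air (dry air plus vapor). A small numeric sketch of that ratio, using illustrative density values rather than any measured sounding:

```python
def specific_humidity(rho_vapor: float, rho_dry: float) -> float:
    """Specific humidity: water-vapor density divided by total moist-air
    density (dry air + vapor), returned in grams per kilogram."""
    q = rho_vapor / (rho_dry + rho_vapor)  # dimensionless (kg/kg)
    return q * 1000.0  # express in g/kg, as glossaries commonly do

# Example densities in kg/m^3 (illustrative values only)
q = specific_humidity(rho_vapor=0.010, rho_dry=1.190)
print(round(q, 2))  # 8.33 g/kg
```

As the glossary notes, this value stays constant for a parcel unless vapor is actually added or removed, which is why it is preferred over relative humidity for tracking moisture in a moving parcel.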
This essay has been submitted by a student. Speaking is considered one of the most difficult skills in English learning. As one of the productive skills, speaking is a foundation for developing the other language skills. For a long time, the teaching of English in China has focused mainly on reading and writing in the early stages of learning and neglected listening and speaking. Many teachers are puzzled by a familiar situation in the English class: many students can get a good mark in an English test, yet not all of them perform well in listening and speaking. But the primary function of language is interaction and communication. So how to improve learners' oral communicative competence is a significant problem that every language teacher has to face, especially with 12- to 15-year-old junior middle school students, for whom oral English matters greatly. In China, oral English learning has long been a weak point, so it is necessary to investigate the factors that may obstruct or enhance oral English acquisition. Teachers reflect on what was wrong with their teaching and try to find solutions to these problems, and it is obvious that learners do not get enough effective listening and speaking practice organized in a scientific way. Through our research, we know that students are strongly influenced by the affective learning strategies. This article therefore starts from the affective learning strategies, analyses the relations between learners' oral English ability and those strategies, and finally gives some suggestions for English learners in junior middle school. With the rapid development of society, frequent communication among nations, and the rapid growth of international trade, more and more people proficient in English will be needed.
Teaching students to become excellent users of English is therefore an important and difficult task. In recent years, more and more scholars and researchers have paid attention to learning methods as a way to improve learning ability and achievement. In my own school and university experience, I found that students' learning ability and achievement have much to do with language learning strategies, especially the affective learning strategies. As we know, language learning strategies are something both teachers and students should understand in their teaching and studying. Looking back on my middle school years, most of my English teachers did not view learning strategies as a priority, and even the few who did care about them knew little about how to apply them in their teaching or how to use them to help students learn English. For this reason, I set out to do some research and write an article studying the learning strategies. In the past decades, much progress has been made in English teaching in China, but some problems remain. One of them is that, in spite of consistent practice and hard work, many junior high school students cannot use English properly after three years of learning; their oral English in particular is very poor. They still use old learning methods and are passive in English learning. Although teachers constantly urge their students to change learning strategies, this embarrassing situation has not changed. There is another phenomenon as well: many students can do well in reading and in examinations, but when they are asked to give a speech or do oral exercises, they simply cannot open their mouths; it seems that something is stuck in their throats. Neither teachers nor students know how to solve this problem, so they do not know how to improve speaking ability.
1.2 Theoretical significance

This article mainly discusses the influence of the affective learning strategies on English speaking. Before we turn to those points, let us consider the theoretical significance of studying learning strategies and oral communication.

1.2.1 The importance of studying learning strategies

It is meaningful and important for us to learn how to employ efficient methods in English learning. First, autonomous learning is the ultimate goal of English teaching, and one of the most important ways to achieve it is to motivate students to develop their own thinking strategies and learning strategies. In junior high schools, because of traditional teaching methods, students cannot develop their own learning strategies. Therefore, it is becoming more and more important to study how to help students develop and use efficient learning strategies. An old proverb tells us what to do in English teaching: "Give a man a fish and he will eat it up for a day, but teach him how to fish and he will have fish to eat." Helping students develop their own affective learning strategies is just like teaching them how to fish. So in English teaching it is very important to teach students how to develop learning strategies. If they master the ways to develop learning strategies and use them freely and correctly, students can not only improve their English quickly but also strengthen their sense of responsibility for learning English.

1.2.2 The importance of oral communication

In daily life most people speak more than they write, so speaking is fundamental to human communication. Many students equate being able to speak a language with knowing the language, and therefore view learning the language as learning how to speak it; as Nunan (1991) wrote, "success is measured in terms of the ability to carry out a conversation in the target language".
Therefore, if students do not learn how to speak, or do not get any opportunity to speak in the classroom, they may soon become demotivated and lose interest in learning. With China's entry into the WTO and its successful bid to host Expo 2010 in Shanghai, the need for proficient English speakers is surely increasing, which means more opportunities for those who can speak fluent English in their own fields. To meet this challenge and seize the opportunity, students need not only a solid knowledge of English reading and writing but also the ability to communicate orally with foreigners in English. Improving students' oral English is therefore becoming an important task.

2. A General Review of Affective Learning Strategies

In September 2000, the new English Course Standard for the Basic Education Stage was issued and tried out. It differs greatly from the past syllabus: the teaching content and goals of the course standard include skills, knowledge, culture, affective strategies, and so on. Both The Syllabus for Junior Middle School English Course of the Nine-year Full-time Compulsory Education (Revised) and The Syllabus for Full-time Senior Middle School English Course mention helping students develop effective English learning strategies as a teaching goal. The problem is that proper affective learning strategies have not been organized in teaching and learning practice. So a brief review of recent research on the affective learning strategies by foreign and Chinese applied linguists should come first, starting from the following aspects.

2.1 The definition of affective learning strategies

Affective strategies concern the ways in which learners interact with other learners and native speakers, or take control of their own feelings about language learning. Examples of such strategies are cooperation and questioning for clarification.
The term affective refers to emotions, attitudes, motivations, and values. It is impossible to overstate the importance of the affective factors influencing language learning, and language learners can gain control over these factors through affective strategies. "The affective domain is impossible to describe within definable limits," according to H. Douglas Brown; it spreads out like a fine-spun net, encompassing such concepts as self-esteem, attitudes, motivation, anxiety, culture shock, inhibition, risk taking, and tolerance for ambiguity. The affective side of the learner is probably one of the biggest influences on language learning success or failure. Good language learners are often those who know how to control their emotions and attitudes about learning. Negative feelings can stunt progress, even for the rare learner who fully understands all the technical aspects of how to learn a new language. On the other hand, positive emotions and attitudes can make language learning far more effective and enjoyable. Teachers can exert a tremendous influence over the emotional atmosphere of the classroom in three different ways: by changing the social structure of the classroom to give students more responsibility, by providing increased amounts of naturalistic communication, and by teaching learners to use affective strategies. Self-esteem is one of the primary affective elements. It is a self-judgment of worth or value, based on a feeling of efficacy, a sense of interacting effectively with one's own environment. Low self-esteem can be detected through negative self-talk, like "Boy, am I a blockhead! I embarrassed myself again in front of the class." The three affective strategies related to self-encouragement help learners to counter such negativity. A certain amount of anxiety sometimes helps learners to reach their peak performance levels, but too much anxiety blocks language learning.
Harmful anxiety presents itself in many guises: worry, self-doubt, frustration, helplessness, insecurity, fear, and physical symptoms. Tolerance of ambiguity, that is, the acceptance of confusing situations, may be related to willingness to take risks (and also to the reduction of both inhibition and anxiety). Moderate tolerance for ambiguity, like moderate risk taking, is probably the most desirable situation. Learners who are moderately tolerant of ambiguity tend to be open-minded in dealing with confusing facts and events, which are part of learning a new language. In contrast, learners with low ambiguity tolerance, wanting to categorize and compartmentalize too soon, have a hard time dealing with unclear facts. Again, self-encouragement and anxiety-reducing strategies help learners cope with ambiguity in language learning.

2.2 Classification of affective learning strategies

There are two kinds of classifications, Chamot and O'Malley's and Oxford's. (a) Chamot and O'Malley (1990) recognized three affective/social strategies: cooperation, questions for clarification, and self-talk. (b) Oxford (1990), by contrast, gave more detailed items: lowering your anxiety, encouraging yourself, and taking your emotional temperature for affective strategies; and asking questions, cooperating with others, and empathizing with others for social strategies. In this paper, I mainly discuss Oxford's classification of the affective strategies, as shown in Figure 1:

Affective strategies
A. Lowering your anxiety (using progressive relaxation, deep breathing, or meditation; using music; using laughter)
B. Encouraging yourself (making positive statements, taking risks wisely, rewarding yourself)
C. Taking your emotional temperature (listening to your body, using a checklist, writing a language learning diary)

2.2.1 Lowering your anxiety

Three anxiety-reducing strategies are listed here. Each has a physical component and a mental component.
Firstly, using progressive relaxation, deep breathing, or meditation: use the technique of alternately tensing and relaxing all of the major muscle groups in the body, as well as the muscles in the neck and face, in order to relax; or the technique of breathing deeply from the diaphragm; or the technique of meditating by focusing on a mental image or sound. Secondly, using music: listen to soothing music, such as a classical concert, as a way to relax. Thirdly, using laughter: use laughter to relax by watching a funny movie, reading a humorous book, listening to jokes, and so on.

2.2.2 Encouraging yourself

This set of three strategies is often forgotten by language learners, especially those who expect encouragement mainly from other people and do not realize they can provide their own. However, the most potent encouragement, and the only encouragement available in many independent language learning situations, may come from inside the learner. Self-encouragement includes saying supportive things to oneself, pushing oneself to take risks wisely, and providing rewards. Making positive statements: say or write positive statements to oneself in order to feel more confident in learning the new language. Taking risks wisely: push oneself to take risks in language learning situations, even though there is a chance of making a mistake or looking foolish; risks must be tempered with good judgment. Rewarding yourself: give oneself a valuable reward for a particularly good performance in the new language.

2.2.3 Taking your emotional temperature

The four strategies in this set help learners to assess their feelings, motivations, and attitudes and, in many cases, to relate them to language tasks. Unless learners know how they are feeling and why they are feeling that way, they are less able to control their affective side. The strategies in this set are particularly helpful for discerning negative attitudes and emotions that impede language learning progress.
Listening to your body: paying attention to signals given by the body. These signals may be negative, reflecting stress, tension, worry, fear, and anger; or they may be positive, indicating happiness, interest, calmness, and pleasure. Using a checklist: use a checklist to discover feelings, attitudes, and motivations concerning language learning in general, as well as concerning specific language tasks. Writing a language learning diary: write a diary or journal to keep track of events and feelings in the process of learning a new language. Discussing your feelings with someone else: talk with another person (teacher, friend, relative) to discover and express feelings about language learning.

3. The Influence of Affective Learning Strategies on Speaking

This article focuses on the influence of the affective learning strategies on oral English for junior high school students, which is also the research question. We want to find out how these strategies influence junior high school students' oral English, and then, based on what we find, make some suggestions. The following paragraphs discuss the influence of the three different affective strategies on speaking in detail.

3.1 The influence of lowering your anxiety

As we all know, in recent years more and more foreign language researchers have taken learner variables, especially affective factors, into consideration. "Among the affective factors influencing language learning, especially oral English speaking, anxiety ranks high". "Psychologically speaking, anxiety refers to the intense and enduring negative feeling caused by vague and dangerous stimuli from the outside as well as the unpleasant emotional experience involved, such as anticipation, irritation and fear".
Language anxiety is the fear or apprehension that occurs when a learner is expected to perform in the second language; it is associated with feelings such as uneasiness, frustration, self-doubt, apprehension, and tension. In my own experience, my friends, my classmates, and I have all had anxiety problems; when we participate in the English corner or give a speech, anxiety keeps us from carrying on. Many other similar cases can be found, so lowering your anxiety becomes very important. Lowering your anxiety can help you accomplish your learning tasks more calmly and more efficiently.

3.2 The influence of encouraging yourself

Confidence, also called self-confidence, is a kind of optimistic emotion whereby language learners firmly believe they can overcome troubles to gain success. It is also an active, upward emotional inclination, a belief that one's real value can be respected by other people, the collective, and society. Confidence is an important quality formed in the process of people's growth and success, built on the basis of correct self-cognition. Building confidence means evaluating oneself correctly, looking for one's merits, and affirming one's capability. People often say that it is important to know oneself wisely. This "wisdom" lies not only in seeing one's merits but also in analyzing one's shortcomings. In fact, everyone owns great potential, and everyone possesses advantages and strong points. If we can evaluate ourselves objectively and, on the basis of knowing our disadvantages and weak points, encourage ourselves, a strong sense of self-esteem and confidence can be stimulated. Confidence is an active affective factor. Foreign language learners who want to succeed should possess this major quality, for it often plays a decisive role in foreign language learning.
Confidence acts like a catalyst for a foreign language learner's competence: it can release all of the learner's potential and bring it into play. However, foreign language learners who lack confidence often doubt their own competence. They often show negative weakness, or lack stability and initiative. They should change their attitudes toward foreign language learning and build enough confidence. As a matter of fact, encouraging yourself is a very important way to gain confidence, so we can see how significant a role encouraging yourself plays in improving learners' speaking ability.

3.3 The influence of taking your emotional temperature

Emotion, as we know, plays a very important role in our lives as well as in our language learning. Good emotions can help you lead a happy life, and they can also help you do an excellent job when you are communicating with others or making a speech in public. On the contrary, bad emotions help you not at all and may ruin you instead. This strategy, taking your emotional temperature, helps learners to assess their feelings, motivations, and attitudes and, in many cases, to relate them to language tasks. Unless learners know how they are feeling and why they are feeling that way, they are less able to control their affective side. The strategies in this set are particularly helpful for discerning negative attitudes and emotions that impede language learning progress, especially oral English learning progress. Through this set of strategies, English learners can improve their speaking ability in a short time.

4. Findings and Analysis

In order to make this article more persuasive and authoritative, I administered a questionnaire and analysed the results. The aim of the findings and analysis is to identify the factors which impede junior school students' oral English ability, and then, according to what we have found, to give some useful and effective suggestions.
4.1 Data collection

30 questionnaires were distributed and 27 were returned. All incomplete questionnaires were discarded, because the results could not be described and analyzed unless all items were answered. In total, the data from 27 fully completed questionnaires were analyzed. All the questions were designed according to the affective strategies discussed in this thesis.

4.2 Data analysis

Based on the questionnaires, I carried out a data analysis. I analyzed the proportion of students who chose each option, and also the proportion who had speaking obstacles and who failed to adopt useful ways of training their affective strategies. These are shown in the following two tables.

4.2.1 Application of affective learning strategies in a junior middle school

The table below shows that, in general, students sometimes use the affective strategies, although the level of use differs by strategy category. The capital letters A, B, C, D, and E mean, in order: "I never or almost never do that", "I usually don't do that", "I sometimes do that", "I usually do that", and "I always or almost always do that". Items 6 to 16 refer to the questions about the affective strategies. The figures in the blanks are the percentages of students who chose options A, B, C, D, and E. The appendix at the end of this article gives a more detailed explanation. From the table, a conclusion can be drawn that almost half of the students feel nervous or shy when they speak English; more importantly, 51.9% of them cannot get rid of their nervousness, and 85.2% of them face these affective factors by themselves, seldom talking about them with others. Furthermore, 70.4% of the students do not use music to lower their anxiety before they give a speech, and when it comes to writing English diaries the situation is even more serious.
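Percentages such as 51.9% and 85.2% come from tallying option counts against the 27 completed questionnaires (e.g. 14/27 ≈ 51.9%, 23/27 ≈ 85.2%). The calculation can be sketched as follows; the response letters below are invented for illustration and are not the actual survey data:

```python
from collections import Counter

def option_percentages(responses, options="ABCDE"):
    """Return the percentage of respondents choosing each option,
    rounded to one decimal place as in the table (e.g. 51.9, 85.2)."""
    counts = Counter(responses)
    total = len(responses)
    return {opt: round(100.0 * counts[opt] / total, 1) for opt in options}

# Hypothetical answers from 27 students to one questionnaire item
item_responses = list("AABBBCCCCCCCCCCDDDDDDDDEEEE")
print(option_percentages(item_responses))
# {'A': 7.4, 'B': 11.1, 'C': 37.0, 'D': 29.6, 'E': 14.8}
```

One tally like this per item (items 6 to 16) would reproduce the table of percentages described above.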
In all, the reason this phenomenon occurs is that the students have little knowledge of the affective learning strategies. If they are to improve their speaking ability, teachers should help them gain a comprehensive understanding of these strategies and apply them to their study. So the affective strategies deserve attention. Beyond learners' individual differences, social conditions and learning tasks also greatly influence and restrict students' learning motivation and their application of learning strategies. Middle school students in our country need better conditions for foreign language learning, including richer comprehensible language input, especially oral input; they also need more chances to practice and use the foreign language. An ancient proverb says: "Give a man a fish and he eats for a day. Teach him how to fish and he eats for a lifetime." Guiding students to develop effective English learning strategies is an approach that teaches them "how to fish" so that they can "eat for a lifetime". It is therefore very important to teach students learning approaches and learning strategies in order to develop their foreign language learning ability. If students master strategy knowledge and use the strategies freely and correctly, they can not only accelerate their foreign language learning but also strengthen their sense of responsibility, autonomy, independence, self-guidance, and self-efficacy. Their inner learning motivation is then aroused, so they can play a facilitative role in the learning process and accelerate English acquisition. Based on the above analysis and discussion, I offer the following suggestions.

5.1 Improving speaking ability

This article has presented the definition and the classification of the affective strategies in the first few parts.
We know the functions of these affective strategies, but that is not enough: if we want to improve our speaking ability, we should know how to apply them to speaking. The following parts discuss this in detail. a) As mentioned above, anxiety is a major negative factor impeding English learners' speaking, so we must lower our anxiety before we hold a conversation. There are some ways to help us do that: use progressive relaxation, deep breathing, or meditation, music, and laughter. When we are going to make a speech or do oral exercises, we can use these strategies. b) Encouraging yourself is also a very important strategy for improving your speaking ability, and there are three ways to do it. When we are studying, we can make positive statements to remind ourselves that we can do it and can accomplish the tasks successfully. Here are some examples: "I understand a lot more of what is said to me now." "I am confident and secure about my progress." "I can get the general meaning without knowing every word." Also, when we train our speaking, we can take some risks wisely. Perhaps we always do the easy speaking tasks, which may no longer be effective for us, so we can challenge ourselves and attempt some difficult ones. The last way is to give yourself a reward when you achieve something; but remember that rewards need not be tangible or visible. They can also come from the very act of doing a good job, and students can learn to relish their own good performance. c) Taking your emotional temperature is one of the affective strategies. This set of strategies for affective self-assessment involves getting in touch with feelings, attitudes, and motivations through a variety of means. Language learners need to be in touch with these affective aspects so that they can begin to exert some control over them.
The strategies described here enable learners to notice their emotions, avert negative ones, and make the most of positive ones. When learners use this set of strategies they should take the following aspects into consideration. First, they should listen to their bodies: one of the simplest but most often ignored strategies for emotional self-assessment is paying attention to what the body says. Second, use a checklist: a checklist helps learners, in a more structured way, to ask themselves questions about their own emotional state, both in general and with regard to specific language tasks and skills. Third, discuss your feelings with someone else.

5.2 Training affective learning strategies

At the beginning of this article, the importance of studying affective learning strategies was mentioned; accordingly, we know it is important and necessary to study them, so training in affective learning strategies is a must.

5.2.1 Goals of learning strategy training

The goal of strategy training is to teach students how, when, and why strategies can be used to facilitate their efforts at learning and using a foreign language. By teaching students how to develop their own individualized strategy systems, strategy training is intended to help students explore ways in which they can learn the target language more effectively, as well as to encourage students to self-evaluate and self-direct their learning. The first step is to help learners recognize which strategies they have already used, and then to develop a wide range of strategies, so that they can select appropriate and effective strategies within the context of particular tasks. Carrell (1983) emphasizes that teachers need to be explicit about what the strategy consists of; how, when, and why it might be used; and how its effectiveness can be evaluated.
A further goal of strategy training is to promote learners' autonomy and self-direction by allowing students to choose their own strategies without continued prompting from the language teacher. Learners should be able to monitor and evaluate the relative effectiveness of their strategy use and more fully develop their problem-solving skills. Strategy training can thus be used to help learners achieve learning autonomy as well as linguistic autonomy. Students need to know what their abilities are, how much progress they are making, and what they can do with the skills they have acquired; without such knowledge, it will not be easy for them to learn efficiently. Strategy training is predicated on the assumption that if learners are conscious about and become responsible for the selection, use, and evaluation of their learning strategies, they will become more successful language learners by improving their use of classroom time, completing homework assignments and in-class language tasks more efficiently, becoming more aware of their individual learning needs, taking more responsibility for their own language learning, and enhancing their use of the target language outside class. In other words, the ultimate goal of strategy training is to empower students by allowing them to take control of the language learning process. 5.2.2 Models for affective learning strategy training Before discussing the models for affective learning strategy training, I want to emphasize that the learning environment is very important. When students meet difficult problems, they should be able to turn to advanced teaching facilities; this is not just a good way to study but also a very good learning strategy, so schools should take it into consideration. By now, at least three different instructional frameworks have been identified: the Pearson and Dole model, the Oxford model, and the Chamot and O'Malley model.
They have been designed to raise student awareness of the purpose and rationale of affective learning strategy use, to give students opportunities to practice the strategies they are being taught, and to help them understand how to use the strategies in new learning contexts. Each of the three approaches contains the necessary components of explicit strategy training: it emphasizes discussion of the use and value of strategies, encourages conscious and purposeful strategy use and the transfer of those strategies to other contexts, and allows students to monitor their performance and evaluate the effectiveness of the strategies they are using. (1) Pearson and Dole model The first approach to strategy training was suggested by Pearson and Dole (1987) with reference to first language learning, but it can be applied to the study of second and foreign languages as well. This model targets isolated strategies by including explicit modeling and explanation of the benefits of applying affective strategies, extensive functional practice with the strategy, and then an opportunity to transfer the strategy to new learning contexts. Students may better understand the applications of the various strategies if they are first modeled by the teacher and then practiced individually. After a set of affective strategies has been introduced and practiced, the teacher can further encourage independent strategy use and promote learner autonomy by encouraging learners to take responsibility for the selection, use, and evaluation of the affective strategies they have been taught. Pearson and Dole's sequence includes: 1. Initial modeling of the strategy by the teacher, with direct explanation of the strategy's use and importance; 2. Guided practice with the strategy; 3. Consolidation, in which teachers help students identify the strategy and decide when it might be used; 4. Independent practice with the strategy; and 5. Application of the strategy to new tasks.
(2) Oxford model As for the second approach to strategy training, Oxford et al. (1990) outline a useful sequence for introducing affective strategies that emphasizes explicit strategy awareness, discussion of the benefits of strategy use, and functional, contextualized practice with the strategies. This sequence is not prescriptive about which strategies learners are supposed to use, but rather descriptive of the various strategies they could use for a broad range of learning tasks. The sequence they propose is the following: 1. Ask learners to do a language activity without any strategy training; 2. Have them discuss how they did it, praise any useful strategies and self-directed attitudes they mention, and ask them to reflect on how the strategies they selected may have facilitated the learning process; 3. Suggest and demonstrate other helpful strategies, mentioning the need for greater self-direction and the expected benefits, and making sure that students are aware of the rationale for strategy use. Learners can also be asked to identify strategies they do not currently use and to consider ways they could include new strategies in their learning repertoires; 4. Allow learners plenty of time to practice the new strategies with language tasks; 5. Show how the strategies can be transferred to other tasks; 6. Provide practice using the techniques with new tasks, and allow learners to make choices about the affective strategies they will use to complete the language learning tasks; 7. Help students understand how to evaluate the success of their strategy use and to gauge their progress as more responsible and self-directed learners. (3) Chamot and O'Malley model With regard to the third approach to strategy training, Chamot and O'Malley's (1994) sequence is especially useful after students have already practiced applying a broad range of strategies in a variety of contexts.
Their approach to helping students complete language learning tasks can be described as a four-stage problem-solving process. 1. Planning: The instructor presents the students with a language task and explains the rationale behind it. Students are then asked to plan their own approaches to the task and choose strategies that they think will facilitate its completion. For example, they can set goals for the task, activate prior knowledge by recalling their approaches to similar tasks, predict potential difficulties, and selectively attend to elements of language input/output. 2. Monitoring: During the task, the students are asked to self-monitor their performance by paying attention to their strategy use and checking comprehension. For example, they can use imagery, personalize the language task by relating information to background knowledge, reduce anxiety with positive self-talk, and cooperate with peers for practice opportunities. 3. Problem-solving: As they encounter difficulties, the students are expected to find their own solutions. For example, they can draw inferences, ask for clarification, and compensate for lack of target language knowledge by using communication strategies such as substitution or paraphrase. 4. Evaluation: After the task has been completed, the learners are given time to debrief the activity, i.e. to evaluate the effectiveness of the strategies they used during it. They can also be given time to verify their predictions, assess whether their initial goals were met, summarize their performance, and reflect on how they could transfer their strategies to similar language tasks or across language skills. The above frameworks can be used in various combinations to complement each other and add variety to a strategy training program. These insightful frameworks directly helped the present author form his framework for carrying out a listening strategy training study.
This paper has reviewed the theories concerning affective learning strategies, the definition and classification of affective strategies, the importance of oral communication, and the influence of three sets of affective strategies on speaking. It has also discussed strategy training; based on the models above, this study explored the obstacles junior high school students face in English speaking. There is a gap between students in forming learning strategies, especially between the seventh and ninth grades, which we should pay attention to: overall, the learning strategy level of ninth graders is much lower than that of seventh graders, and teachers should bear this in mind when training their students. In the end, the following findings are concluded: (1) Teachers of English in junior high schools should know that speaking is more important to their students than the other three abilities: listening, reading, and writing. (2) The oral English learning strategies used by junior high school students can be seen clearly in this thesis, and obstacles were identified through those strategies; through this research the author found that affective factors rank high. At the end of this article, the author also gives some suggestions for junior high school students, through which the author hopes they can overcome these obstacles and rapidly improve their speaking ability. However, the author is fully aware that there are a number of limitations to the study of speaking strategies in this thesis, and what has been done is far from sufficient. It is hoped that the thesis can provide some useful information to both teachers and students.
This graphic does a great job of depicting race and ethnicity as distinct concepts. The orange hash marks above the racial groupings indicate the proportion of people in the racial categories who are also Hispanic by ethnicity. I made this to correct the graphics that lump race and ethnicity together (and – bafflingly – they still add up to 100%). Race and ethnicity are not the same. Race refers to differences between people that include physical differences like skin color, hair texture, and the shape of eyelids, though the physical characteristics that add up to a social decision to consider person A a member of racial group 1 can change over time. Irish and Italian people in America used to be considered separate racial groups, based in part on skin color distinctions that most Americans could no longer make. What does "swarthy" look like anyway? Ethnicity – a closely related concept – refers to shared cultural traits like language, religion, beliefs, and foodways. Often, people who are in a racial group also share an ethnicity, but this certainly isn't always true. American Indians are considered a racial group, but there are hundreds and hundreds of distinct tribes in the US, and their religions, beliefs, foodways, and languages vary from tribe to tribe. Hispanics in America often share common language(s) (Spanish and/or English), but they may not share the same race. At the moment, most Hispanics in America self-identify as white. I have often wondered if, when I'm 60, the ethnic boundaries currently describing Hispanic people will have faded away, much like the boundaries describing Italian and Irish folks faded away, becoming more of a symbolic ethnicity that can become more important during the holidays and less important during day-to-day life. What needs work The elephant on the blog is that I have been on hiatus since February. I'm writing my dissertation and I plan to stay on hiatus through the spring to finish it.
My decision may seem irresponsible from the perspective of regular readers, and I apologize for my absence. As for the graphic, it was designed to run along the bottom of a two-page spread, so it does not work well here on the blog. If anyone wants a higher-resolution version to use in class or in a PowerPoint, shoot me an email and I'll send it.

This is a quiet story, the kind of thing that may or may not be picked up by a major national newspaper like the New York Times. Rural America is often waved as a political flag by politicians, but there is not much coverage of its day-to-day life. The 2010 Census clearly shows: "The Hispanic population in the seven Great Plains states shown below has increased 75 percent, while the overall population has increased just 7 percent." What is equally odd is that this story is running two graphics – the set of maps above and the one below – that more or less depict the same thing. I salivate over things like this because it gives me a chance to compare two different graphical interpretations of the same dataset. The two maps above include a depiction of the change in the white population as a piece of contextual information to help explain where populations are growing or shrinking overall. These two maps show that 1) in many cases, cities/towns that have experienced growth in their Hispanic populations also received increases in their white populations (hence, there was overall population growth), but that 2) there are some smaller areas that are experiencing growth in their Hispanic populations and declines in their white populations. The second map shows only the growth in the Hispanic population, without providing context about which cities are also experiencing growth in the white population. Looking at the purple map below, it's hard to tell where cities are growing overall and where they are only seeing increases in the Hispanic population, which is a fairly important piece of information.
What needs work For the side-by-side maps, the empty and colored circles work well in the rural areas but get confusing in the metropolitan areas. For instance, look at Minneapolis/St. Paul. Are the two central city counties – Hennepin and Ramsey – losing white populations to the suburbs? That is kind of what it looks like but the graphic is not clear enough to show that level of detail. But at least the two orange maps allow me to ask this question. The purple map is too general to even open up that line of critical analysis. This next point is not a critique of the graphics, but a direction for new research. The graphics suggest, and the accompanying article affirms, that Hispanic newcomers are more likely to move into rural areas than are white people. Why is that? Is it easier to create a sense of community in a smaller area, something that newcomers to the area appreciate? If that is part of the reason new people might choose smaller communities over larger ones, for how many years can we expect the newcomers to stay in rural America? Will they start to move into metro areas over time for the same reason that their white colleagues do? Are there any other minority groups moving into (or staying in) rural America? Here I am thinking about American black populations in southern states like Alabama, Mississippi, and Arkansas. Are those groups more likely to stay in rural places than their white neighbors? For that matter, what about white populations living in rural Appalachia. Are they staying put or are they moving into cities like Memphis, Nashville, and Lexington? How do things like educational attainment and income levels work their way into the geographies of urban migration? 
Pew Research has created a tidy series of interactive graphics to describe the demographic characteristics of American generational cohorts, from the Silent Generation (born 1928 – 1945) through the Boomers (born 1946 – 1964), Generation X (1965 – 1980) [this is a disputed age range – a more recent report from Pew suggests that Gen Xers were born from 1965 to 1976], and the Millennial Generation (born 1981+ [now defined as being born between 1977 and 1992]). The interactive graphics frame the data well. They offer the timeline above as contextual background and a graphic way to offer an impressionistic framework for understanding generational change. Then users can flip back and forth between comparing each generation to another along a range of variables – labor force participation, education, household income, marital status – while they were in the 18-29 year old age group OR by looking at where each generation is now. The ability to interact makes the presentation extremely illustrative and pedagogically meaningful. It is much easier to understand patterns that are changing over time versus patterns that are life course specific. For instance, marital trends have been hard to talk about because the age at first marriage moves up over time, so it's hard to figure out at what age we can expect that people will have gotten married if they are ever going to do so (I tried looking at marriage here). What I like about the Pew Research graphics is that they show us not only what the generations looked like when they were between 18 and 29 years old (above) but also what they look like now (below). Not only does it become obvious how many millennials are choosing to remain unmarried (either until they are quite a bit older or forever – hard to say, because the oldest millennials are still in their 30s), but it also becomes clear that in addition to divorce, widowhood is a major contributor to the end of marriage.
Keep that in mind: somewhere around half of all marriages end in divorce so that means the other half ends in death. I would guess that a vanishingly small number of couples die simultaneously which means there are quite a few single older folks who did not choose to be single (of course, even if they didn’t choose to outlive their spouses, they may prefer widowhood to other alternatives, especially if their spouse had a long illness). Labor force participation Here’s another set of “when they were young” vs. “where they are now” comparisons, this time on labor force participation. It appears that the recession has walloped the youngest, least experienced workers the hardest. They have the highest unemployment rate AND the highest rate of educational attainment (and school loan debt), which leaves them much worse off as they start out than their parents were in the Boomer Generation. Even if their parents were in Generation X, they were still better off than today’s 20-something Millennials. What needs work – Are generations meaningful? My first minor complaint is that the graphic does not make clear *exactly* what “when they were young” means. If we look at the first graphic in the series, the timeline, it appears that “when they were young” was measured when each generation was between 18 and 29 years old. I hope that is the case. I might have had an asterisk somewhere explaining that “when they were young = when they were 18-29 years old”. The concept of generations, in my opinion, is a head-scratcher. The idea that I had to come update this blog because the definition Pew was using to define Millennials and GenXers changed (without explanation that I could find) adds to my initial skepticism about the analytical purchase of generational categories. What is the analytical purchase of looking at generations – strictly birth-year delimited groups that supposedly share a greater internal coherence than other affinal or ascribed statuses we might imagine? 
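One way to make this skepticism concrete is to compare categorical generation labels with a simple continuous measure of age distance. A toy sketch in Python (the birth years are hypothetical and chosen only for illustration; the generation cutoffs follow the Pew definitions quoted above):

```python
# Toy comparison of two ways to measure cohort similarity.
# Cutoffs follow the Pew ranges quoted earlier:
# Silent (to 1945), Boomers (1946-64), Gen X (1965-80), Millennials (1981+).

def generation(birth_year):
    """Categorical binning: similarity is all-or-nothing."""
    if birth_year <= 1945:
        return "Silent"
    if birth_year <= 1964:
        return "Boomer"
    if birth_year <= 1980:
        return "Gen X"
    return "Millennial"

def age_distance(a, b):
    """Continuous alternative: similarity decays smoothly with the gap."""
    return abs(a - b)

# Two people born a year apart land in different generations...
print(generation(1964), generation(1965))   # Boomer Gen X
# ...yet are nearly identical on the continuous measure,
print(age_distance(1964, 1965))             # 1
# while two "fellow Boomers" can be 18 birth-years apart.
print(age_distance(1946, 1964))             # 18
```

In a real analysis, the pairwise distances would feed a similarity or distance matrix in a regression or matching model, rather than a categorical generation dummy.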
If we believe that social, technological, and most other kinds of change happen over time, of course there are going to be measurable differences between one generation and the next. I imagine, though I have never seen the comparison, that if social scientists split people into 10- or 20-year pools based on their birth years they would end up with the same sorts of results. So why not think of generations as even units? And is it clear that the meaningful changes happen in 20-year cycles? Or would 10-year age cohorts also work? The real trickiness comes in when we think about individuals. Say someone is, like myself, born in a year on the border between one generation and the next. Am I going to be just as much like a person born firmly in the middle of my cohort as a person on the far end of it? Or will people like me have about as much in common with the people about 8 years above and below us, but less in common with the people 15 years older than us who are considered to be in the same generation, and thus assumed to have many similar tendencies, life chances, and characteristics? A better way to measure the cohort effect would seem to be to consider each individual's age distance from each other individual in the sample – the closer we are in age, the more similar we could be expected to be with respect to things like labor force participation and educational attainment. Large structural realities like recessions are going to hit us all when we have roughly similar amounts of work force experience, impacting us similarly (though someone 10 years older and still officially in the same generation will probably fare much better). Since it is computationally possible to run models that take the actual age distances of individuals in the sample into account, I don't understand the analytical purchase of the concept of generations.

Mapping the Measure of America is a social science project that deliberately includes information graphics as a communication mechanism.
In fact, it is the primary tool for communicating if we assume that more people will visit the (free) website than buy the book. And even the book is quite infographic-dependent. I support this turn towards the visual. I also support the idea that they hired a graphic designer to work with them. Often, social scientists do not do well when left to their own under-developed graphic design skill sets. Fair enough. The website presents a unified view of the three images above. I couldn't get them to fit in the 600-pixel-width format, so I presented them one at a time. I encourage you to go to the website, because one of the greatest strengths of this approach is the interactivity and layering. I happen to have picked Massachusetts, but each state plus DC has its own graphics available. There are other charts and whatnot available, but I think that this set of graphics (which you see all at once) is the strongest. What needs work Maps. Maps are too often used. Here's why I think maps are a problem. Look, folks, political boundaries are meaningful when it comes to making policy or otherwise dealing with state-based funding. And that's about it. Political boundaries occasionally coincide with geographical boundaries, but not always. Geographical boundaries are meaningful for some things – life opportunities may be based on natural resources or on historical benefits accruing to natural resources. But political boundaries and maps are often not all that useful, because they imply that the key divisions are the divisions between states or counties or neighborhoods. Like I said, sometimes this is true because funding tends to be like the paint bucket tool – it flows right up to the boundaries and not beyond, even if the boundaries are arbitrary or oddly shaped. But where the issues are not heavily dependent on funding, thinking in terms of political boundaries makes it harder to see patterns that are organized along other axes.
For instance, I wonder what would have happened if some of these categories – education, longevity, income – had been split between urban, suburban, and rural areas. Or urban and ex-urban areas, if you prefer that perspective on the world as we know it. In the end, I think the title is both accurate and disappointing: "Mapping the Measure of America". Figuring out how to do information graphics well means figuring out which variables are the key variables. In this case, it seems that the graphic options might have determined the display of the information. Maps are easy enough – they appear to offer a comparison between my local and other people's local. Those kinds of comparisons offer readers an easy way to access the information, because everyone is from somewhere and there is a tendency to want to compare self to others. But ask yourself this: to what degree do you feel that state-level information is a reflection of yourself? Do you see yourself in your state?

This graphic is a bit too cartoonish for my taste, but it does a good job of illustrating a health care gap that went overlooked even during the health care debate. I figured Halloween – a holiday whose commercialization revolves around candy – might be a good time to post the dental health care graphics developed over at the GOOD magazine transparency blog. In the spirit of full disclosure: I was a dental assistant for a summer. The numbers here are accurate and have very real consequences. I used to see kids who did not know (they had no idea) that drinking soda was bad for their teeth. These kids sometimes had 7 and 8 cavities discovered in one check-up. For older people, dry mouth would lead them to suck on lozenges or hard candy all day, and they'd end up with a bunch of cavities, too. Bathing the mouth in sugar is bad. Combining the sugar with the etching acid in soda is even worse.
Once a tooth has a cavity, it needs to be filled, or the bacteria causing the decay will continue to eat away at the tooth, eventually hitting the pulp in the middle. Once that happens, the person is usually in pain and needs a root canal. Even if they aren't in pain, they need to have the infected tissue removed (that's what a root canal treatment does) or the infection can spread, sometimes into the jaw bone. There is no way for the body to fight an infection in a tooth because the blood supply is too small for the standard immune responses. Dental decay progresses slowly. Kids lose their primary teeth, and any decay in those teeth goes with them. Therefore, it's not all that common to see teenagers needing root canals. But it does happen. Root canals are expensive: a lengthy procedure requiring multiple visits and a crown. Pricey stuff. BUT this process allows the tooth to be saved. Without dental insurance, folks sometimes opt for the cheaper extraction option. Once a tooth is extracted, that's it. It's gone. (Yes, there is the option of a dental implant, but that's even more expensive.) So a teenager who likes to suck on soda all day long and who may not be all that convinced about the benefits of flossing could end up losing teeth at a young age. I can tell you because I've seen it: a mouth without teeth is not a happy mouth. All those teeth tend to hold each other in place. Once some of them are extracted, the others can start to migrate. Extract some more and things get more interesting, and people start to build diets around soft foods. Eventually, once enough of them are extracted, the entire shape of the mouth flattens out – not even a denture can hang on to help the person eat. Unfortunately, poor dental health disproportionately impacts poor people, as these graphics demonstrate. But that disproportionate impact can double down: dental health is often seen as a sign of class status.
People with poor dental health have trouble getting good jobs, especially in a service economy. For what it's worth, I bet they also have more trouble in the dating/marriage market.

Why draw attention only to the fathers? Clearly there must be quite a few unmarried mothers out there as well. I hope this isn't suggesting that deciding to take a relationship into marriage is somehow only or primarily the man's responsibility. Both women and men have agency around the marital decision. It would be nice if cultural constructs supported equal opportunity for popping the question…but headlines that emphasize men's agency over women's aren't going to get us any closer to equality on that front. It's nice to see that this graph points out where definitions of racial categories change. It is also nice that it draws attention to the problem that many American children are being born into poverty, or at least into situations where resources are extremely constrained. In another graph elsewhere, the same group also reminds us that these births are largely NOT happening to teen parents. The other critical point is that out-of-wedlock births are on the rise even though birth rates for teen mothers are declining. If in the past it was possible to think that the problem was just teens having unprotected sex that led to accidental births, we can no longer be so sure that this is what is happening. Age at first sex is decreasing, which means that most of the people having children out of wedlock are capable of having sex without getting pregnant; they have probably been doing just that for years. Having children out of wedlock is best understood as a choice, then, not an accident. Any efforts to prevent child poverty are probably not going to be successful if they rest on sex ed or free condoms (though I personally believe those things are important for other reasons).
The American Heritage Foundation believes that if people would just get married, these kids wouldn’t be born into poverty. Others aren’t so sure it’s that simple. What needs work The problem with the write-up accompanying this chart is that it implies that the causal mechanism goes something like this: for whatever reason couples have children together but do not get married. The failure to get married means that these children will be far more likely to be raised in poor or impoverished conditions. For emphasis, I’ll restate: the parents’ failure to marry one another leads to children being raised in poverty. Now. Here’s what I have to say about the chart. First, if that is the message, why not depict the out-of-wedlock birth rate by poverty status, preferably poverty status prior to pregnancy? I’d settle for poverty status at some set time – like the child’s birth or first birthday, but that isn’t as good. I feel like showing these numbers by race is subtly racist, implying that race matters here when what really matters is poverty, at least according to the story that they are telling and the story that many marriage scholars care about. Yes, it is true that poverty and racial status (still) covary rather tightly in America, but if the story being told is about poverty, I’d like to see the chart address that directly rather than through the lens of race. Furthermore, if race DOES matter, where are Asians? American Indians? Moving away from the chart for a moment and getting back to the causal story, marriage researcher Andrew Cherlin finds that the causal arrow might go the other way. Being poor may be a critical factor in preventing folks from getting married. William Julius Wilson was an earlier proponent of this concept, especially with respect to poor African Americans. 
His work suggested that during and after the post-industrial decline in urban manufacturing jobs, African American men were systematically excluded from the work force, and this made them appear to be poor marital material. Cherlin's more recent work applies more broadly, not specifically to African American men, and bolsters the idea that marriage is something Americans of all backgrounds feel they shouldn't enter until they are economically comfortable. What 'comfortable' means varies a lot, but most people like to have steady full-time jobs; they like to be confident that they won't get evicted, that the heat or electricity will not be turned off, that they will have enough to eat. The more important question would be: why don't these assumptions apply to having children? Whereas getting married can represent an economic gain if you are marrying a working spouse, having children certainly does not (state subsidies do not cover the full cost of having children, no matter how little the children's parents make). Perhaps what we are faced with is people for whom getting married may not represent an economic gain. Marrying a person without a steady job could be more of a drain on your resources than staying single, whether or not you have kids.

I was looking around for a nice EU-contextualized graph showing Spain's unemployment rate. I found what you see above, which shows unemployment rates in other EU countries. That was one of my requirements – in the EU, economies are sort of local and then again not so local, so it's silly to look at one country without taking into account the others nearby. What we see, and what has continued since this graph's last data point in 2008, is that Spain has a notably high unemployment rate. News earlier this week put the current unemployment rate here (yes, I'm in Spain) at 19.7%.
Personal anecdotes with no scientific validity whatsoever

When I’m out on the street, I would say this appears to be true – every day is like a holiday! Well, not really. There are no parades or obvious drunkenness. But there are all sorts of young, able-bodied folks walking around, having a caña, getting on with life. People’s demeanors and attitudes do not, on their surface, suggest depression, destitution, or downtroddenness. Furthermore, I had the brazenness to open my American mouth and ask a Spaniard man I barely know what he thinks of Spain’s economic situation. He said that the official unemployment rate is not at all reflective of the actual rate because everyone is working under the table. That sort of reality, if it is true, would not be reflected in graphs like the ones above and below. If people are working under the table, I can’t imagine they have full-time positions, just judging by how many young, capable-looking people are on the street on weekdays.

What needs work

I don’t know about you, but I don’t like the gradient on the graphs. It seems superfluous. I would have lost the grey background and just gone with some rather straightforward area blocks (no lines between each bar in the graph). In simplifying that portion of the visual, I think there would have been space for more contextual data. I’m no economist, so I looked around to see what economists think of as smart ways to contextualize unemployment rates. I found this (which has a Spanish focus):

The story here is that – oh yes – we can see that unemployment rate bouncing right up. But we also see that Spaniards are saving more. This has been attributed to the expectation by Spaniards that they are going to be taking in their out-of-work sons, daughters, and assorted other relatives during this crisis. We would be shocked to have such a high savings rate in America.

What needs work

I am still trying to figure out what is going on in Spain.
At least as I perceive the general attitude, Spaniards appear to be prepared to weather this little ripple in the amazing growth of their prosperity over the last 60+ years by either working under the table (maybe) or leaning on family. Is an 18% savings rate meaningful in the context of a nearly 20% unemployment rate? Will this crisis simply introduce more inequality – those with stable jobs will go unscathed while those without steady employment sink lower than family are able to stoop to help them out? And if my man on the street is any kind of correct and the unmeasured economy is booming, how do we measure it? If you happen to have some expertise on any of these questions, please post to the comments.

What Terri Chiao and Deborah Grossberg Katz from Columbia University’s GSAPP design school have done is come up with a way to represent percentages using a flow-chart. Not only is it creative in the sense that this sort of data rarely gets displayed this way, but it helps turn the data into a narrative. In order to figure it out, the viewer quite literally has to reconstruct a story that sounded something like this in my head: “The population they are concerned about has 40% of people already experiencing homelessness with another 60% at risk of homelessness. The folks who are already homeless are the only ones living on the street, but really, 75% of already homeless people live in shelters. As for the at-risk-of-homelessness people, 60% live with family or friends. Twenty-five percent of the at-risk population owns their homes … why, then, are they at risk of homelessness? Both the at-risk and already homeless groups have far more families than single folks. And what does it mean to be homeless in jail/prison? That you aren’t sure where you will go when you exit? Somehow I feel like that could describe a lot of the prison population. And what about half-way houses?
Those still exist, right?”

The flow-chart concept is not typically used to describe the breakdown of percentages, and what works here is that it forces the viewer to walk through the narrative. As a pedagogical maneuver, it’s quite successful. Because of the way the information is presented, it invites questions in a way that a pie chart or a bar graph may not. It’s also a little harder to interpret. Graphics that invite questions often are a bit more challenging to ingest, not quite so perfectly sealed as other more common strategies might appear.

What needs work

I spent a good deal of time looking at this chart trying to figure out what the blue means. I still don’t understand what the blue means. I also would like to see on the graphic some explanation of how they determined who was at risk of being homeless. Because when I got to the section of the flow-chart that showed how many of the at-risk population owned their homes, I began to get confused. By ‘own home’ do they not mean actually owning the home, but renting it or paying a mortgage on it? And if they do mean that folks actually own their homes outright, how can they be at risk of homelessness? Is the home about to be seized by eminent domain to make way for Atlantic Yards? At risk of being condemned (I hope NYC doesn’t have so many properties at risk of condemnation)? I’m sure if the makers of the graphic ever find their way to this page they will be upset because ‘at-riskness’ is described in the paper. But in life online, stuffing a little more text into the graphic is often a good idea because cheap folks like me will take the graphic out of context and whatever isn’t included will be lost. In this case, though, all is not lost. First, you can visit the blog on which I found this lovely graphic and get the whole story.
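The arithmetic that the flow-chart forces the viewer to do, multiplying each branch's share by its parent's share, can be sketched in a few lines. The numbers below are the ones quoted in the narrative above; the 15% "other" branch is hypothetical filler so each level sums to 100%:

```python
# Shares taken from the narrative above (40% already homeless, 75% of them
# in shelters, etc.). The "other" branch is a made-up placeholder so that
# each level of the hypothetical tree sums to 100%.
tree = {
    "already homeless": (0.40, {"in shelters": 0.75, "on the street": 0.25}),
    "at risk": (0.60, {"with family/friends": 0.60, "own home": 0.25, "other": 0.15}),
}

def shares_of_total(tree):
    """Multiply each leaf's share by its parent's share of the whole."""
    out = {}
    for group, (group_share, children) in tree.items():
        for child, child_share in children.items():
            out[f"{group} / {child}"] = group_share * child_share
    return out

shares = shares_of_total(tree)
```

Reading off `shares["already homeless / in shelters"]` gives 0.30: the "75% of already homeless people live in shelters" claim translates to 30% of everyone counted, which is exactly the kind of compound statement a single-level pie chart hides.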
But if you aren’t ready for all that, note that the authors define those who are at risk of homelessness as anyone who has spent some time in a shelter in the past year, regardless of whether they happened to have been homeless at the time of the survey. They also included the graphic below. I still don’t know what the blue means. This graphic does make it easier to understand that being truly homeless appears to mean running out of friends and family who have homes to share. Because none of the truly homeless live with family and friends. It’s also clear from both graphics that most homeless people are not visibly homeless. The folks you might see sleeping on the train or the street 1) may not be homeless, they could be sleeping away from home for reasons unrelated to homelessness per se and 2) if they are homeless, they may be quite different from the rest of the homeless population. They’re more likely to be single adults than families and more likely to be men than women.

This visually arresting graphic does a great job of presenting data about national spending in an apolitical but altogether fascinating way. It’s interactive, by the way, but I’m not commenting on the interactive part, just the static graphic. I find that getting the static graphic clear is an important first step towards making a functional interactive graphic. If ever I hear someone say ‘but it’s interactive’ as an excuse for having a weak static graphic, I cringe. See my post about the USDA mypyramid food guide for a case study on the importance of a strong relationship between the static and interactive iterations of graphics as tools. Each dot represents a different department or governmental program with the size corresponding to the funding level. Smart. If you link through to the originating site, you’ll be able to follow blog posts that take readers through the development of the graphic. They ask for input and do their best to incorporate it. I like that approach.
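Since each dot's size encodes a funding level, it is worth noting the standard trap with this encoding: readers judge circles by area, so the radius should grow with the square root of the value, not with the value itself. A minimal sketch, using hypothetical figures rather than the real UK budget:

```python
import math

# Hypothetical funding levels (in billions) for illustration only;
# these are NOT the real UK spending figures from the graphic.
programs = {"Health": 110.0, "Defence": 45.0, "Education": 88.0, "Transport": 20.0}

def bubble_radius(value, max_value, max_radius=50.0):
    """Radius chosen so that circle AREA (pi * r^2) is proportional to value."""
    return max_radius * math.sqrt(value / max_value)

max_v = max(programs.values())
radii = {name: bubble_radius(v, max_v) for name, v in programs.items()}
for name, r in sorted(radii.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} radius = {r:5.1f}")
```

With this scaling, a program funded at twice the level of another gets twice the ink, not four times; scaling the radius directly would visually exaggerate every difference.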
Good use of technology, OKF.

What needs work

I can’t quite tell why the circles are arranged the way they are or why their hues are the shades they are. Graphics, especially the beautiful ones, are the best when their simple clarity gives way to an elegant complexity. In other words, when I pose the question: “why does the hue vary within given funding types?” I’d like the graphic to lead me to an answer. I’m sure there is a reason for each hue, I just haven’t been able to figure it out.

One tiny, American-centric request: Add ‘UK’ to the page or the graphic somewhere. Maybe change “Total spending” to “Total UK spending”. Or “Where does my money go?” could be “Where do UK taxes go?”. These here interwebs are global. Yes, of course, the £ symbol tends to give it away. Maybe I’m just being too picky.

I like the inset map. Architects often include a small site map in the main exterior section of a new building to help the viewer understand where the building is in relation to the rest of the world. News programs often start out international stories with maps. I love that this line graph comes with an orienting map. I might have included just a shadow of some neighboring states simply because many Americans have only a fuzzy idea of where Wisconsin is. Sad but true.

The lines show a great deal of information, some of which is not addressed in the article. Quoting the main thrust of the article: “Here in Dane County, Wis., which includes Madison, the implausible has happened: the rate of infant deaths among blacks plummeted between the 1990s and the current decade, from an average of 19 deaths per thousand births to, in recent years, fewer than 5.
The steep decline, reaching parity with whites, is particularly intriguing, experts say, because obstetrical services for low-income women in the county have not changed that much.”

Then it goes on to quote a local doctor and professor: “This kind of dramatic elimination of the black-white gap in a short period has never been seen,” Dr. Philip M. Farrell, professor of pediatrics and former dean of the University of Wisconsin School of Medicine and Public Health, said of the progress in Dane County. “We don’t have a medical model to explain it,” Dr. Farrell added, explaining that no significant changes had occurred in the extent of prenatal care or in medical technology.

The graph suggests an explanation that the article (and the doctor) may not have considered. Presenting information visually is about more than presentation; rearranging data to reveal patterns is a research tool in itself.

What needs work

This is a critique of the article, based on the line graph: isn’t it possible that the at-risk folks in Dane County ended up moving to Racine for some reason? Right at the time the infant mortality rate in Dane was plummeting, the rate in Racine was spiking. From the line graph it seems that this happened in the vicinity of Clinton-era welfare reform. Maybe there were some reasons for the most at-risk folks to get out of Dane and into Racine at this time. If there is no medical explanation, let’s have a look at other possible explanations.

Analyzing the visual presentation of social data: each post, Laura Norén takes a chart, table, interactive graphic or other display of sociologically relevant data and evaluates the success of the graphic.
Thursday, September 30, 2010

John Wilkins has started an important project. John notes that something called "the scientific method" is often the focus of social and political arguments over the demarcation of science from pseudoscience. Those arguments, in turn, can have enormous consequences, as in the case of global warming or proper education. However, as was the case with the attempt to arrive at a demarcation criterion between science and pseudoscience, the demarcation between the scientific method and all the other nonscientific methods of investigation will not be some bright-line, easy-to-draw, sum-it-up-in-10-words-or-less rule suitable for grade school textbooks. Of course, most nonscientists wind up with just such a grade school understanding of "the scientific method," often involving a set of rigid steps, such as: Observation --> Hypothesis --> Testing --> Theory --> More Testing --> Law. Indeed, as John points out, there is no such thing as the scientific method but, instead, "there are many, but like a family portrait, they all have a resemblance, and there are clearly some that have been adopted from outside the family tree." That is no reason to throw up our hands and declare that there is no way at all to distinguish scientific methodology from nonscientific. It's just a messier and, ultimately, imperfect project but still well worth the effort, in that it will assist nonscientists in evaluating conflicting claims to the mantle of science. John proposes to aid in this task by producing a kind of "operating manual" for nonscientists:

So when a non-scientist approaches scientific reasoning, it pays for them to know how science is done and why, and if they aren't about to undertake a scientific education, or worse, a philosophy of science education, then they don't want to have to deal with these complexities and nuances. This book will be written for them.
We aim to provide simple summary explanations of what science does, and justify those practices. Why, for example, do medical researchers use double blind methods? Why do psychologists test null hypotheses? Why are error bars used? How do physicists come up with these increasingly complex and odd theories? And should they do this?

Labels: Wilkins' Manual

Tuesday, September 28, 2010

Of Calliopes and Intellectual Cotton Candy

The Discoveryless Institute’s Stupendous Traveling Circus and Amazing Pandemonium Show recently made a visit to Southern Methodist University. The Sensuous Curmudgeon has already featured the student newspaper’s report of the event, which includes the incredibly garbled statement that:

While some who study geology believe in the Cambrian explosion, in which animals did not evolve from small organisms but were created by a 60-million-year long explosion, Darwin thought otherwise.

On the other hand, it is heartening to see the reaction of the SMU faculty to the charlatans in their midst:

Last Thursday evening, the SMU community witnessed another dishonest attempt to present a particular form of religion as science, entitled "4 Nails in Darwin's Coffin: New Challenges to Darwinian Evolution". It was designed and presented by Seattle's Discovery Institute (and its subsidiary the Biologic Institute). This was a follow-up to their equally dishonest 2007 presentation "Darwin vs. Design". We were outraged by the dishonesty of Thursday's presentation, but not entirely surprised by it. The Discovery Institute is a well-financed organization that has repeatedly attempted to discredit Darwinian biology and thereby advance its brand of religion called Intelligent Design. We do not object to religion as such. But we do object to blatant distortions of Darwinian thinking, and to pseudo-scientific alternatives to it that are falsely alleged to be better supported by the evidence.
The final word should go to the SMU faculty:

The Discovery Institute is a fringe group of pseudo-scientists who are busily trying to pass themselves off on the unwary as legitimate scientists.

Updates: The Sensuous Curmudgeon points to where the Empire Whines Back and also to where the Religious Studies Department of SMU has joined the Biology Department in describing the vacuity of ID and the people who peddle it. Then Casey Luskin does the same thing by ... well ... being Casey Luskin.

Creationism's Hot Air

Sunday School Times, June 3, 1922. Another 88 years of bad prognostication.

Monday, September 27, 2010

PZ's Long Arms

Sunday, September 26, 2010

Taking Care of Education

Saturday, September 25, 2010

A Little Learning ...

Florida's schools come in for a lot of criticism. But the schools still are full of students who produce thoughtful work.

An individual cannot yell, "Fire!" in a crowded movie theater, but an individual can interrupt a presidential speech by yelling, "Lie!" The first instance, if allowed, would endanger lives by causing a stampede. The second instance would endanger the very core of American democracy if we disallowed it.

It is this supposedly thoughtful work, however, which is so confused on so many different levels, that attracted my attention:

The freedom of religion is ideal because believing something in your heart and not being allowed to express it is so limiting. It keeps all of the citizens in the same mindset. All of the modern day theories, like the big bang theory or intelligent design, were created because the people of the United States are allowed to express their religious opinion. Allowing the freedom of religion keeps our minds open for change. It's inspiring and allows us to come together and appreciate what a wonderful country we live in.
Thus the student and VerSteeg are both thoughtlessly laboring under the misconception that every great idea, including the Big Bang, has been the product of the United States, when that is, in fact, far from the truth. Also, while there was some concern originally in the scientific community that Lemaître was advancing a religious idea of creation, the rigor of the math and astronomical evidence he presented quickly disabused scientists of that notion. While I agree that freedom of religion is a very good idea, neither it nor the American formulation of that freedom had anything to speak of to do with the Big Bang Theory. Naturally, calling Intelligent Design Creationism a modern day scientific theory on the order of the Big Bang means that you are not merely thoughtless but deeply ignorant as well. The "design argument" is ancient, going back at least as far as the golden age of Greek philosophy, and was formulated, in all the major aspects it sports today, no later than William Paley's Natural Theology in 1802. Naturally, ID is a religious belief that anyone is allowed to hold in the US but which, because of freedom of religion, cannot be foisted on others in public schools by the government. One thing that must be pointed out is that the proponents of ID, with their constant drumbeat about how "Darwinism," which they claim is a religious belief, leads to all bad things, clearly disagree that it is a good idea to keep our minds open for change. That's why they use every chicanery to try to keep young people from truly considering the evidence for the scientific fact of evolution.

When Science Based Medicine Goes Bad

Well, I had my surgical procedure and am recovering nicely now. I'll probably be napping quite a lot for the next few days but I just thought I'd let my select group of readers know that I'm still here ... whether you like it or not.
Thursday, September 23, 2010

Lose Faith, Get Sick

Here's an interesting report in ScienceDaily, entitled "Losing Your Religion May Be Unhealthy, Research Suggests."

People who leave strict religious groups are more likely to say their health is worse than members who remain in the group, according to a Penn State researcher. The percentage of people who left a strict religious group and reported they were in excellent health was about half that of people who stayed in the group, said Christopher Scheitle, senior research assistant in sociology. About 40 percent of members of strict religious groups reported they were in excellent health, according to the study. However, only 25 percent of members in those groups who switched to another religion reported they were in excellent health. The percentage of the strict religious group members who dropped out of religion completely and said their health was excellent fell to 20 percent. Strict groups typically require members to abstain from unhealthy behaviors, such as alcohol and tobacco use. These groups also create both formal and informal support structures to promote positive health, according to Scheitle. The social bonds of belonging to the group might be another factor for better health. "The social solidarity and social support could have psychological benefits," Scheitle said. "That could then lead to certain health benefits." Religious beliefs may also promote better health by providing hope and encouraging positive thinking. Besides losing connection to these health benefits, exiting a religious group may increase stressful situations. "You could lose your friends or your family becomes upset when you leave, leading to psychological stress and negative health outcomes," said Scheitle.

But that bit about providing hope and encouraging positive thinking seems doubtful to me ...
unless the hope we're talking about is hoping to see all the people you don't like burning in hell and the positive thinking is being positive you won't be among them.

Saturday, September 18, 2010

Moran On the Definition of "Accommodationism"

Larry has changed his own definition of "accommodationism." Today, at least, "accommodationism" means "rhetoric [that] comes from atheists (secularists) who direct a great deal of anger toward the vocal atheists but go out of their way to excuse their religious friends." Hmmmm ... just 6 months ago, Larry declared the following, from Peter Hess' article "God and Evolution" on the NCSE website, had "all the earmarks" of accommodationism:

Of course, religious claims that are empirically testable can come into conflict with scientific theories. For instance, young-earth creationists argue that the universe was created several thousand years ago, that all the lineages of living creatures on Earth were created in their present form (at least up to the poorly-defined level of "kind") shortly thereafter, and that these claims are supported by empirical evidence, such as the fossil record and observed stellar physics. These fact claims are clearly contradicted by mainstream paleontology, cosmology, geology and biogeography. However, the theological aspect of young-earth creationism—the assertions about the nature of God, and the reasons why that God created the universe and permitted it to develop in a particular way—cannot be addressed by science. By their nature, such claims can only be—and have been—addressed by philosophers and theologians.

No anger directed at atheists nor indiscriminate excuse of the religious.

The science of evolution does not make claims about God's existence or non-existence, any more than do other scientific theories such as gravitation, atomic structure, or plate tectonics. Just like gravity, the theory of evolution is compatible with theism, atheism, and agnosticism.
Can someone accept evolution as the most compelling explanation for biological diversity, and also accept the idea that God works through evolution? Many religious people do.

In fact, aren't the people Larry is calling "accommodationists" those people who were dubbed "faitheists" by Jerry Coyne? Now, of course, Larry is free to make up his own private definition for any word he likes but, out of common courtesy, he might wave a flag or set off a flare or something so the rest of us know ... just so we don't get whiplash.

Labels: Accommodationism, Incompatiblism

But It's All About the Science!

David Klinghoffer is over at the Undiscovery Institute's Ministry of Misinformation once again demonstrating that the real aim of Intelligent Design Creationism is theological, not scientific:

Under a scientific view that leaves open the possibility that we really do reflect God's intelligent designing purpose, making us in a genuine sense his "handiwork" and the "fruit of [his] labor," we can make a plausible claim on his mercy. A very plausible claim, perhaps more so even than a child's claim on the mercy of his mortal father. But under an extremely attenuated vision of God's involvement in our having come to existence, like that proposed by theistic evolutionists, it's much harder to see what claim I have on God's mercy. Not being his handiwork in any meaningful sense, exactly what relationship do I have to him?

Via The Sensuous Curmudgeon

Friday, September 17, 2010

Not Accommodating Stupidity

Larry Moran appears to believe that people who don't think "science" is necessarily "incompatible" with religion (i.e. "accommodationists") are somehow duty-bound to criticize the recent comments by the Pope. What those comments have to do with the question of whether or not science is "compatible" with some religious beliefs is beyond me. But, though I don't feel my accommodationism obligates me to criticize the Pope and the dope, I am happy to do so on the general principle that stupidity should be mocked.
The Pope started it off by lauding British resistance to Nazism and contrasting that to "the sobering lessons of atheist extremism of the 20th century." As John Wilkins points out, that is nothing but historic revisionism*, something that anyone who fancies himself a scholar should be ashamed of. But the dope, unsurprisingly, multiplies the dopery. According to Donohue, who ups the ante by adding Stalin and Mao, it was the "anti-religious impulse that allowed them to become mass murderers." Riiight! Only those with an anti-religious impulse become mass murderers! So, the genocide against Mesoamericans was carried out by atheists who just happened to be wearing the robes of the Catholic Church and carrying the banners of Christ in the vanguard of the conquistadors? Torquemada was anti-religious? The Malleus Maleficarum was written and enforced by atheists? To his small credit, Donohue knows he's spouting nonsense but he thinks it's okay:

... since the fanatically anti-Catholic secularists in Britain, and elsewhere, demand that the pope—who is entirely innocent of any misconduct—apologize for the sins of others.

We might not expect murderous political thugs to repent the actions of their institution, but the man who claims the moral authority to lecture others on their lives might do a little better.

* See, also, PZ's repost of a list of Hitler's quotes on his "anti-religionism" compiled by Doug Theobald.

Wednesday, September 15, 2010

Skimming the Subject

This is a repost from October of 2005 (while I wait for the Percocet to kick in), though I've added a picture as is now my custom. Apparently conservatives have fallen in love with the nature documentary, "March of the Penguins".
According to conservative film critic and radio host Michael Medved, quoted by Jonathan Miller in his New York Times article "March of the Conservatives: Penguin Film as Political Fodder" (still available as a pdf file):

[March of the Penguins is] the motion picture this summer that most passionately affirms traditional norms like monogamy, sacrifice and child rearing.

It was a movie about pitiless Darwinian circumstances. Drop the egg, it freezes and the embryo dies. Newborn chick wanders away, it freezes and dies. One parent dies of predation or weather, the other has to abandon the young to starve, freeze, and die. John Wilkins, at his Evolving Thoughts site, has pointed out that this is:

. . . an old tradition in Christian treatment of nature. Ever since the classical period, there has been a tradition of drawing moral lessons from organisms. Of course, such people only read into the organisms, like the lion, the eagle or the fox, what they want to find there.

It's not like they actually learn from nature or anything.

Time Wounds All Heels

In particular, mine. Okay. I already hinted at the fact that I was scheduled to undergo a moderately serious surgery. That was to deal with a blockage in an artery. So, naturally, the universe, in its infinite humor, decided that I should fracture my heel and be really laid up for a while. This may or may not affect blogging for the worse but I thought I'd at least post some oldies that might be of interest to my select readership and provide DM with some activity in his pathetic life. Look for some reruns shortly ... after all, if television moguls can get away with it, why not me?

Saturday, September 11, 2010

Okay, this (via Ed Brayton) made me laugh out loud (even all by myself alone):

Obama wants the government to take over social security. That's why I'm voting Tea Party.

But it's rueful laughter. It's been 3,287 days since the terrorist attacks on the World Trade Center, the Pentagon and a field in Pennsylvania.
There's not much more to today's date than that. While it is probably near the top of the most lives taken by a small, non-governmental group in a single day, on the scale of human self-butchery it barely warrants a blip. The Rwanda genocide, not to mention the Holocaust, Hiroshima, Dresden and thousands of other crimes we have committed against ourselves, make it pale in comparison. Sadly, many in my home country, which has, at least, aspired to implement the ideals of the Enlightenment, have let this act of hatred, cruelty and unreason drive them from their own senses. To those who would justify senseless war ... you have let the terrorists win, since that is what they want. To those who would justify torture, supposedly in the name of the "greater good" ... you have let the terrorists win, since that is what they want. To those who would restrict the rights of their fellow Americans to freedom of religion and speech ... you have let the terrorists win, since that is what they want. Make no mistake, the real significance of this day lies in what we have become, not in what happened back then.

Friday, September 10, 2010

Who To Root For?

It seems a couple of Christian pastors in Ghana got their passports for the hereafter stamped in a rather silly way. It is reported that, coming to a flooded stream, they ignored the advice of the locals and their own driver to wait until the flood receded. Instead, calling the driver a man of little faith, one of the pastors took the wheel and drove the vehicle into the stream, where it stalled and was overturned in the current. Two of the four passengers died. But as silly as dying because you think God is going to protect you from rampaging water is the explanation given by one of the locals:

A renowned herbal practitioner cum spiritualist Dr.
Ebenezer Adjakofi of Shakina Herbal Science Centre has made a startling revelation that river gods were responsible for the accident which untimely claimed the lives of two senior pastors of the Church of Pentecost. ...

Now the part about heeding the advice of locals is good but the rest just brings to mind a divine WWF Smackdown.

According to Dr. Adjakofi, who got to the scene moments after the accident en route to Dambai, the gods were angered by the insistence of the two Men of God to forcibly cross over the river ... Dr. Adjakofi related that the behavior of the pastor and his team angered the river gods who decided to teach them a bitter lesson thus stopping the vehicle midway through the flooded water and sweeping it away. He claims the angered gods spared the lives of the driver and Mrs. Gadzekpo who were also in the vehicle but punished the two pastors for failing to heed the advice and daring them (gods). He noted that the river gods are already angered that the necessary rituals were not performed before the construction of a bridge in the area recently. He also advised motorists to heed advice from local residents of the areas they ply to avoid such avoidable accidents in future.

But I have to admit that I am envious of the title "cum spiritualist" and will, therefore, piously pass on the opportunities offered by Dr. Adjakofi's name.

Wednesday, September 08, 2010

Making Up Your Mind

My, my ... Wild Bill Dembski, at his blog, Uncommon Descent, is touting an upcoming event. So what event are we talking about? A scientific conference, surely, since we all know ID is science (because the Discoveryless Institute tells us so)! Not quite! It's the:

National Conference on Christian Apologetics 2010, Defending the Faith and the Family

It must be tough to decide whether you are a scientist or an apologist and Biblical worldview leader on any particular day.
Via the excellent Homologous Legs

Monday, September 06, 2010

They Know Their Own

Wiley Richards, a retired professor of theology and philosophy at The Baptist College of Florida in Graceville, helps by identifying the nature of Intelligent Design: Arguments to prove God's existence fall under two broad categories, general revelation and special revelation. Christian apologetics in the public arena largely is pursued along the lines set forth by William Paley (1743-1805), an English theologian who used the example of a watch. He argued that its existence demanded a watchmaker. A present-day example has been set forth by Michael J. Behe in his seminal book, Darwin's Black Box, in which he contends that Darwin and his followers posit the existence of the first living cell but have failed to explain how it could come from non-living matter. This view is commonly called the argument of Intelligent Design, the ID approach.

... as if we didn't know that already.

I Resemble That Remark

The ever wonderful Wiley Miller and Non Sequitur.

Wednesday, September 01, 2010

Ugly rhetoric leads to ugly acts. First there was arson at a mosque ... no, not the one at "Ground Zero," but one in Murfreesboro, Tennessee, about 900 miles, as the Cadillac flies, from "Sacred Ground." Now a convenience store clerk has been abused by someone too dense to know the difference between Sikhs and Muslims. Hate is not supposed to be an American value. Unfortunately, that is something more honored in the breach than in the observance.

Acme Philosophy Corp.

Sometimes you just have to wonder:

I'm starting to realize that my quest for free will in philosophy may be futile, because I have a narrow notion of what I mean by the term. I see free will as the way most of us conceive of it: a situation in which one could have made more than one choice. If that's how you see it, and you're a determinist—which I think you pretty much have to be if you accept science—then you're doomed.
Echoes from the Past

A maverick linguist has devised a new way to scope out what our ancestors were up to 50,000 years ago.

By Bob Adler

Language's Hidden Voice

What people say tells us a lot about them. By the time someone says a dozen words, we know whether they're young or old, with-it or out-of-it, from halfway around the block or halfway around the world. Language doesn't just say what we want it to; it tattles about our history, whispers about where we come from and where we've been. That hidden, other voice of language captivated Johanna Nichols and has dominated her life for the past 35 years. Nichols is a linguist, a scientist who studies language and languages. She's trekked to the far reaches of the former Soviet Union, to Chechnya, Ingushetia and Makhachkala, to describe and preserve native languages. These days, however, she spends her time in a quiet, sixth-floor office with a panoramic view of the UC Berkeley campus. Though she often gazes out the window, Nichols is not looking at the towering eucalyptus trees, the rolling green lawns, or the students hurrying by. She is preoccupied with faraway places and ancient times. Voices from hundreds of languages are clamoring for her attention. What can languages today, she wonders, tell us about a great wave of exploration and migration that began 50,000 years ago and eventually circled the Pacific Ocean? What do they reveal about the earliest seafaring people and about who first discovered and populated the New World? Does the babel of modern tongues hide a linguistic clock that can date the birth of human language itself?

A Linguistic Time Machine

To answer these questions, Nichols has spent the past 15 years creating a kind of time machine: a unique new way of using languages to listen to the fading echoes of human events thousands of years in the past.
Just as a radio telescope lets astronomers catch and decode faint signals from far away and long ago, the novel, and controversial, method that Nichols has developed lets her detect signals from the past hidden within the languages of the present. Through those messages Nichols believes she can chart the migrations of our ancestors tens of thousands of years ago. To use language to discern the shape of such ancient events, Nichols had to break out of a mindset that still holds most other linguists back. She set aside the successful, tried-and-true study of language family trees, and developed a new approach that focuses on identifying and mapping unique grammatical building blocks, even when the languages that carry those building blocks are not related. She bases her studies on similarities between languages that other linguists have refused to interpret because the patterns did not make sense to them. Just as the infamous ozone hole lurked undetected for years because scientists had pre-programmed their computers to reject readings outside the range they expected, language patterns the size of the Pacific Ocean remained invisible until Nichols discovered them, took them seriously, and began to ask what they meant.

For nearly two centuries, linguists have painstakingly worked out family trees for languages, and they've succeeded remarkably well. As if slowly assembling a gigantic puzzle, linguists have sorted out most of the 6,000 languages spoken today, along with many that have died out. When two languages share enough similarities, linguists can be sure that they are related. For example, French, Spanish, Italian and Portuguese are sister languages, children of the same famous parent, Latin. Together they form a language family. English and Dutch are siblings in another family, sharing Low German as their parent. Even when an ancestral language has disappeared, linguists can recreate much of its vocabulary and grammar from its offspring.
Linguists have even found relationships between the remote ancestors of many of today's languages. For example, English and Dutch share an extinct great-grandparent, Germanic, with Swedish, Danish and Norwegian. Often, a whole family tree turns out to be a branch of an even bigger, and older, tree. Ancient Greek, Latin, and seven other language families including our own, are all parts of a very large, very old family called Indo-European.

But linguists constructing language family trees eventually hit a wall. They could trace many language families back about 6,000 years, and a very few close to 10,000 years. Beyond that blurry horizon, random changes in words swamp any genuine similarities. "Everything changes over time in languages," says Nichols, "and even the most durable signs of similarity eventually fade out." Rather than discovering one great family tree with all the world's languages and language families on it, linguists found themselves wandering in a forest. They had identified 200 to 300 separate language families, called stocks. Some stocks were big and bushy, bearing dozens of related languages, while some were skinny, with just a few branches. Some, like Basque, a language with no known relatives that is spoken only in the Pyrenees of France and Spain, stood alone. But few stocks could be traced back much beyond 6,000 years, and none of them was provably related to any other.

A Stroke of Genius

While most linguists continue to push that 6,000-year wall back a bit at a time, Nichols vaulted over it. She stopped trying to refine or connect family trees. Instead she focused on certain language features (she calls them grammatical building blocks) that allow her to ask questions about those 200-plus deeply rooted language stocks without trying to pin down how they might be related.
"Until Nichols came along, historical linguists had talked themselves into a straitjacket, in which they saw their major enterprise to be the reconstruction of proto-languages," says John Moore, an anthropologist at the University of Florida in Gainesville. "She loosened that up to show that there are other important projects to be accomplished. I'm trying to be restrained, but I think it was a stroke of genius."

Nichols made her next advance when she selected a representative language from each stock and plotted those that used particular building blocks on a map of the world. To her surprise, striking geographic patterns appeared; groups of languages thousands of miles apart contained clusters of identical building blocks. While other linguists attributed such shared features to chance, Nichols puzzled over them. She became convinced that many of the patterns that appeared on her maps could not be explained by chance. Nichols has spent a decade asking just what these patterns mean.

Today, Nichols uses several dozen grammatical features to sort languages into different types. Many of the grammatical building blocks she uses are familiar: for example, whether a language favors prefixes or suffixes, whether it puts verbs at the beginning, middle or end of a sentence, and how it indicates possession. For the most part, she uses grammatical features rather than words or sounds because, as the skeleton of a language, grammar tends to change more slowly than words. Still, Nichols has included several useful sound features in her set of linguistic building blocks. She classifies languages according to how many of their pronouns (words like "I," "you," "he," or "she") start with an m or n sound. She also notes if a language uses tones, as in music, to change word meanings. Chinese, Thai and Navajo do this, to name a few. One important grammatical feature that Nichols considers is whether a language uses numeral classifiers, a feature English lacks.
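In computational terms, Nichols' approach amounts to describing each language as a small vector of typological building blocks and counting matches regardless of family membership. A minimal sketch of the idea (the feature values below are illustrative guesses, not records from Nichols' actual survey):

```python
# Each language reduced to a handful of Nichols-style building blocks.
# Feature values are illustrative assumptions, not real survey data.
FEATURES = ("mn_pronouns", "verb_initial", "numeral_classifiers", "tonal")

LANGUAGES = {
    "Mandarin": {"mn_pronouns": True, "verb_initial": False,
                 "numeral_classifiers": True, "tonal": True},
    "English":  {"mn_pronouns": False, "verb_initial": False,
                 "numeral_classifiers": False, "tonal": False},
    "Yurok":    {"mn_pronouns": True, "verb_initial": False,
                 "numeral_classifiers": True, "tonal": False},
}

def shared_blocks(a: str, b: str) -> int:
    """Count building blocks with matching values, ignoring genetic relatedness."""
    return sum(LANGUAGES[a][f] == LANGUAGES[b][f] for f in FEATURES)

# Genetically unrelated languages can still match on most blocks:
print(shared_blocks("Mandarin", "Yurok"), shared_blocks("Mandarin", "English"))
```

Plotting which stocks share high match counts on a world map, rather than asking whether they descend from a common ancestor, is what let the geographic patterns surface.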
In English and most related languages, we can link a number and what it's counting without any frills. "Six beagles," we say, or "nine daisies." But many languages require a kind of fastener to join the number and the object together. Most Asian languages, including Mandarin Chinese, Japanese and Korean, require this verbal Velcro. In them, you'd say something like "seven-classifier-duck" or "three-classifier-drum." These special joining words are called classifiers because they specify the shape of the thing being counted. To translate completely, you'd have to say something like "five long-skinny-classifier pencil" or "a dozen round-classifier balloon." And because things come in many shapes, languages that need numeral classifiers use lots of them. Yurok, spoken by a Native American tribe, has 15, Korean 26, and Mandarin Chinese an impressive 51.

It was Nichols' great idea to make grammatical building blocks the focus of her study, rather than languages and their degree of relatedness. By giving up trying to figure out just how languages are related, she freed herself to start asking other, more important questions, such as where the different linguistic markers were distributed around the globe, and how they might have gotten there.

The Pacific Rim Necklace

In Nichols' mind, the picture is clear. An enormous and sustained wave of human migration started about 50,000 years ago somewhere in Southeast Asia. Over thousands of years, successive bands of people spread out from the region. They could move relatively quickly because they were coastally adapted: they knew how to make simple boats and make a living from the sea. Over thousands of years, some carried their languages south and west through coastal New Guinea and into northern Australia, while others moved clockwise up the coast of Asia, across the Bering Strait into Alaska, then down the west coast of North and South America.
The evidence for this slow-motion human tsunami appears on Nichols' world maps, where strands of languages sharing particular features ring the Pacific Ocean. Languages that start their pronouns with m and n sounds, languages that put their verbs first, and languages that use numeral classifiers form a pattern that circles the Pacific Ocean like a necklace. Languages sharing these features dot the islands of New Guinea, bead the coast of Asia from the southeast to the northwest, and trail the length of the Pacific coast of North and South America.

Languages using numeral classifiers: chance distribution or echoes of a great migration?

"Numeral classifiers are endemic, ubiquitous, frequent and striking in the languages of Asia, Chinese or Japanese or Korean or Thai," Nichols said at a recent scientific meeting. "They are not infrequent in Melanesia and New Guinea. And they're found up and down the West Coast of the Americas, and nowhere else. This is one feature that genuinely seems to be found nowhere else on earth but in these areas."

Nichols uses standard statistical techniques to calculate the probability that this necklace around the Pacific might show up on her maps by chance. The odds are vanishingly small that so many of the languages with these key features would cling to the Pacific Rim, while so few appear in the vast areas of Asia, Africa, and inland America. Nichols is frequently asked how the languages that form the Pacific Rim necklace can share these grammatical building blocks if they are not related to each other. She explains that the stocks they represent may have originated in the same geographic region; neighboring but unrelated languages often share a significant number of traits. Or, as groups of people interacted over time, they may have borrowed language features from one another or from cultures that had arrived earlier, a sort of cross-fertilization.
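The article doesn't specify which statistical test Nichols used, but the flavor of the argument can be reproduced with a simple binomial tail probability: if feature-bearing stocks were scattered at random across the globe, how likely is the observed pile-up on the Rim? The counts below are invented purely for illustration:

```python
import math

# Invented illustrative counts (not Nichols' actual data):
TOTAL_STOCKS = 250       # independent language stocks sampled worldwide
RIM_STOCKS = 60          # of those, stocks located on the Pacific Rim
FEATURE_STOCKS = 40      # stocks whose languages use numeral classifiers
RIM_FEATURE_STOCKS = 35  # feature-bearing stocks that sit on the Rim

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of the n
    feature-bearing stocks land on the Rim if placement were random."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_rim = RIM_STOCKS / TOTAL_STOCKS   # 24% of stocks sit on the Rim
p_value = binomial_tail(FEATURE_STOCKS, RIM_FEATURE_STOCKS, p_rim)
print(f"Chance of >= {RIM_FEATURE_STOCKS}/{FEATURE_STOCKS} on the Rim: {p_value:.1e}")
```

With numbers anywhere near these, the tail probability is astronomically small, which is the sense in which "the odds are vanishingly small" that the necklace is a coincidence.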
Language stocks with several identical grammatical markers clearly share some ancient affinities, Nichols says, but it's not possible, or necessary, to figure out just what those affinities are. Intriguingly, evolutionary biologists have recently discovered genetic evidence suggesting that Nichols has uncovered something more substantial than mere linguistic echoes. Ted Schurr, part of a team of geneticists at Emory University in Atlanta, has spent years comparing mitochondrial DNA, a kind of genetic material that is passed only from mother to child, from groups of people around the world. He discovered a genetic marker that shows up in approximately the same Pacific Rim pattern Nichols found. Nichols interprets this match-up between her findings and genetics cautiously. She simply suggests that the genetic mutation Schurr found may have started in the same gene pool as the Pacific Rim languages. Recent research in China also supports her conclusions. Genetic studies there indicate that after leaving Africa, early modern humans colonized Asia's southern coast before they spread north.

The Earliest Americans

The Pacific Rim pattern that Nichols discovered leads her to think that some of the earliest immigrants to the New World were seafaring people who paddled from island to island in the Aleutians and hugged the Pacific coast of Alaska. Since the earliest parts of this migration took place during the Ice Age, the groups that made it to the New World must have known how to make a living from the sea and protect themselves from the cold. They may have found shelter in refugia, coastal areas that archaeologists think may have remained ice-free throughout the Ice Age. The waxing and waning ice sheets also play a role in Nichols' dating of the colonization of the New World. For 60 years, archaeologists believed that pioneers from Asia first crossed the land bridge that linked Siberia and Alaska no more than 11,500 years ago, toward the end of the Ice Age.
These intrepid hunters, it was thought, swept from the edge of the ice sheets to the tip of South America in search of mammoth and bison. Their fluted stone projectile points, first found near Clovis, New Mexico, turn up throughout North and South America. The consistent age of these so-called Clovis sites, around 11,000 years, as well as the absence of convincing evidence for any earlier inhabitants of the New World, led archaeologists to conclude that these hunters were the first Americans. The Clovis worldview solidified into an entrenched archaeological edifice. Generations of students memorized the mantra "Clovis first."

This view changed dramatically in January 1997. That's when Tom Dillehay of the University of Kentucky in Lexington invited nine of his archaeological peers to Monte Verde, Chile, to examine a site he had been excavating for 20 years. The Clovis Police, as they were nicknamed, came, saw, and were convinced. The tent stakes, digging sticks and footprints that Dillehay had found proved, for the first time, that people lived in the New World 12,500 years ago, a thousand years before the dawn of the Clovis culture. Nichols was not surprised. She had previously developed two different language-based dating methods that led her to believe, and suggest to archaeologists, that people had entered the Americas many thousands of years before the Clovis time line.

The Language Clock

Nichols based one of her dates for the discovery of the Americas on a striking linguistic feature of the New World, the vast diversity of Native American language families. Native American languages comprise close to 150 independent stocks, half of all the world's language families. Nichols studied all known language stocks to determine how often they have branched off to create new languages. Like human families, some language stocks (Latin, for example) produce lots of offspring. But most languages spawn just a few, some survive but sire no offspring, and some lines die out entirely.
Nichols found that, on average, one-and-a-half new languages developed per stock every six thousand years. In effect, Nichols created a kind of linguistic clock, ticking once every 6,000 years. Based on her calculations, it would have taken 20,000 to 30,000 years, at the very least, for even multiple waves of prehistoric immigrants to produce the abundance of languages found in North and South America. David Meltzer, an anthropologist at Southern Methodist University, lauds Nichols for advancing this finding. "When most linguists were arguing for a short time span that would fit with a Clovis chronology," he says, "Nichols was arguing that the linguistic evidence suggested much greater antiquity, from the sheer diversity of language families."

Nichols based her second estimate not on the linguistic birthrate, but on how language families have moved across the globe. She studied historical and archaeological records to determine the rates at which languages have spread across different kinds of geographic areas. She used those rates to estimate how long it would have taken an expanding family of languages to cross the unpopulated regions from the edge of the great ice sheets to the sites of early human habitation in the Americas. She calculated, for example, that it would take at least 7,000 years for a language to become established deep in South America. With Monte Verde's 12,500-year-old artifacts in mind, Nichols reasoned that immigrants must have entered the New World at least 7,000 years earlier, or 19,500 years ago. But once again the Ice Age figures in. "That was the very height of glaciation," she says, "when it was probably impossible to get in. That suffices to tell us that people got in before the very height of the glaciation, certainly before 22- or 24,000 years ago."

Nicholas Evans of the University of Melbourne, however, is not convinced that Nichols' linguistic clock keeps good time.
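The arithmetic behind both dating methods is simple enough to sketch as a back-of-the-envelope calculation (a toy model built from the figures quoted above; Nichols' real analysis is considerably more involved):

```python
import math

# The "linguistic clock": on average 1.5 new stocks per stock per 6,000 years.
RATE = 1.5
TICK_YEARS = 6_000
AMERICAN_STOCKS = 150   # roughly half the world's language families

def years_to_diversify(founding_stocks: int) -> float:
    """Years needed for `founding_stocks` to multiply into AMERICAN_STOCKS,
    assuming steady exponential branching at RATE per TICK_YEARS."""
    ticks = math.log(AMERICAN_STOCKS / founding_stocks) / math.log(RATE)
    return ticks * TICK_YEARS

# Even granting many independent founding migrations, the time required is long:
for n0 in (5, 10, 20):
    print(f"{n0:>2} founding stocks -> {years_to_diversify(n0):,.0f} years")

# Second, independent estimate: spread-rate dating anchored on Monte Verde.
MONTE_VERDE_AGE = 12_500  # years before present
SPREAD_YEARS = 7_000      # minimum time to become established deep in South America
print(f"Entry to the New World at least {MONTE_VERDE_AGE + SPREAD_YEARS:,} years ago")
```

Only by assuming many separate founding stocks does the first estimate come down near the 20,000-to-30,000-year floor the article cites, which is why even "multiple waves" of immigrants cannot compress the timeline much further.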
His research on Australian languages suggests that the clock has ticked more slowly there. Australia has been populated for perhaps 50,000 years, but has far fewer language families than the Americas. He suspects that something turned the clock's hands faster in the Americas, explaining its large number of language families without having to assume immigration before the peak of the Ice Age. Still, studies of human genes appear to support Nichols' 24,000-year timeline. Douglas Wallace, one of the pioneers of human genetics, estimated that humans first crossed from Siberia into America 20,000 to 40,000 years ago. More recently, Antonio Torroni, a colleague of Ted Schurr at Emory University, used mitochondrial DNA to narrow that range to 22,000 to 29,000 years ago.

The First Four Discoveries of America

Nichols believes that she can detect the linguistic echoes of four distinct discoveries of the Americas through detailed mapping of language features. The very first group, she thinks, arrived more than 22,000 years ago, and spread throughout the habitable regions of North and South America during the Ice Age. Next, as the ice sheets melted, groups from South and Central America, perhaps carrying the Clovis toolkit, moved north to repopulate North America. They eventually met and interacted with new coastal immigrants working their way down the Pacific Rim and gradually moving inland. And finally, about 5,000 years ago, the Eskimo-Aleut peoples, with known Siberian roots, entered and occupied the arctic and subarctic regions.

The Birth of Language

Nichols has also consulted her linguistic clock to address the question of the birth of human language. The great diversity among the world's language stocks, she found, can be accounted for only by pushing the dawn of language back 100,000 to 132,000 years.
Even then, it is necessary to assume that language must have sprung up at about the same time in 10 or more human groups spread out across the East African cradle of modern humanity. Nichols' dates fall neatly between the estimates of some biologists who believe that language evolved gradually over several hundred thousand years and some archaeologists who think that fully modern language flowered along with cave paintings, sculpture and other symbolic activities as recently as 50,000 years ago.

Conflict and Controversy

To say that Nichols is bold to use linguistic tools to attempt to recreate events 100,000 years ago is an enormous understatement. No one doubts that she is a trailblazer. She is the first, and so far the only, linguist to use languages to go so far back in time. But like most pioneers, she's drawn plenty of criticism. Lyle Campbell, a highly respected linguist at New Zealand's University of Canterbury, writes, "While in other areas I think she is one of the smartest and most independent and astute of living linguists, in this area I think she is very, very, very wrong." Although most of Campbell's criticisms are highly technical, his conclusion is clear: the geographic patterns Nichols has identified are either accidental or the product of parallel but unconnected development. They do not, he asserts, support historical conclusions about the peopling of the Americas or elsewhere. Evans, the Australian linguist, is somewhat less critical, though he too is not convinced that the grammatical building blocks Nichols has identified are solid enough to support her conclusions. "There is something out there that requires explanation," he says. "I just don't accept the explanation that she is giving. It's a very interesting area, but it's going too far too fast. The interpretations have gotten ahead of the data." Others, however, support Nichols' approach.
"One problem in evaluating her work is that she is the only one using her methodology," says University of Florida anthropologist John Moore. "Most scholars are hopeful that she's right, that the envelope can be pushed back thousands of years by using the methodology she's developed. The problem is to have her work corroborated by other scientists and by data from other parts of the world." That corroboration may have to wait, since Nichols remains the only linguist generating these kinds of data. "This is a territory that other respectable, competent linguists haven't dared to explore," says Victor Golla, a linguist who works with Native American languages. "So she's out on a limb, exploring territory that not very many are willing to follow her into."

As for Nichols, she's turned her linguistic radio telescope back to the Americas. At this year's meeting of the American Association of Physical Anthropology, Nichols described a new geographic pattern that sheds light on the second-oldest cluster of languages in the Americas and reinforces her views on the antiquity of the first Americans. What keeps Nichols going, out on that limb, still alone, and often in the face of harsh criticism? "We can trace language prehistory back extremely far," she says, choosing her words carefully. "I can get you times, places and directions of movement, answering the same kind of questions geneticists and archaeologists ask about the origins and migrations of people. That's very interesting to me."

Text © 1999 Bob Adler. Illustrations © 1999 Simon Lo.
Genesis Nanotechnology, Inc. ~ "Great Things from Small Things"

25 Mar 2016

"This microbial nanowire is made of but a single peptide subunit," said Gemma Reguera, lead author and MSU microbiologist. "Being made of protein, these organic nanowires are biodegradable and biocompatible. This discovery thus opens many applications in nanoelectronics such as the development of medical sensors and electronic devices that can be interfaced with human tissues." Since existing nanotechnologies incorporate exotic metals into their designs, organic nanowires are much more cost-effective as well, she added.

How the nanowires function in nature is comparable to breathing. Bacterial cells, like humans, have to breathe. The process of respiration involves moving electrons out of an organism. Geobacter bacteria use the protein nanowires to bind and breathe metal-containing minerals such as iron oxides and soluble toxic metals such as uranium. The toxins are mineralized on the nanowires' surface, preventing the metals from permeating the cell. Reguera's team purified their protein fibers, which are about 2 nanometers in diameter. Using the same toolset of nanotechnologists, the scientists were able to measure the high velocities at which the proteins were passing electrons. "They are like power lines at the nanoscale," Reguera said. "This also is the first study to show the ability of electrons to travel such long distances — more than 1,000 times what's been previously proven — along proteins." The researchers also identified metal traps on the surface of the protein nanowires that bind uranium with great affinity and could potentially trap other metals.
These findings could provide the basis for systems that integrate protein nanowires to mine gold and other precious metals, scrubbers that can be deployed to immobilize uranium at remediation sites and more. Reguera's nanowires also can be modified to seek out other materials that help them breathe. "The Geobacter cells are making these protein fibers naturally to breathe certain metals. We can use genetic engineering to tune the electronic and biochemical properties of the nanowires and enable new functionalities. We also can mimic the natural manufacturing process in the lab to mass-produce them in inexpensive and environmentally friendly processes," Reguera said. "This contrasts dramatically with the manufacturing of human-made inorganic nanowires, which involve high temperatures, toxic solvents, vacuums and specialized equipment." This discovery came from truly listening to bacteria, Reguera said. "The protein is getting the credit, but we can't forget to thank the bacteria that invented this," she said. "It's always wise to go back and ask bacteria what else they can teach us. In a way, we are eavesdropping on microbial conversations. It's like listening to our elders, learning from their wisdom and taking it further."

- Sanela Lampa-Pastirk, Joshua P. Veazey, Kathleen A. Walsh, Gustavo T. Feliciano, Rebecca J. Steidl, Stuart H. Tessmer, Gemma Reguera. Thermally activated charge transport in microbial protein nanowires. Scientific Reports, 2016; 6: 23517 DOI: 10.1038/srep23517

Chemotherapy isn't supposed to make your hair fall out—it's supposed to kill cancer cells. A new molecular delivery system created at U of T could help ensure that chemotherapy drugs get to their target while minimizing collateral damage. Many cancer drugs target fast-growing cells. Injected into a patient, they swirl around in the bloodstream acting on fast-growing cells wherever they find them.
That includes tumours, but unfortunately also hair follicles, the lining of your digestive system, and your skin. Professor Warren Chan (IBBME, ChemE, MSE) has spent the last decade figuring out how to deliver chemotherapy drugs into tumours—and nowhere else. Now his lab has designed a set of nanoparticles attached to strands of DNA that can change shape to gain access to diseased tissue. “Your body is basically a series of compartments,” says Chan. “Think of it as a giant house with rooms inside. We’re trying to figure out how to get something that’s outside, into one specific room. One has to develop a map and a system that can move through the house where each path to the final room may have different restrictions such as height and width.” One thing we know about cancer: no two tumours are identical. Early-stage breast cancer, for example, may react differently to a given treatment than pancreatic cancer, or even breast cancer at a more advanced stage. Which particles can get inside which tumours depends on multiple factors such as the particle’s size, shape and surface chemistry. Chan and his research group have studied how these factors dictate the delivery of small molecules and nanotechnologies to tumours, and have now designed a targeted molecular delivery system that uses modular nanoparticles whose shape, size and chemistry can be altered by the presence of specific DNA sequences. “We’re making shape-changing nanoparticles,” says Chan. “They’re a series of building blocks, kind of like a LEGO set.” The component pieces can be built into many shapes, with binding sites exposed or hidden. They are designed to respond to biological molecules by changing shape, like a key fitting into a lock. These shape-shifters are made of minuscule chunks of metal with strands of DNA attached to them. Chan envisions that the nanoparticles will float around harmlessly in the blood stream, until a DNA strand binds to a sequence of DNA known to be a marker for cancer. 
When this happens, the particle changes shape, then carries out its function: it can target the cancer cells, expose a drug molecule to the cancerous cell, tag the cancerous cells with a signal molecule, or whatever task Chan's team has designed the nanoparticle to carry out. Their work was published this week in two key studies in the Proceedings of the National Academy of Sciences and the leading journal Science. "We were inspired by the ability of proteins to alter their conformation—they somehow figure out how to alleviate all these delivery issues inside the body," says Chan. "Using this idea, we thought, 'Can we engineer a nanoparticle to function like a protein, but one that can be programmed outside the body with medical capabilities?'" Applying nanotechnology and materials science to medicine, and particularly to targeted drug delivery, is still a relatively new concept, but one Chan sees as full of promise. The real problem is how to deliver enough of the nanoparticles directly to the cancer to produce an effective treatment. "Here's how we look at these problems: it's like you're going to Vancouver from Toronto, but no one tells you how to get there, no one gives you a map, or a plane ticket, or a car—that's where we are in this field," he says. "The idea of targeting drugs to tumours is like figuring out how to go to Vancouver. It's a simple concept, but to get there isn't simple if not enough information is provided." "We've only scratched the surface of how nanotechnology 'delivery' works in the body, so now we're continuing to explore different details of why and how tumours and other organs allow or block certain things from getting in," adds Chan. He and his group plan to apply the delivery system they've designed toward personalized nanomedicine—further tailoring their particles to deliver drugs to your precise type of tumour, and nowhere else. More information: Edward A.
Sykes et al. Tailoring nanoparticle designs to target cancer based on tumor pathophysiology, Proceedings of the National Academy of Sciences (2016). DOI: 10.1073/pnas.1521265113 In one of the first efforts to date to apply nanotechnology to targeted cancer therapeutics, researchers have created a nanoparticle formulation of a cancer drug that is both effective and nontoxic — qualities harder to achieve with the free drug. Their nanoparticle creation releases the potent but toxic targeted cancer drug directly to tumors, while sparing healthy tissue. The findings in rodents with human tumors have helped launch clinical trials of the nanoparticle-encapsulated version of the drug, which are currently underway. Aurora kinase inhibitors are molecularly targeted agents that disrupt cancer's cell cycle. While effective, the inhibitors have proven highly toxic to patients and have stalled in late-stage trials. Development of several other targeted cancer drugs has been abandoned because of unacceptable toxicity. To improve drug safety and efficacy, Susan Ashton and colleagues designed polymeric nanoparticles called Accurins to deliver an Aurora kinase B inhibitor currently in clinical trials. The nanoparticle formulation used ion pairing to efficiently encapsulate and control the release of the drug. In colorectal tumor-bearing rats and mice with diffuse large B cell lymphoma, the nanoparticles accumulated specifically in tumors, where they slowly released the drug to cancer cells. Compared to the free drug, the nanoparticle-encapsulated inhibitor blocked tumor growth more effectively at one half the drug dose and caused fewer side effects in the rodents. A related Focus by David Bearss offers more insights on how Accurin nanoparticles may help enhance the safety and antitumor activity of Aurora kinase inhibitors and other molecularly targeted drugs. The above post is reprinted from materials provided by American Association for the Advancement of Science.
Note: Materials may be edited for content and length. - Susan Ashton, Young Ho Song, Jim Nolan, Elaine Cadogan, Jim Murray, Rajesh Odedra, John Foster, Peter A. Hall, Susan Low, Paula Taylor, Rebecca Ellston, Urszula M. Polanska, Joanne Wilson, Colin Howes, Aaron Smith, Richard J. A. Goodwin, John G. Swales, Nicole Strittmatter, Zoltán Takáts, Anna Nilsson, Per Andren, Dawn Trueman, Mike Walker, Corinne L. Reimer, Greg Troiano, Donald Parsons, David De Witt, Marianne Ashford, Jeff Hrkach, Stephen Zale, Philip J. Jewsbury, and Simon T. Barry. Aurora kinase inhibitor nanoparticles target tumors with favorable therapeutic index in vivo. Science Translational Medicine, 2016 DOI: 10.1126/scitranslmed.aad2355 Nanoparticles disguised as human platelets could greatly enhance the healing power of drug treatments for cardiovascular disease and systemic bacterial infections. These platelet-mimicking nanoparticles, developed by engineers at the University of California, San Diego, are capable of delivering drugs to targeted sites in the body — particularly injured blood vessels, as well as organs infected by harmful bacteria. Engineers demonstrated that by delivering the drugs just to the areas where the drugs were needed, these platelet copycats greatly increased the therapeutic effects of drugs that were administered to diseased rats and mice. The research, led by nanoengineers at the UC San Diego Jacobs School of Engineering, was published online Sept. 16 in Nature. “This work addresses a major challenge in the field of nanomedicine: targeted drug delivery with nanoparticles,” said Liangfang Zhang, a nanoengineering professor at UC San Diego and the senior author of the study. 
“Because of their targeting ability, platelet-mimicking nanoparticles can directly provide a much higher dose of medication specifically to diseased areas without saturating the entire body with drugs.” The study is an excellent example of using engineering principles and technology to achieve “precision medicine,” said Shu Chien, a professor of bioengineering and medicine, director of the Institute of Engineering in Medicine at UC San Diego, and a corresponding author on the study. “While this proof of principle study demonstrates specific delivery of therapeutic agents to treat cardiovascular disease and bacterial infections, it also has broad implications for targeted therapy for other diseases such as cancer and neurological disorders,” said Chien. The ins and outs of the platelet copycats On the outside, platelet-mimicking nanoparticles are cloaked with human platelet membranes, which enable the nanoparticles to circulate throughout the bloodstream without being attacked by the immune system. The platelet membrane coating has another beneficial feature: it preferentially binds to damaged blood vessels and certain pathogens such as MRSA bacteria, allowing the nanoparticles to deliver and release their drug payloads specifically to these sites in the body. Enclosed within the platelet membranes are nanoparticle cores made of a biodegradable polymer that can be safely metabolized by the body. The nanoparticles can be packed with many small drug molecules that diffuse out of the polymer core and through the platelet membrane onto their targets. To make the platelet-membrane-coated nanoparticles, engineers first separated platelets from whole blood samples using a centrifuge. The platelets were then processed to isolate the platelet membranes from the platelet cells. Next, the platelet membranes were broken up into much smaller pieces and fused to the surface of nanoparticle cores. 
The resulting platelet-membrane-coated nanoparticles are approximately 100 nanometers in diameter, which is one thousand times thinner than an average sheet of paper. This cloaking technology is based on the strategy that Zhang’s research group had developed to cloak nanoparticles in red blood cell membranes. The researchers previously demonstrated that nanoparticles disguised as red blood cells are capable of removing dangerous pore-forming toxins produced by MRSA, poisonous snake bites and bee stings from the bloodstream. By using the body’s own platelet membranes, the researchers were able to produce platelet mimics that contain the complete set of surface receptors, antigens and proteins naturally present on platelet membranes. This is unlike other efforts, which synthesize platelet mimics that replicate one or two surface proteins of the platelet membrane. “Our technique takes advantage of the unique natural properties of human platelet membranes, which have a natural preference to bind to certain tissues and organisms in the body,” said Zhang. This targeting ability, which red blood cell membranes do not have, makes platelet membranes extremely useful for targeted drug delivery, researchers said. Platelet copycats at work In one part of this study, researchers packed platelet-mimicking nanoparticles with docetaxel, a drug used to prevent scar tissue formation in the lining of damaged blood vessels, and administered them to rats afflicted with injured arteries. Researchers observed that the docetaxel-containing nanoparticles selectively collected onto the damaged sites of arteries and healed them. When packed with a small dose of antibiotics, platelet-mimicking nanoparticles can also greatly minimize bacterial infections that have entered the bloodstream and spread to various organs in the body. 
Researchers injected nanoparticles containing just one-sixth the clinical dose of the antibiotic vancomycin into a group of mice systemically infected with MRSA bacteria. The organs of these mice ended up with bacterial counts up to one thousand times lower than mice treated with the clinical dose of vancomycin alone. "Our platelet-mimicking nanoparticles can increase the therapeutic efficacy of antibiotics because they can focus treatment on the bacteria locally without spreading drugs to healthy tissues and organs throughout the rest of the body," said Zhang. "We hope to develop platelet-mimicking nanoparticles into new treatments for systemic bacterial infections and cardiovascular disease." - Che-Ming J. Hu, Ronnie H. Fang, Kuei-Chun Wang, Brian T. Luk, Soracha Thamphiwatana, Diana Dehaini, Phu Nguyen, Pavimol Angsantikul, Cindy H. Wen, Ashley V. Kroll, Cody Carpenter, Manikantan Ramesh, Vivian Qu, Sherrina H. Patel, Jie Zhu, William Shi, Florence M. Hofman, Thomas C. Chen, Weiwei Gao, Kang Zhang, Shu Chien, Liangfang Zhang. Nanoparticle biointerfacing by platelet membrane cloaking. Nature, 2015; DOI: 10.1038/nature15373 Bryan Berger, Himanshu Jain, Chao Zhou and a half dozen other faculty members were invited to give presentations last month at the TechConnect 2015 World Innovation Conference. Lehigh's participation was organized by the Office of Technology Transfer. Lehigh scientists and engineers won three National Innovation Awards recently at the TechConnect 2015 World Innovation Conference and National Innovation Showcase held in Washington, D.C. The awards were for a nanoscale device that captures tumor cells in the blood, a bioengineered enzyme that scrubs microbial biofilms, and a safe, efficient chemical reagent that is stable at room temperature. Lehigh's TechConnect initiative was led by the Office of Technology Transfer (OTT) which manages, protects and licenses intellectual property (IP) developed at Lehigh.
Yatin Karpe, associate director of the OTT, spearheaded the Lehigh effort and is pursuing IP protection and commercialization for the innovations. The P.C. Rossin College of Engineering and Applied Science, led by former interim Dean Daniel Lopresti, and the Office of Economic Engagement, led by assistant vice president Cameron McCoy, supported Lehigh's third-straight appearance at the annual conference. The three National Innovation Awards were chosen through an industry review of the top 20 percent of annually submitted technologies and based on the potential positive impact the technology would have on industry. This is the third year in a row that Lehigh has won Innovation Awards. No institution received more than three in 2015. Lehigh's National Innovation Awardees were: • Yaling Liu, assistant professor of mechanical engineering and mechanics and a member of the bioengineering program, has developed a tiny device that can capture tumor cells circulating in the blood and can potentially indicate disease type, as well as genetic and protein markers that may provide potential treatment options. • David Vicic, professor and department chair of chemistry, has created a new chemical reagent that is stable at room temperature, potentially eliminating the use of traditional hazardous reagents. TechConnect is one of the largest multi-sector gatherings in the world of technology intellectual property, technology ventures, industrial partners and investors. The event brings together the world's top technology transfer offices, companies and investment firms to identify the most promising technologies and early stage companies from across the globe. "This event is a productive opportunity to establish new connections with industry and government partners, many within easy reach of Lehigh," said Gene Lucadamo, the industry liaison for Lehigh's Center for Advanced Materials and Nanotechnology and the Lehigh Nanotech Network.
"Some of these connections are with alumni in business or government, and even with nearby Pennsylvania companies that were attracted to Lehigh innovations. These interactions allow us to promote research capabilities and facilities which are available through our Industry Liaison Program, and to identify opportunities for collaborations and funding." In addition to the three National Innovation Awards, Lehigh researchers won seven National Innovation Showcase awards and presented five conference papers in areas as diverse as the biomanufacturing of quantum dots, a 3-D imaging technique 20 times faster than current systems, the creation of a miniature medical oxygen concentrator for patients with Chronic Obstructive Pulmonary Disease (COPD), and a biomedically superior bioactive glass that mimics bone. Attendees include innovators, funding agencies, national and federal labs, international research organizations, universities, tech transfer offices and investment and corporate partners. The 2015 TechConnect World Innovation event encompasses the 2015 SBIR/STTR National Conference, the 2015 National Innovation Summit and Showcase, and Nanotech2015—the world's largest nanotechnology event. The following is a list of the Lehigh faculty members who gave presentations at TechConnect 2015: • A wavy micropatterned microfluidic device for capturing circulating tumor cells (Principal investigator: Liu) • Bioengineered enzymes that safely and cheaply fight bacterial biofilms (Principal investigator: Berger) • New reagents for octafluorocyclobutane transfer that eliminate the use of hazardous tetrafluoroethylene (Principal investigator: Vicic) • A method to cheaply manufacture quantum dots using bacteria (Principal investigator: Berger) • A multiplexing optical coherence tomography technology 20 times faster than current systems that preserves image resolution and allows synchronized cross-sectional and three-dimensional (3D) imaging.
(Principal investigator: Chao Zhou, electrical and computer engineering) • A miniature medical oxygen concentrator for COPD patients (Principal investigator: Mayuresh Kothare, chemical and biomolecular engineering) • A biomedically superior bioactive glass that enables the production of porous bone scaffolds that can be tailored to match the tissue growth rate of a given patient type (Principal investigator: Himanshu Jain, materials science and engineering) • A new distributed-feedback technique that dramatically improves the laser beam patterns and increases the output power levels of semiconductor lasers (Principal investigator: Sushil Kumar, electrical and computer engineering) • A new pretreatment process to remove unwanted impurities in ceramic powders without any change in the physical properties, leading to better reproducibility of properties and reliability in the final products (Principal investigator: Martin Harmer, materials science and engineering)
A smart card is a card with an embedded integrated circuit chip whose contents can be altered by the card's microcontroller or by an external device with access to the internal memory. The card is connected to a card reader either through physical contact or through a contactless radio-frequency interface. The microchips in a smart card can store large amounts of data and carry out functions of their own, such as encryption or mutual authentication. Smart card technology conforms to the international standards ISO/IEC 7816 and ISO/IEC 14443, and the cards come in various forms: plastic cards, fobs, SIMs for GSM phones and USB dongles, in both contact and contactless variants. History of Smart Cards The proliferation of plastic cards started in the USA in the early 1950s, when cheap PVC proved far better suited to everyday use than the paper and cardboard cards used previously, which could not adequately withstand mechanical stresses and climatic effects. One of the first all-plastic cards on the market was issued by Diners Club in 1950. It allowed an exclusive class of individuals to pay for goods with their 'good name' instead of cash. The entry of Visa and MasterCard led to a very rapid proliferation of credit cards, first in the USA and then in European countries over the following years. Today, shoppers can shop without cash almost anywhere in the world, without the hassle of currency exchange, and cardholders are never at a loss for a means of payment, so widely are these cards accepted. The development of the smart card was made possible by the enormous progress in microelectronics in the 1970s, which made it possible to integrate data storage and processing logic on a single integrated circuit chip the size of a small finger.
The idea of integrating a processing chip into an identification card was proposed and patented by the German inventors Jürgen Dethloff and Helmut Gröttrup as early as 1968. This was followed in 1970 by a similar patent by the Japanese inventor Kunitaka Arimura. However, the first real breakthrough in the technology came from Roland Moreno in France in 1974. Only then was the semiconductor industry able to supply the necessary integrated circuits at acceptable prices. Even though the patents were well protected, development was very difficult, and many technical problems had to be solved before the first prototypes, some requiring the integration of several chips, could be transformed into reliable products that could be manufactured in large quantities at reasonable cost. Since the basic inventions originated in France and Germany, it is not surprising that those countries played leading roles in developing and marketing the smart card as we know it today. The major breakthrough came in 1984, when the French PTT (the postal and telecommunications services agency) successfully carried out a field trial with telephone cards. In this trial, smart cards immediately proved to meet all expectations with regard to high reliability and protection against manipulation. Significantly, this breakthrough for smart cards did not come in an area where traditional cards were already used, but in a new application. Introducing new technology in a new application has the great advantage that compatibility with existing systems does not have to be taken into account, so the capabilities of the new technology can be fully exploited. Also in 1984, engineers in Germany conducted comparative tests of cards based on different technologies: magnetic-stripe cards, optical-storage cards and smart cards.
Smart cards proved to be the winners of this pilot study. Their advantages include a high degree of reliability, security against manipulation and the greatest degree of flexibility for future applications. Further developments followed the successful trials of telephone cards, first in France and then in Germany, with breathtaking speed. By 1986, several million telephone smart cards were in circulation in France alone; the number rose to nearly 60 million in 1990 and to more than 300 million worldwide in 1997. Germany experienced similar progress, with a lag of about three years behind France. Telephone cards with integrated chips are currently used in more than 50 countries. Microprocessor cards using EEPROM were first introduced in 1988 by the German Post Office, as authorization cards for the analog mobile telephone network. The reason for introducing such cards was the increasing incidence of fraud with the magnetic-stripe cards used up to that time. This provided the foundation for the introduction of smart cards into the digital GSM network, which was put into service in Europe in 1991. The smart card proved to be an ideal medium: it could safely store secret keys and execute cryptographic algorithms, and it is so small and easy to handle that everybody can use it everywhere in everyday life. Naturally, people attempt to crack smart cards, so a bank card's security measures must be at a much higher level than those of a smart card that carries no monetary value. French banks were among the first to introduce bank cards, with a trial run of 60,000 cards starting in 1982 and general introduction in 1984; however, it took another ten years before they integrated chips into them. German banks followed in 1985 with a multi-functional payment card, but they failed to issue a specification for the integration of chips and thus did not win consumers' trust.
In 1996, multifunctional smart cards with POS (point-of-sale) functions, an electronic purse and optional value-added services were issued throughout Austria, making it the first country to have a nationwide e-purse system. In the USA, where the smart card has had a hard time catching on, it is beginning to become established: VISA experimented with a smart card purse payment system at the 1996 Olympic Summer Games in Atlanta. However, the problem of making secure yet anonymous payments via the Internet has still not been solved in a satisfactory manner. For this reason, several European countries initiated electronic signature systems after a legal basis for the use of electronic signatures was provided in 1999. Besides payment, telephone and healthcare purposes, smart cards offer a high degree of functional flexibility and are particularly convenient and user-friendly. Uses of Memory Cards The first smart cards used by the mass public were memory cards for telephone applications. These prepaid cards store the balance in the chip, where it is reduced with each use. A magnetic-stripe card used for the same purpose is easily manipulated: all a user has to do is record the data stored at the time of purchase and rewrite them to the magnetic stripe after using the card. This type of manipulation is known as 'buffering'. It can be prevented by using chip cards protected by security logic, which makes such manipulation impossible and the debiting irreversible. This applies not only to telephone cards, but to any 'cashless' purchase of goods or services; examples include mass public transport, vending machines, cafeterias and car parks. The advantage of this type of card lies in the simplicity of its technology (the chip occupies only a few square millimetres) and in its low cost.
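The 'buffering' defence described above boils down to one property: the security logic offers no path that increases the stored value, so an old balance can never be written back. The following sketch models that property in Python; the class and method names are invented for illustration, not part of any card standard.

```python
# Minimal model of a prepaid chip card's security logic: the stored
# units can only ever decrease, so the 'buffering' attack (recording
# the balance and restoring it later) has no interface to exploit.

class PrepaidChipCard:
    def __init__(self, units: int):
        self._units = units

    @property
    def units(self) -> int:
        return self._units          # the balance may be read freely

    def debit(self, n: int) -> bool:
        """Irreversibly consume n units; there is no credit path."""
        if 0 < n <= self._units:
            self._units -= n
            return True
        return False                # insufficient balance: refused

card = PrepaidChipCard(units=50)
card.debit(20)                       # e.g. a phone call consumes units
assert card.units == 30
# Deliberately no method exists to raise the balance: unlike a
# magnetic stripe, the old value cannot be written back.
```

The design choice is simply the absence of a write-up operation, which is what makes the debiting irreversible.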
However, the disadvantage is that the card cannot be reused once its value is exhausted and must be discarded. Another common application of the memory card is the German health insurance card, which has been issued since 1994 to every citizen enrolled in the plan. The patient's information is stored in the chip and laser-engraved onto the card; using a data-storage chip makes the card machine-readable for authentication. In short, these types of smart cards have limited functionality, and their security logic makes it possible to protect privacy and guard against manipulation. They are usually used as prepaid cards or identification cards in low-cost environments. Microprocessor cards were first used in the form of bank cards in France. Their ability to store private keys and execute modern cryptographic algorithms made it possible to implement highly secure offline payment systems. Since the microprocessor has limited capacity, its functionality is constrained by the memory available; within that limit, however, it can be freely programmed according to need, limited only by the designer's imagination. Following a drastic reduction in the cost of smart cards in the early 1990s due to mass production, new application areas have opened up every year. The use of smart cards with mobile telephones has been especially important for their international proliferation. After being successfully tested in Germany on the analog telephone system, smart cards were prescribed as the access medium for the European digital mobile telephone system (GSM). Since the card is separate from the telephone, new marketing possibilities open up: mobile operators and telephone sellers can market the telephone and the subscription independently.
Possible applications for microprocessor cards include identification, access control systems for restricted areas, secure data storage, electronic signatures and electronic purses, as well as multifunctional cards that combine several applications in a single card. Most modern smart card systems also allow new applications to be installed even after the card has been issued to the user, without compromising its security. This flexibility opens up completely new application areas. For example, personal security modules are indispensable if Internet commerce and payments are to be made trustworthy. Such security modules can securely store personal keys and execute high-performance cryptographic algorithms; these tasks can be performed in an elegant manner by a microprocessor with a cryptographic coprocessor. Specifications for secure Internet applications using smart cards are currently being developed throughout the world, and within a few years we can expect to see every PC equipped with a smart-card interface. In short, the main advantages of microprocessor smart cards are their large storage capacity, their ability to store confidential data under high security and their ability to execute cryptographic algorithms. This opens the door to a wide range of applications beyond the current ones; the potential of these cards is by no means exhausted, and it continues to expand with progress in the semiconductor industry. Contactless cards are memory cards or microprocessor cards that transfer data without any electrical contact with the terminal. They have achieved the status of commercial products in the last few years and will continue to spread in the coming decades. Although a contactless card usually works only within about a centimetre of the terminal, it does not necessarily have to be held in the user's hand during use; it can remain in the user's purse.
Contactless cards are particularly suitable for a wide range of public applications and are very easy to deploy. Sample applications and uses include:
- Access control in private areas, be it a company or an apartment
- Public transportation
- Airline staff check-in/check-out
- Baggage identification
- Immigration identification
Despite its security measures, use of such a card over a long distance could cause problems and should therefore be prevented. A typical example is an electronic purse, where a declaration of intent on the part of the cardholder is normally required to complete a transaction: the cardholder confirms the amount of the payment, and thus the agreement to pay, using the keypad. Without this step, a fraudster would have the opportunity to remove money from the electronic purse without the cardholder's knowledge. The best remedy is the dual-interface card, which has both contact and contactless functionality in a single card and can communicate with the terminal via either interface, according to the user's preference. Contactless smart cards are especially common in public transportation, where the frequency and speed with which passengers board and alight determine the revenues of the transport company. Smart cards are governed by international ISO/IEC standards, which define their basic properties. ISO stands for the International Organization for Standardization, and IEC for the International Electrotechnical Commission. The two organizations cooperate where their work would otherwise overlap: IEC covers the fields of electrical technology and electronics, while ISO covers the remaining fields. Joint working groups are formed to deal with themes of common interest, and these groups produce combined ISO/IEC standards.
Smart cards belong to this category. As seen in the table, two technical committees are concerned with the standardization of smart cards: ISO TC68/SC6 is responsible for cards used in the financial area, while ISO/IEC JTC1/SC17 is responsible for general applications. After more than 20 years of standardization effort, the most important ISO standards for smart cards are now complete. They build on the prior ISO standards of the 7810, 7811, 7812 and 7813 families, which define the properties of identification cards in the ID-1 format, including embossed and magnetic-stripe cards. In the past few years, an increasing number of specifications have been put forward and published by industrial organizations, with no attempt made to incorporate them into the standardization activities of ISO. This practice is due to ISO's manner of working, which is too slow to keep up with the fast pace and short innovation cycles of the informatics and telecommunications industries. A major challenge for the future of ISO is to devise a working practice that safeguards the general interest without hampering the pace of innovation. Types of Cards Embossing is the oldest technique for adding machine-readable features to identification cards, and it still survives in some developing countries. The embossed characters, like the numbers embossed on present-day credit cards, can easily be read visually. The nature and location of the embossing are specified in the ISO 7811 standard ('Identification Cards - Recording Technique'), which also covers magnetic stripes. At first glance, transferring information by printing from embossed characters may appear primitive; however, this simple technique made the worldwide proliferation of credit cards possible, since it requires neither electrical energy nor a connection to a telephone network.
Magnetic-stripe cards are read by pulling the card across a read head, either manually or automatically, with the data being read and stored electronically; no paper is required. The magnetic stripe may contain up to three tracks: tracks 1 and 2 are specified as read-only, while track 3 may also be written to. The storage capacity of about 1,000 bits is not very much, but it is enough to hold the information contained in the embossing. Additional data, such as the most recent transaction details of a credit card, can be read from and written to track 3. The main drawback of the technology is that the stored data can be altered very easily. Manipulating embossed characters requires a certain amount of manual dexterity and can be easily detected by a trained eye; the data recorded on the stripe, by contrast, can be altered using a standard read/write device, and it is difficult to prove afterwards that the data have been altered. Adding to this vulnerability, such cards are often used in automated terminals where visual inspection is not possible. A criminal who has retrieved valid card data can easily use a duplicated card in such unattended machines, without having to forge the card's visual security features. Manufacturers have developed various techniques to protect the data recorded on the stripe. German Eurocheque cards, for example, contain an invisible, unalterable code in the body of the card, which makes it impossible to alter or manipulate the stripe data undetected; however, this technology requires special devices in the terminals, which increases costs. The smart card is the newest and cleverest member of the identification card family. It features an integrated circuit embedded in the card, which allows it to transmit, store and process data. The data can be transmitted by either contact or contactless means, the latter taking advantage of electromagnetic fields. Compared with magnetic-stripe cards, smart cards offer many times greater storage capacity.
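The track layout standardized in ISO 7811 is simple enough to parse by hand. The sketch below decodes a Track 2 string (start sentinel ';', primary account number, '=' separator, expiry date as YYMM, then issuer-defined discretionary data, end sentinel '?'); real tracks also carry a longitudinal redundancy check character, which this simplified example ignores, and the sample PAN is a made-up test number.

```python
# Simplified Track 2 parser for the ISO 7811 magnetic-stripe layout:
#   ;<PAN>=<YYMM><discretionary data>?
# The redundancy check character is deliberately ignored here.

def parse_track2(track: str) -> dict:
    if not (track.startswith(";") and track.endswith("?")):
        raise ValueError("missing start or end sentinel")
    body = track[1:-1]
    pan, _, rest = body.partition("=")
    return {
        "pan": pan,                  # primary account number
        "expiry": rest[:4],          # expiry date, YYMM
        "discretionary": rest[4:],   # issuer-defined data
    }

fields = parse_track2(";4539148803436467=27051010000000000?")
assert fields["pan"] == "4539148803436467"
assert fields["expiry"] == "2705"
```

The whole record fits comfortably inside the roughly 1,000 bits the text mentions, which is why the stripe can hold little more than what the embossing already carries.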
With more than 256 kB of memory available, many more applications can be supported on a smart card than on a magnetic-stripe card. Besides its large storage capacity, the card is also protected against unauthorized access and manipulation. Since the data can only be accessed through a serial interface that is controlled by the operating system and security logic, confidential data can be written to the card and stored in a manner that prevents it from ever being read from outside the card; such data can only be processed internally by the chip's processing unit. This makes it possible to construct a variety of security mechanisms, which can also be tailored to the specific requirements of a particular application. Combined with the ability to compute cryptographic algorithms, this allows smart cards to be used as convenient security modules that users can carry at all times. Additional advantages of smart cards are their high level of reliability and long life compared with magnetic-stripe cards, whose lifetime is limited to one or two years at most. Smart cards can be divided into two groups, which differ in both functionality and price: memory cards and microprocessor cards.

Memory Smart Cards

The data needed by the application are stored in the EEPROM memory. Access to the memory is controlled by the security logic, which in the simplest case consists only of write protection or erase protection for the memory or certain memory regions. Data is transferred via the I/O port; some memory cards use the I²C bus, which is commonly used for serial-access memories. The functionality of a memory card is usually optimized for a specific application. Although this severely restricts the flexibility of the cards, it makes them quite inexpensive. Memory cards are typically used for prepaid telephone cards and health cards.
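The simplest form of memory-card security logic mentioned above — plain write protection for certain memory regions — can be illustrated with a toy model in C. This is our own illustration of the idea, not actual card firmware; the structure and function names are invented for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define MEM_SIZE 128

/* Toy model of a memory card whose first 'protected_up_to' bytes
   are write-protected by the security logic. */
typedef struct {
    uint8_t mem[MEM_SIZE];
    int     protected_up_to; /* writes below this offset are refused */
} memory_card;

/* Attempt a write; returns true if the security logic accepted it. */
static bool card_write(memory_card *c, int offset, uint8_t value)
{
    if (offset < c->protected_up_to || offset >= MEM_SIZE)
        return false; /* blocked by write protection or out of range */
    c->mem[offset] = value;
    return true;
}
```

A real card implements this check in hardwired logic rather than software, which is precisely what makes it cheap.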
Microprocessor Smart Card

A microprocessor smart card functions like a miniature PC, with a CPU, memories (ROM, EEPROM and RAM) and an I/O port. The ROM stores the chip's operating system, which is 'burned in' when the chip is manufactured; its contents cannot be changed after production. The EEPROM is the chip's non-volatile memory, to which data and program code are written and from which they are read under the control of the operating system. The RAM is the working memory of the processor and is volatile, so all data stored in it are lost when the chip's power is switched off. Microprocessor cards are very flexible. In the simplest case, they contain a program optimized for a single application, so they can only be used for that particular application. Modern operating systems, however, allow multiple applications to coexist on a single card, and recent technology even allows application programs to be loaded into a card after it has been personalized and issued to the cardholder.

Contactless Smart Cards

Despite the success of the contact smart card on the market, it is still prone to some failures. The contacts are subject to wear and contamination, and in mobile equipment vibrations can cause brief intermittent contact. There is also a risk that the embedded integrated circuit may be damaged or destroyed by electrostatic discharge. These technical problems are elegantly avoided by contactless smart cards. Without compromising the technology of contact smart cards, contactless cards offer a range of new and attractive potential applications. For example, a contactless card does not need to be inserted into a card reader, which avoids the friction and wear this causes.
This is a great advantage in access-control systems where a door has to be opened, since the access authorization of a person can be checked without requiring the card to be removed from a purse and inserted into a reader. Another example is public transport, where people must be identified in the shortest possible time. The memory chip embedded in the card body uses inductive coupling, with a single coil for both power and data transfer. With this technology, the terminal can both read and write data at a distance of up to 1 m. The typical transaction time between the terminal and the card is around 100 ms, at a clock frequency of 13.56 MHz.

Smart Cards in Singapore

Singapore recently introduced the NETS FlashPay card, whose primary function is NETS payment. The card can be used for paying for goods with the convenience of contactless technology, for public transportation, and for payment at participating outlets. Another smart card is the contactless magnetic-stripe credit card that can also be used for public transportation: a multi-functional card that handles both charging your account and mass transit in the comfort of a single card.

Case Study of the Smart Card and Its Security Measures: 2009 NETS FlashPay (Singapore)

The NETS FlashPay is a new-generation contactless, multipurpose stored-value CashCard that can be used for everyday purchases. On top of its main application, which is NETS payment, it is also a card you can use for public transport. That said, contactless cards have some security weaknesses that leave them prone to attacks. Most known successful attacks on smart cards have been at the logical level; these attacks arise from pure mental reflection or computation. This category includes classical cryptanalysis, as well as attacks that exploit known faults in smart card operating systems.
Attacks can be divided into passive and active types. In a passive attack, the attacker analyzes the ciphertext or cryptographic protocol without modifying it, and may for example make measurements on the semiconductor device. In an active attack, by contrast, the attacker manipulates the data transmission process or the microcontroller itself. Attacking a modern smart card at the physical level normally takes a large amount of technical effort. Depending on the attack scenario, the equipment required may include a microscope, a laser cutter, micromanipulators, focused ion beams, chemical etching equipment and very fast computers for analyzing, logging and evaluating the electrical processes in the chip. This equipment, and the knowledge of how to use it, is available to only a few specialists and organizations, which strongly reduces the probability of an attack at the physical level. Nevertheless, a card or semiconductor manufacturer must assume that a potential attacker could employ the devices and equipment necessary for such an attack, which means that suitable protection must be built into the hardware. In order to conduct an attack at the physical level, a few preliminary steps are necessary. The first thing that has to be done is to remove the module from the card, which can easily be done using a sharp knife. After this, the epoxy resin must be removed from the chip. Anderson and Kuhn used fuming nitric acid for this, with an infrared lamp as a heat source, followed by an acetone rinse to clean the chip. After this, the semiconductor chip is free and still fully operational. Many people think that the chip now lies unprotected and only has to be 'read out', but this is by no means so: an attacker still has to work through a host of security measures before he can gain access to the card's secrets. The protective measures in the hardware can be divided into passive and active components.
The passive components are based directly on the techniques used in semiconductor manufacturing. They include all processes and options that can be used to protect the memory region and the other functional parts of the microcontroller against various types of analysis. A full spectrum of active components is available on a silicon chip to complement the passive possibilities offered by the semiconductor technology. Active protection means the integration of various types of sensors into the silicon crystal. These sensors are queried and evaluated by the smart card software as needed. This is naturally only possible when the chip is fully powered and operational; a chip without electrical power cannot measure any sensor signals, let alone evaluate them. Where sensors are concerned, the boundary between useful protective components and technical gadgetry is particularly narrow. A light-sensitive sensor that is supposed to prevent optical analysis of the memory will not respond if the chip is sitting on the stage of an optical microscope without power or a clock signal. In addition, it is very easy to visually identify such a sensor on the chip surface and cover it with a drop of black ink, so its protective function can easily be neutralized even when the chip is operating. This can, however, be countered by distributing a large number of light sensors over the entire chip. Long-term functional security is also an important consideration: responding to a brief but non-damaging overheating of the chip makes absolutely no contribution to increased functional security or security against an attack. Consequently, most smart card microcontrollers employ only a few sensors. In the following descriptions, we explain the protective mechanisms of smart card microcontrollers that are the most important and the most often used in practice.
By now we have seen how the smart card system is secured, and we can look forward to its development over the next few decades. Whether for biometrics or for public transport, we can expect our lives to become easier and our wallets lighter: a single all-in-one card holding private information ranging from your credit balance to access rights for your company's premises. Frequent travellers would no longer need to go through the hassle of checking in and out in long queues, and a SIM card could automatically retrieve your personal information and enter it into your mobile phone. Everything will be seamless and contactless, and as the semiconductor industry grows, we will be able to install more and more applications on our smart cards.
For our engineering project, our tutors wanted us to face the challenges of designing a real-time system with relatively high performance on limited resources (memory, bandwidth). The specifications require a gaming platform using the following hardware:
- a Digilent Nexys 3 board (for implementing a GPU on the FPGA).
- a Keil MCBSTM32F400 board (for hosting the OS of the platform and storing the game data).
- a Display Tech DT035TFT LCD with a Novatek NT39016 driver (portable true-colour display).
There are two teams of two students working on this project: one team is focused on the ARM MCU and the other on the GPU. The platform has to match the performance of a 16-bit commercial gaming platform such as the SNES or Sega Mega Drive, with multilayer frames and scrolling. The platform consists of two main components: the MCU of the motherboard and the GPU connected to the video output.
- The MCU-specific requirements are: a graphics API for the GPU, an audio API for the onboard audio codec, user IO, the MCU/GPU interface, an SD card interface, and the programming of the video game itself. A module for configuring the LCD screen (brightness, contrast, etc.) inside the GPU is also considered.
- The GPU-specific requirements are: multilayer display, blending of the different layers using transparency, 16-bit RGBA colours, multilayer scrolling, basic 2D operations (bitblit (copy), colour fill, transparency modification, and their combinations (clear, move, etc.)), primitive generation (lines, circles, text), and LCD and VGA video outputs, backed by a graphics-oriented memory controller with DMA access.
The two teams will need to collaborate regularly to develop the two main components previously mentioned, and we have designed the architecture of the platform to ensure this. Our team will start the implementation by providing all the required interfaces surrounding the GPU, such as support for the LCD and the connection towards the MCU board.
This will be developed in parallel with the design of the HDL modules associated with these interfaces. At this point, a preliminary integration with the GPU will take place in order to ensure the consistency and interoperability of both modules. This will be followed by software design on the MCU of the required peripheral drivers, the audio and video APIs and finally the RTOS. After the final integration with the graphics team, involving all the GPU modules, the planned game will be implemented and tested. Before going through the details, you can check out this YouTube link containing a brief summary of our project and a video showing what we have managed to do so far. The project is not yet complete, but we will keep updating this page any time a new feature is added.

First demonstration: Animation

In this demo there are two display layers in the frame buffer: the background is an image of stars at 320x240, and the foreground is a 3200x240 image whose fixed background colour is set to a transparent colour while converting the bmp file into our format. The MCU scrolls periodically over the foreground image to create the animated movement.

Second demonstration: A short gameplay

In this demo we provide a short gameplay session using sprites and background images from Streets of Rage (abandonware). In this case, animations are created using bitblits on the foreground, and the movement of the character is created using scrolling. You can also see primitive generation at the end to display a message.

Step 1: Materials

In order to work on this project, you will need the following materials:

MCBSTM32F400 - ARM Cortex-M4

This MCU board is the host of our real-time operating system, the high-level graphics API and the high-level audio API. Key features regarding our project: an audio codec with Line-In/Out and Speaker/Microphone is available on the MCU board and will be used for in-game audio.
2.4-inch colour QVGA TFT LCD with resistive touchscreen: this LCD screen will be removed from the MCU board, revealing a 34-pin connector that will be used to connect the Nexys 3 board to the MCU. Flexible Static Memory Controller (FSMC): the FSMC is embedded in the MCU. It has four chip-select outputs supporting the following modes: PC Card/CompactFlash, SRAM, PSRAM, NOR Flash and NAND Flash. For our application, we will use the SRAM mode in order to transfer data between the FPGA board and the MCU board. DMA Controller: the device features two general-purpose dual-port DMAs with 8 streams each. They are able to manage memory-to-memory, peripheral-to-memory and memory-to-peripheral transfers. We will use the DMA controller for quick, direct transfers of sprites and background images to the FPGA memory (video RAM). MicroSD Card Interface: the SD card slot available on the MCBSTM32F400 board will be used to load any game to be run on our portable console. Push Buttons and 5-position Joystick: the MCU board has two push-buttons and a 5-position joystick that we can use to play any game on our console.

FPGA - Xilinx Spartan-6

Our GPU will be implemented on the Nexys 3 board. Key features regarding our project:
- 16 MByte Micron CellularRAM: the CellularRAM supports asynchronous operation with a 70 ns access time, and burst access operation at rates up to 80 MHz.
- 8-bit VGA: the VGA port will be used for debugging purposes. The actual application will be displayed on the Display Tech DT035TFT LCD.
- Four double-wide Pmod™ connectors: these connectors will be used to connect the MCU board to the Nexys 3 board.
- VHDC connector: this connector will be used to connect the LCD to the FPGA board.

Display Tech DT035TFT LCD: this LCD will replace the one integrated with the MCU board. It is a more powerful 24-bit RGB LCD with a Novatek NT39016 driver.
LCD - Nexys 3 PCB: the main purpose of this PCB is to connect the FPGA to the LCD using the VHDC connector of the FPGA. The first thing to do is to connect the data signals coming from the Novatek chip of the LCD to the connector where the VHDC cable will be plugged in. The ground is connected directly to the supply ground. In order to generate the 18 V used to power the backlight of the LCD, we used a variable voltage regulator to bring the 24 V delivered by the power supply down to 18 V, connected directly to the LCD. To generate the 3.3 V used as a power supply for the LCD, we used another, fixed voltage regulator. Since this regulator generates the 3.3 V from a 15 V input, we used a voltage divider bridge to derive 15 V from the 24 V brought by the power supply.

Nexys 3 - MCU PCB: in order to connect the STM32 microcontroller to the FPGA, we designed a very simple PCB containing only two connectors. The first one is connected to the LCD connector pins on the STM32; these pins are directly connected to the FSMC peripheral. The second connector is connected to the Pmod connectors of the FPGA. So we can summarise this PCB as a simple circuit routing the signals coming from the FSMC to the FPGA, more precisely to the MCU interface implemented on the FPGA.

Step 2: Bitmap Conversion Software

From the beginning of the project, we knew that it was impossible for us to support every colour format for the images we want to display. In order to fulfil the specifications and respect the technical constraints, we needed to choose a fixed format and stick with it. The required colour format is RGBA with 16-bit pixels: a 12-bit colour component and a 4-bit transparency component. Knowing that this format is not a standard one, we had to develop conversion software in order to create images in this format. Another advantage of such software is the ability to modify certain characteristics of the image, such as the transparency.
We chose C++ as the programming language and used Qt Creator and the Qt graphics library. C++ is a development language we are used to, and we knew that file streams would be easy to handle, so we managed to read the images we wanted to modify without any problem. Thanks to Qt Creator and the Qt library, we created a very simple graphical interface, making the use of the software very intuitive. The software we designed is very useful for changing the format of images so that they can be used on the FPGA board. The images to convert must be in the 24-bit BITMAP format, for several reasons:
- This format does not compress the images. Since the image we want to store in the FPGA will not be compressed, an uncompressed source image is necessary.
- FPGA image processing for compression/decompression (JPEG/MPEG) already exists as open-core IPs, but it is very hard to implement; it is much easier to process images that are already decompressed. Note that the larger size of such images is not a problem, thanks to the available memory and the speed of data transfer via the DMA.
- Its quality is superior to 16 bits.
- It is available everywhere (a lot of software, such as "Paint", can convert any type of image into the 24-bit BITMAP format).
- Transparency is not available, which gives us more flexibility in dealing with the transparency in our own way.
We mentioned the transparency management earlier. As a matter of fact, our software was also created to set the transparency levels of the colours of a given image. Since our graphics card can handle up to 4 independent display layers, it is crucial for us to be able to change the transparency of an image or set a transparent colour on it; otherwise multilayer display would bring no benefit.
We have two different options for this transparency:
- The first one consists of making a series of colours (5 maximum) fully transparent. Example: make the background of a sprite transparent.
- The second one consists of choosing the transparency of all the colours not covered by the first option. Example: make the image of a fire 50% transparent in order to refine its animation.

First step: loading the image and choosing the configuration parameters. We begin by choosing the image to convert and the path where we want to save the result. Then we set the transparency parameters explained in the previous section. When the image has finished loading, the software starts by reading the first bytes of the file. These bytes contain the dimensions of the image and are not copied to the output file, because the FPGA does not use this header data, only the data corresponding to the pixels. After the acquisition of these first bytes, the software can start the conversion.

Second step: converting the image. In this phase the software reads each byte defining the colour of a pixel in order to repack it into the 16-bit format. It is a simple process using a binary right shift of 4 bits so as to keep 4 bits per colour instead of 8. We then add to these 12 bits of colour the 4 bits of transparency defined by the parameters chosen at the beginning of the procedure.

Third step: setting the final layout of the image. This last step consists of rearranging the data. The BITMAP matrix stores the pixel rows in decreasing order, while the specification of our image format imposes an increasing order, so we have to rearrange the rows accordingly. We encountered a similar problem with the byte order, which we corrected by changing 0xRGBA to 0xGRAB.
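The per-pixel reduction in the second step — a right shift of 4 bits per channel plus a 4-bit alpha — can be sketched as follows. The nominal RGBA packing is shown here for illustration; the project then rearranges the byte order as described above, and the function name is ours.

```c
#include <stdint.h>

/* Reduce one 24-bit BMP pixel (8 bits per channel) to the 16-bit
   format: 4 bits each of R, G and B plus a 4-bit alpha component.
   Each 8-bit channel is truncated with a right shift of 4. */
static uint16_t pack_rgba4444(uint8_t r, uint8_t g, uint8_t b,
                              uint8_t alpha4 /* 0..15 */)
{
    return (uint16_t)(((uint16_t)(r >> 4) << 12) |
                      ((uint16_t)(g >> 4) << 8)  |
                      ((uint16_t)(b >> 4) << 4)  |
                      (alpha4 & 0x0Fu));
}
```

For example, a pure red source pixel (0xFF, 0x00, 0x00) with full opacity packs to 0xF00F in this nominal layout.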
Once the image is in the desired format, the only thing left is to transfer it to the RAM of the FPGA using the STM32 microcontroller.

Step 3: Architecture

We chose an architecture that is both generic and flexible, leaving room for further improvements to the project and allowing us to easily add or remove modules. The architecture presented in the image is inspired by a few existing ones, from which we kept the aspects that seemed useful for our specifications. In this architecture, the use of a shared-memory bus and of module-specific register maps provides huge flexibility for changes in the GPU. To summarize the roles of the different modules: the MCU Interface allows the STM32 to write into the registers of several modules, grouped into Register Maps; the written data can configure different aspects of the GPU or launch an image-processing operation. Among those modules we have the Video Display Controller, which provides the right synchronization signals for either VGA or LCD output; the Frame Buffer is also synchronized to this module. The Frame Buffer is responsible for fetching the lines to be displayed from memory, applying blending and scrolling, and, most importantly, providing the correct RGB data at the right moment. The line fetching is done through a graphics-optimized memory bus provided by the RAM Controller. This controller provides a priority-oriented shared memory bus that is used by all modules requiring access to the RAM. Among those we have the Block Processing Unit, which can operate on rectangular image portions; the Primitive Generator Unit, which can generate geometric figures at a specified destination; and the DMA Controller, which provides a way to quickly transfer image data to the on-board RAM.
And finally, the LCD Configuration Unit is used for making SPI data transfers into the LCD controller's internal registers; these registers can be altered to set the brightness, contrast and many other features of the LCD display. Concerning the modules integrated on the MCU board: the Real-Time Operating System is responsible for managing the timing constraints of the video games; the High-level Graphics API helps the user easily control the graphics card by providing primitives, structures and macros; and the High-level Audio API helps the user play any music previously created on a PC. In the following sections, a detailed explanation of the MCU team's modules is provided.

Step 4: MCU Interface - FPGA Xilinx Spartan-6

We designed the MCU interface so as to share data between the STM32 microcontroller and the Nexys 3 FPGA. To ensure a fully functional graphics card, data received from the STM32 must be directed towards the correct register or towards the DMA controller without any discontinuity or data loss. The STM32 should also be able to read data from the registers without jeopardizing the write process.

The MCU Interface Protocol

The LCD connector used to connect the STM32 microcontroller to the Nexys 3 FPGA is a 17-by-2 board-to-board connector. In order to use the FSMC asynchronous SRAM protocol, we need:
- a 16-bit data bus, available directly from the LCD connector (D0 to D15);
- the NOE, NWE and NE4 signals, also available from the LCD connector (RD, WR and CS);
- a 26-bit address bus, which is not available on the LCD connector, where only a single address bit is present.
This is why we had to create our own protocol to be able to use the FSMC to transfer data from the STM32 microcontroller. To make up for the unavailability of the 26-bit address bus, we decided to divide a transaction (read or write) into three successive transactions, the first being a write transaction containing the address of the register we want to read or write.
The second one is a read or a write transaction, depending on the type of operation we need to perform. In the case of a write, the data bus carries the data we want to write into the register whose address was specified in the previous transaction. Since the GPU registers are 32-bit registers, we need two 16-bit write (or read) transactions. To summarise the protocol: in order to write to a register, we need three write transactions, the first one holding the address and the two others containing the 32-bit data (LSB then MSB). If we want to read a register, the first transaction is a write containing the address, and the other two are reads returning the data from the register (LSB then MSB).

Data and address bus management

The data and address bus management block updates the 32-bit data buffer and the 16-bit address buffer. On the first transaction from the STM32, the 16 bits of data are transferred to the address buffer; the following two transactions are transferred to the 32-bit data buffer. In the case of a DMA controller data transfer, there is no need for a three-write transaction, since there is no address and the data bus is only 16 bits wide: the STM32's 16-bit data bus is transferred directly to the DMA data bus. We used the LSB of the FSMC's 26-bit address bus, named RS, to identify the destination of the transfer (DMA controller or Register Map). Data are transferred from the buffers to the appropriate bus according to the type of transfer, detected using the NOE and NWE signals from the FSMC, as shown in the previous section. Since these signals are asynchronous, we added to the MCU interface a synchronous signal generator that can be used to synchronise the other blocks, with a delay of 10 to 20 ns.

Bus Request Management

In the case of a data transfer from or to the Register Map block, the data bus has to be granted for the transaction to be successfully processed.
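On the MCU side, the three-transaction register write can be sketched as below. This is a sketch under assumed names: in the real firmware the port is a fixed memory-mapped FSMC address and never_busy is replaced by a read of the BUSY GPIO line; here they are parameters and a stub so the routine can be exercised without hardware.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for polling the BUSY GPIO line wired to the FPGA. */
static bool never_busy(void) { return false; }

/* Write a 32-bit value into a GPU register as three successive
   16-bit FSMC transactions: register address, data LSB, data MSB. */
static void gpu_reg_write(volatile uint16_t *port, bool (*busy)(void),
                          uint16_t reg_addr, uint32_t value)
{
    while (busy()) { }                   /* wait until the FPGA is ready */
    *port = reg_addr;                    /* transaction 1: address       */
    *port = (uint16_t)(value & 0xFFFFu); /* transaction 2: data LSB      */
    *port = (uint16_t)(value >> 16);     /* transaction 3: data MSB      */
}
```

Each assignment through the volatile pointer corresponds to one FSMC bus cycle, which is what the FPGA-side protocol decoder counts.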
If the bus is unavailable, a request must be sent to the Register Map and the STM32 must stay idle until the bus becomes available. That is why one of the general-purpose input/outputs (GPIO) available on the LCD connector is configured to carry a signal named BUSY, which tells the STM32 that the FPGA is busy and cannot pursue the transaction until the BUSY signal returns to '0'. This simple procedure guarantees that every transfer to or from the Register Map will be processed successfully and without any data loss. When the bus is available, an output enable is sent to the registers in the case of a read transaction; if it is a write transaction, a load signal is sent to the registers at the same time as the address bus. In this demo you will see how we managed to turn LEDs on the Nexys 3 board on and off from the MCU board.

Step 5: LCD Configuration Unit - FPGA Xilinx Spartan-6

This block is designed to configure the LCD the way we want; we could, for example, change the contrast or the brightness of the screen. Its purpose is to update the registers contained in the LCD Configuration module. When data is written to the LCD registers of the Register Map, the "Set Data" signal is sent to this module in order to start the update process. For each register in the LCD Configuration module, the corresponding address is sent to the Register Map and the register is updated. Of course, the bus must be granted and the output enable signal must be sent along with the address; if not, the bus request signal is set and the module stays idle until the bus is granted. Every time the update process completes, the RegMap communication block compares the new data received with the old data stored in a buffer. If a change has been made, the LCD SPI Bus Management block is informed.
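As a small illustration of the serial transfer this unit ultimately performs, the bit sequence clocked out for one 16-bit frame (MSB first) can be derived as below. This is only a sketch: the actual frame layout of register address and data follows the Novatek datasheet and is not modelled here.

```c
#include <stdint.h>

/* Decompose a 16-bit SPI frame into the bit sequence that would be
   clocked out MSB-first on the serial data line of a 3-wire SPI bus. */
static void spi_frame_bits(uint16_t frame, uint8_t bits[16])
{
    for (int i = 0; i < 16; i++)
        bits[i] = (uint8_t)((frame >> (15 - i)) & 1u);
}
```

In the VHDL implementation, the equivalent is a shift register clocked out under the SPI chip-select.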
The address of the changed register, as specified in the Novatek datasheet, is stored along with the changed data in a buffer, to be accessed later by the LCD SPI Bus Management block.

LCD SPI Bus Management

This block is designed to send configuration data to the LCD. As a matter of fact, the LCD is driven by the Novatek NT39016 chip, which uses a 3-wire Serial Port Interface (SPI) for all internal parameter configuration.

Step 6: Kernel and Middleware - MCU ARM Cortex-M4

High-level Graphics API

Our goal is to implement a graphics card using the Nexys 3 FPGA. The STM32 microcontroller runs the operating system and sends the commands to the FPGA over a bus connecting the two boards through the FSMC peripheral. In order to make this process easier, we created primitives, structures and macros. These three utilities, together with Doxygen documentation, help the user easily control the graphics card. The idea is the same as for the STM libraries: configuring a system requires a great deal of research through the various documentation and source files. Of course, the functions in the source files can be modified directly by the user, but we should avoid that, as any modification can cause a dysfunction in earlier programs written by other users against these source files. To avoid these risks, we gathered in one file (the macros) the information that may need modification in order to make the program work. There are two main categories of macros:
- Configuration macros: these give the user the ability to modify the different peripherals used during the communication without changing anything in the source files. For example, such a macro can fully configure the duration of the FSMC cycles.
- Macros defining the technical specifications: the main purpose of these macros is to describe our system, such as the screen size, the initial addresses of the planes and the FSMC addresses. They guarantee an API that can be used anywhere; we can switch to a bigger screen, for example, just by modifying the right constant. The Doxygen documentation includes all the explanations needed to use the macros.

Following the same idea as the drivers created for the STM32, all the structures created for this API are designed to make the use of the primitives and the storage of data much easier. It is important, for instance, to keep track of the RAM address at which an image has been stored. As a matter of fact, the address of an image can change inside the FPGA after an operation such as a Bit Blit; if the original address is not stored somewhere, the user will lose the image.

- Image structure: used for every image created. The user identifies its name, its size and its address. All we have to do afterwards is pass this structure as a parameter to a move-type primitive for the operation on the image to take place. The update of these parameters is included in the operations.

- Color structure: very useful for the functions using colors. This structure avoids passing the 3 primary colors and the alpha transparency level at each use.

- Display Plane structure: our GPU can handle up to 4 planes. That is why we created 4 plane-type structures, one for each of the display planes, and declared the references as global variables. It is not necessary to create a new display plane on this FPGA, because 4 is the maximum number allowed. This structure contains all the data required to configure a layer: width, length, RAM address, and scrolling over X and over Y.

- ConfPlane structure: this is the configuration structure for the 4 layers.
There is only one instance of this structure declared in the program, with a reference as a global variable. It lets the user choose which layer to activate, whether to enable transparency, whether to launch a test procedure for the communication, and whether to activate the 4 display planes or not.

These 4 structures are not meant to be modified directly; it is highly advised to use the associated functions of each structure to initialize or modify them. As a matter of fact, the data written in these structures is just a reflection of what is inside the FPGA card, so changing the parameters on the STM side alone is useless. For example, if we manually modify the size of a layer, it will not change on the GPU because the command will not be sent. The associated functions also establish the communication between the two boards.

The driver layer is used to configure the different peripherals used for the communication between the two cards. For example, we can use it to initialize the FSMC, which is set up to behave as described in Section 2.b. A GPIO is also initialized to play the role of the BUSY signal used in the MCU interface. This layer also contains the original functions used to read and write at the addresses of the pins connected to the FPGA; their parameters are the address of the register to be modified in the FPGA and the data to be written to that register. These driver functions are not meant to be used directly, as they are already used in the service layer; nevertheless, the user can in some cases call them to change the configuration of the software.

The service functions are the main functions of our graphics card. In this layer we find all the functions necessary for the execution of the different GPU operations. The details of these functions are available in the Doxygen documentation section.
We should just clarify that it is possible to place an over-layer on top of the service layer in order to perform more complex operations using its original functions. For example, one can create a function to handle an animation, built on the move operation. The high-level audio API and the real-time operating system are not yet implemented.
Human chromosomal fragile sites are regions of the genome that are prone to DNA breakage, and are classified as common or rare, depending on their frequency in the population. Common fragile sites frequently coincide with the location of genes involved in carcinogenic chromosomal translocations, suggesting their role in cancer formation. However, there has been no direct evidence linking breakage at fragile sites to the formation of a cancer-specific translocation. Here, we studied the involvement of fragile sites in the formation of RET/PTC rearrangements, which are frequently found in papillary thyroid carcinoma (PTC). These rearrangements are commonly associated with radiation exposure; however, most of the tumors found in adults are not linked to radiation. In this study, we provide structural and biochemical evidence that the RET, CCDC6, and NCOA4 genes, which participate in two major types of RET/PTC rearrangements, are located in common fragile sites FRA10C and FRA10G, and undergo DNA breakage after exposure to fragile site-inducing chemicals. Moreover, exposure of human thyroid cells to these chemicals results in the formation of cancer-specific RET/PTC rearrangements. These results provide direct evidence for the involvement of chromosomal fragile sites in the generation of cancer-specific rearrangements in human cells. Cancer development can be initiated by the accumulation of various genetic abnormalities that lead to the dysregulation of genes involved in various cellular processes. Chromosomal translocations are one such abnormality commonly seen in cancer cells. Translocations result in the rearrangement of genetic material, which typically leads to the expression of an oncogenic fusion protein contributing to the neoplastic process (Gasparini et al., 2007).
To date, there are a total of 705 known recurrent translocations in cancer, involving 459 different gene pairs and present in many different types of cancer (Mitelman, 2008). In all translocations, breaks in DNA strands must occur. There are various ways in which a cell can acquire these breaks, such as ionizing radiation (Weterings and Chen, 2008). DNA breaks are commonly repaired by two pathways, homologous recombination or non-homologous end joining (Shrivastav et al., 2008), but dysfunction of these pathways can contribute to the formation of chromosomal translocations (Gasparini et al., 2007). Alternatively, an overwhelming accumulation of DNA breaks could prevent these normally functioning pathways from eliminating all of the breaks, leading to translocation events. Chromosomal fragile sites are known to contribute to the formation of DNA breaks and are hotspots for sister chromatid exchange (Glover and Stein, 1987), chromosomal translocations, deletions (Glover and Stein, 1988), and viral integrations (Popescu, 2003). Fragile sites are non-random, specific loci which are stable under normal conditions, but under certain culture conditions can form visible gaps or breaks in metaphase chromosomes (Durkin and Glover, 2007). Depending on their frequency in the population, fragile sites can be divided into two classes: common and rare. Common fragile sites, which constitute the majority of the two classes, are present in all individuals and are a normal component of chromosome structure (Glover, 2006). Common fragile sites can be further classified based on their mode of induction, as not all sites are induced by the same compounds, nor to the same extent. Aphidicolin (APH) induces expression of the majority of common fragile sites. Other known fragile site-inducing conditions include the addition of 5-bromodeoxyuridine (BrdU), 5-azacytidine, and distamycin A, and the removal of folic acid (Sutherland, 1991).
Also, certain dietary and environmental factors have been shown to contribute to fragile site expression, including caffeine (Yunis and Soreng, 1984), ethanol (Kuwano and Kajii, 1987), hypoxia (Coquelle et al., 1998), and pesticides (Musio and Sbrana, 1997). Together, genetic influences on fragile site instability, along with external influences from chemical, dietary and environmental factors, suggest a possible role for fragile sites in sporadic cancer formation. Fragile sites are also known to be late replicating regions of the genome. Delayed DNA replication has been observed in all fragile sites examined to date (Handt et al., 2000; Hansen et al., 1997; Hellman et al., 2000; Hellman et al., 2002; Palakodeti et al., 2004; Pelliccia et al., 2008; Wang et al., 1999). Delayed replication at fragile sites is believed to be attributed to the high propensity of DNA sequences to form stable secondary DNA structures (Gacy et al., 1995; Hewett et al., 1998; Mishmar et al., 1998; Samadashwily et al., 1997; Usdin and Woodford, 1995; Zhang and Freudenreich, 2007; Zlotorynski et al., 2003). Difficulties in passing of the replication fork, caused by secondary DNA structure formed within the fragile DNA regions, could result in stalled replication. ATR, a major replication checkpoint protein, is crucial for maintaining fragile site stability (Casper et al., 2004), and its inhibition by 2-aminopurine (2-AP) in conjunction with fragile site inducing chemicals significantly increases common fragile site expression (Casper et al., 2002). Therefore, it is suggested that DNA breakage at fragile sites results from delayed replication forks that escape the ATR-mediated checkpoint pathway (Durkin and Glover, 2007). Many studies point towards the association between fragile sites and formation of cancer-specific translocations (Arlt et al., 2006). 
In a comprehensive survey, we found that 52% of all known recurrent simple chromosomal translocations have at least one gene located within a fragile site, strongly suggesting a potential role for fragile sites in the initiation of translocation events (Burrow et al., 2009). Also, Glover and colleagues found that upon addition of APH, submicroscopic deletions within FHIT, located in the fragile site FRA3B and associated with various human cancers, were detected and resembled those seen in cancer cells (Durkin et al., 2008). However, there has been no direct evidence linking breakage at fragile sites to the formation of cancer-causing chromosomal aberrations. Genes participating in the two main types of RET/PTC rearrangements, RET/PTC1 and RET/PTC3, have been mapped to known fragile sites (Burrow et al., 2009). RET/PTC rearrangements are commonly found in papillary thyroid carcinomas (PTC), and in all cases result in the fusion of the tyrosine kinase domain of RET to the 5′ portion of various unrelated genes (Nikiforov, 2008). In the case of RET/PTC1 and RET/PTC3, RET is fused with CCDC6 and NCOA4, respectively (Santoro et al., 2006). These rearrangements result in the expression of a fusion protein possessing constitutive tyrosine kinase activity, which is tumorigenic in thyroid follicular cells (Nikiforov, 2008). Both genes involved in the RET/PTC3 rearrangement, RET and NCOA4, are located at 10q11.2 within fragile site FRA10G, a common fragile site induced by APH. The CCDC6 gene, involved in RET/PTC1, is located at 10q21.2 within fragile site FRA10C, a common fragile site induced by BrdU. Major breakpoint cluster regions for these genes have been identified, and are located within intron 11 of RET, intron 5 of NCOA4, and intron 1 of CCDC6 (Nikiforov et al., 1999; Smanik et al., 1995).
RET/PTC rearrangements are known to be associated with radiation exposure, although most adult tumors are sporadic and those patients lack a history of radiation exposure (Nikiforova and Nikiforov, 2008), implying that other mechanisms must be responsible for DNA breakage and RET/PTC formation in most tumors. Clinical studies have shown that RET/PTC3 rearrangements are common in radiation-induced tumors (Fugazzola et al., 1995; Motomura et al., 1998; Nikiforov et al., 1997). In contrast, sporadic PTC tumors have shown a greater prevalence of RET/PTC1 rearrangements (Fenton et al., 2000), which account for 70% of all RET/PTC tumor types (Nikiforova and Nikiforov, 2008). Because the participating genes co-localize with fragile sites and there is a well-established association between RET/PTC rearrangements and DNA damage induced by ionizing radiation, these rearrangements offer an excellent model to examine directly the role of fragile sites in the formation of cancer-specific chromosomal translocations. In this study, we demonstrate that fragile site-inducing chemicals can create DNA breaks within the RET/PTC partner genes and ultimately lead to the formation of RET/PTC rearrangements, offering direct evidence for the role of fragile sites in cancer-specific translocations. To examine whether chromosomal regions involved in RET/PTC rearrangements are part of fragile sites, HTori-3 human thyroid cells were exposed to APH, APH+2-AP, and BrdU+2-AP. Metaphase spreads of cultured HTori-3 cells were hybridized with fluorescently labeled BAC probes covering the entire genomic sequence of RET, NCOA4 and CCDC6 (Figure 1). Without exposure to fragile site-inducing chemicals, metaphase chromosomes of HTori-3 cells appeared normal with smooth contours and intact RET signal (Figure 1a). With exposure to fragile site-inducing chemicals, the morphology of metaphase chromosomes appeared distorted with irregular surfaces and loss of continuity.
After treatment with 0.4 μM APH for 24 hours, RET was disrupted in 6 ± 0.35% of chromosomes (Figure 1b; Table 1), NCOA4 was disrupted in 0.62% of chromosomes and no breaks were identified in the CCDC6 gene (Table 1). The appearance of breaks in RET but not in CCDC6 is consistent with the characteristics of the fragile sites in which each of these genes is located (RET located at APH-induced FRA10G, and CCDC6 at BrdU-induced FRA10C). The frequency of breakage observed in RET is in agreement with the previously published levels at FRA10G obtained using Giemsa-stained chromosomes, which were found to average 4.6% following treatment of human skin fibroblasts with 0.2 μM APH for 26 hours (Murano et al., 1989). After addition of APH and 2-AP, 5.93 ± 0.52% of chromosomes showed breaks in RET; 0.63 ± 0.08% showed breaks in NCOA4 and 0.98 ± 0.58% showed breaks in CCDC6. 2-AP is a general inhibitor of ATR kinase and is known to increase fragile site expression with or without the addition of replication inhibitors like APH (Casper et al., 2002). While breakage in RET and NCOA4 did not change significantly, breakage was now seen in CCDC6, consistent with 2-AP action. Treatment with BrdU and 2-AP resulted in 2.72 ± 0.78% of chromosomes showing breaks in CCDC6 (Figure 1c). However, RET and NCOA4 were each disrupted in 0.6 ± 0.08% of chromosomes after BrdU and 2-AP treatment (Table 1). Increased breakage in CCDC6 is consistent with its fragile site mode of induction. Also, the level of breakage at CCDC6 is comparable with previous reports at FRA10C, with DNA breakage ranging from 4–20% following treatment of human blood lymphocytes from ten patients with 50 mg/L BrdU for 4–6 hours (Sutherland et al., 1985). The breakage frequency seen in RET and NCOA4 with BrdU and 2-AP treatment is similar to that observed in CCDC6 after treatment with APH and 2-AP, showing consistency with 2-AP induced breakage.
In concert, these results demonstrate directly that chemicals known to result in fragile site breakage cause DNA breaks within genomic sequences of genes participating in RET/PTC rearrangements. All RET/PTC rearrangements involve the fusion of the tyrosine kinase domain of RET, and the major breakpoint cluster region identified in tumor cells is located within intron 11 (Smanik et al., 1995). While fluorescence in situ hybridization (FISH) experiments allowed us to detect breaks occurring within the RET gene sequence, whether or not the breaks are located in intron 11 was next examined using ligation-mediated PCR (LM-PCR). HTori-3 cells were treated with APH for 24 hours, and the genomic DNAs from both the treated and untreated cells were subjected to primer extension with biotinylated primers that are specific to the regions of interest (Materials & Methods; Supplementary Figure 2). The synthesis reaction terminated at a DNA break to produce a duplex with a blunt end, and the duplex was ligated to a linker. The linker-attached DNAs were then isolated by streptavidin beads, amplified by two rounds of PCR, and visualized by agarose gel electrophoresis (Figure 2). Each lane on the agarose gel represents the DNA breaks isolated from approximately 4000 cells, and each band observed on the gel corresponds to a break found within the region of interest. DNA breaks were observed within intron 11 of RET after treatment with APH (Figure 2a) with a frequency of 0.024 ± 0.015 breaks per 100 cells, which was significantly higher than that in the untreated cells (0.004 ± 0.009/100 cells, p = 0.010) (Figure 2b). DNA samples from lanes 1, and 3–6 in Figure 2a (marked with asterisks) were sequenced to determine the location of the induced breakpoints in the RET gene (Figure 3). DNA sequencing revealed the breakpoints to be located within intron 11, and at a distance from exon 12 that is consistent with the size of the PCR product observed on the agarose gel in Figure 2a. 
The locations of these breakpoints were compared to the location of known breakpoints found in PTC tumors containing RET/PTC rearrangements (Figure 3) (Bongarzone et al., 1997; Klugbauer et al., 2001). Each induced breakpoint was found to be located near a human tumor breakpoint, with distances ranging from 2–15 base pairs. It is important to note that these induced breakpoints were detected prior to a rearrangement event, while the breakpoints found in tumors have been identified after a rearrangement event has occurred. In most cases, small modifications, such as deletions and insertions of 1–18 nucleotides, have been observed surrounding the fusion points in human tumors. These results confirm that the exposure of thyroid cells to APH induces the formation of DNA breaks within the major breakpoint cluster region found in the RET gene, and these induced breakpoints are located close to known breakpoints found in human tumors. DNA breaks were also examined within FRA3B after APH treatment. FRA3B is the most inducible fragile site in the human genome and contains FHIT, a gene involved in several cancers, where microscopic deletions have been observed after treatment with APH (Durkin et al., 2008; Wang et al., 1999). Intron 4 of the FHIT gene, a major region of high instability in various tumors and APH-treated cells (Boldog et al., 1997; Corbin et al., 2002), was examined here for DNA breaks. DNA breaks were detected within intron 4 of FHIT upon APH treatment (Figure 2c) at a frequency of 0.036 ± 0.020 breaks per 100 cells, confirming that indeed the APH treatment can induce fragile site breakage. An increased number of breaks were observed within FRA3B in comparison to RET, which is consistent with FRA3B being the most active fragile site in the genome. 
A non-fragile region, 12p12.3 (Zlotorynski et al., 2003), and the G6PD gene, within FRAXF (a rare folate-sensitive fragile site not induced by APH), were examined after treatment with APH, and in contrast to RET and FRA3B, no DNA breaks were observed within the 12p12.3 region (Figure 2d) or in exon 1 of G6PD (Supplementary Figure 3). The absence of breaks in 12p12.3 and G6PD suggests that the DNA breaks observed within RET and FRA3B after exposure to fragile site-inducing chemicals are due to their fragile nature in response to APH. To test for the induction of RET/PTC rearrangement after exposure to fragile site-inducing chemicals, HTori-3 cells were treated with APH and 2-AP for 24 hours with the addition of BrdU for the last 5 hours. These treatment conditions were chosen because they have been previously established to be optimal for the induction of fragile sites FRA10C and FRA10G (Murano et al., 1989; Sutherland et al., 1985). To confirm breakage in the genes after exposure, metaphase spreads were made and chromosomes were scored for disruption of the probe (Figure 1d). The breakage in the probes for RET, NCOA4 and CCDC6 was 7.47%, 1.15% and 2.87%, respectively. The mRNA was then isolated and used in RT-PCR for detection of RET/PTC1 and RET/PTC3 formation. To ensure that a cell with the rearrangement would be detected, 1 × 10⁶ cells in a 10-cm culture dish were divided among 30 culture dishes 24 h post-exposure. Therefore, each well received no more than 3 × 10⁴ cells, and if a dish contained only one cell with RET/PTC, it would constitute 1 part in 3 × 10⁴, a fraction within the limit of detection (Caudill et al., 2005). No RET/PTC rearrangement was detected without any treatments in five independent experiments (Figure 4), indicating an extremely low level of spontaneous generation of RET/PTC in this human cell line and the absence of contamination.
Similarly, no RET/PTC rearrangement was detected using the same experimental approach in HTori-3 cells in four independent experiments in a study reported by Caudill et al. (Caudill et al., 2005). Exposure to a combination of APH, 2-AP and BrdU resulted in the generation of RET/PTC1, with 5 total events identified in 5 independent experiments, each assaying 10⁶ cells (incidence of 2, 1, 2, 0, 0 events per 10⁶ cells) (Figure 4b). However, no RET/PTC3 rearrangements were identified. Representative RT-PCR blots are shown in Figure 4a. Statistical analysis revealed a significant difference in the incidence of RET/PTC1 induction between untreated cells (zero events) and cells treated with fragile site-inducing agents (five total events) (p = 0.027). These results demonstrate that the exposure of thyroid cells to fragile site-inducing chemicals can lead to the formation of a carcinogenic RET/PTC rearrangement. Chromosomal rearrangements contribute to the development of many types of human tumors. Therefore, it is critical to understand the mechanisms of chromosomal rearrangements in cancer cells. Here, we demonstrated that DNA breakage at fragile sites FRA10C and FRA10G under fragile site-inducing conditions initiates and leads to the generation of RET/PTC1 rearrangement, which is known to contribute to PTC development. To our knowledge, this is the first demonstration that a cancer-specific rearrangement can be produced in human cells by inducing DNA breaks at fragile sites. Interestingly, only RET/PTC1 rearrangements were observed, and no RET/PTC3 rearrangements were identified. While breakage was seen within NCOA4, the RET/PTC3 partner gene, the frequency of breakage was lower when compared to RET and CCDC6. NCOA4 breakage remained relatively constant with each combination of fragile site-inducing chemicals, and was about 10-fold lower than the breakage observed within RET, and about 4.5-fold below the level found in CCDC6.
The lower incidence of breakage within NCOA4 could contribute to the lack of RET/PTC3 rearrangement events. Also, clinical studies have revealed that RET/PTC3 rearrangements are frequent in radiation-induced tumors (Fugazzola et al., 1995; Motomura et al., 1998; Nikiforov et al., 1997), while RET/PTC1 rearrangements are more commonly seen in sporadic tumors (Fenton et al., 2000). Our observation of RET/PTC1 rearrangement, but not RET/PTC3 rearrangement, generated by fragile site induction, further supports the idea that sporadic PTC tumors may result from breakage at fragile sites. It is known that specific environmental and food toxins (such as caffeine, alcohol, tobacco) (Kuwano and Kajii, 1987; Yunis and Soreng, 1984), and other stress factors (such as hypoxia) (Coquelle et al., 1998) can induce fragile sites. Therefore, our results suggest that these exogenous factors may contribute to the occurrence of chromosomal rearrangements, and therefore cancer initiation in human populations, by a mechanism of DNA breakage at fragile sites. To demonstrate that fragile site-inducing chemicals can cause DNA breaks at RET/PTC participating genes, FISH analysis of chromosome 10, and LM-PCR analysis at the nucleotide level of the RET gene were performed. Using FISH, we showed that upon exposure of human thyroid cells to fragile site-inducing chemicals, chromosomal breaks are formed within the RET and CCDC6 genes. RET and CCDC6 are located respectively within the APH and BrdU-induced fragile sites, and display breakage only after the addition of APH or BrdU, accordingly. These results demonstrate not only that the fragility is indeed present within the genes involved in RET/PTC rearrangements, but also underline the specificity of fragile site induction that was observed in these regions. 
While 2-AP addition is known to increase overall chromosomal breakage and fragile site FRA3B expression (Casper et al., 2002), no significant increase in breakage at the RET and NCOA4 genes was noted in HTori-3 cells, indicating its weaker influence on the FRA10G site. Furthermore, the addition of 2-AP in combination with APH resulted in the appearance of breaks within CCDC6, while its combination with BrdU resulted in breaks within RET and NCOA4. This nonspecific effect of 2-AP on induction of DNA breaks at fragile sites is in agreement with its ability to inhibit the ATR protein, which provides a key maintenance role in fragile site stability. The DNA breaks generated in RET after exposure to APH were confirmed to be located within intron 11, which is the breakpoint cluster region identified in thyroid tumors, while untreated cells showed little to no breaks. These breaks were further confirmed to be fragile in nature when comparing the formation of breaks within the FRA3B, 12p12.3 and G6PD regions. FRA3B, the most inducible fragile site in the human genome (Durkin et al., 2008; Wang et al., 1999), displayed DNA breaks after treatment with APH (Figure 2c); while 12p12.3, a non-fragile region, and the G6PD gene, located within a rare folate-sensitive fragile site, showed no DNA breakage with the same treatment (Figure 2d and Supplementary Figure 3b). Together with cytogenetic analysis, these results demonstrate that fragile site-inducing chemicals can generate breaks within the RET and CCDC6 genes, which could result in the formation of the cancer-causing RET/PTC1 rearrangement. The induction rate of RET/PTC rearrangement by fragile site-inducing chemicals was four orders of magnitude lower than the frequency of chromosomal breaks observed in the RET and CCDC6 genes. DNA breaks, a serious threat to genome stability and cell viability, can trigger DNA repair pathways, including homologous recombination or non-homologous end joining (Shrivastav et al., 2008).
The action of these pathways ensures proper repair of DNA breaks, and prevents the deleterious consequences of such breakage. However, a small number of DNA breaks escaping the repair pathways will ultimately result in large-scale chromosomal changes, such as RET/PTC rearrangement. This study provides important information about the mechanisms of formation of carcinogenic chromosomal rearrangements in human cells. In addition, it establishes an experimental system that will allow for testing the role of specific environmental substances, dietary toxins, and other stress factors in the generation of chromosomal rearrangements and tumor initiation. The experiments were performed on HTori-3 cells, which are human thyroid epithelial cells transfected with an origin-defective SV40 genome. They are characterized as immortalized, partially transformed, differentiated cells having three copies of chromosome 10 with intact RET, NCOA4 and CCDC6 loci, and preserve the expression of thyroid differentiation markers such as thyroglobulin production and the sodium iodide symporter, as we reported previously (Caudill et al., 2005). The cells were purchased from the European Tissue Culture Collection and grown in RPMI 1640 medium (Invitrogen) supplemented with 10% fetal bovine serum. HTori-3 cells (1 × 10⁶) were plated in 10-cm culture dishes and 16 h later exposed for 24 h to APH (0.4 μM) or APH and 2-AP (2 mM) (Casper et al., 2002). When desired, cells were treated with BrdU (50 mg/L) for the last 5 h in addition to 2-AP and/or APH for 24 h. For DNA breaksite detection, 5 × 10⁵ cells were plated in 10-cm culture dishes and treated the same as above with 0.4 μM APH. HTori-3 cells exposed to various chemicals were treated with 0.1 μg/ml of Colcemid for the last 2 hours before harvesting. Cells were incubated in hypotonic solution (0.075 M KCl), fixed in multiple changes of methanol:acetic acid (3:1) and dropped onto moistened slides in order to obtain metaphase spreads.
Slides were aged overnight and pretreated with RNase before proceeding to hybridization. BAC clones RP11-351D16 (RET), RP11-481A12 (NCOA4), RP11-435G3 and RP11-369L1 (CCDC6) were obtained from BAC/PAC Resources, Children's Hospital, Oakland. BAC clone RP11-481A12 containing the NCOA4 gene was subcloned into a fosmid vector after cutting with restriction enzymes (Epicentre). A mixture of subcloned probes (SC10, SC19) containing 70 kb of the NCOA4 gene and its flanking regions was used as a probe for NCOA4. The probes were labeled by nick translation using Spectrum Green-dUTP, Spectrum Orange-dUTP or Spectrum Red-dUTP (Vysis Inc.). Hybridization was performed as previously described (Ciampi et al., 2005). On average, 150 chromosomes were scored for breaks in the RET, NCOA4 and CCDC6 probes for each condition. To detect DNA breaks within intron 11 of RET induced by APH, a 5′-biotinylated primer RET-7, corresponding to RET at the 5′ end of exon 12 (the grey arrow in Figure 3a), was used to extend into intron 11. For the first and second rounds of nested PCR, primers RET-R1b and RET-R1 were used, respectively. To isolate the DNA breaks, a duplex DNA linker LL3/LP2 was used as described (Kong and Maizels, 2001), as well as the corresponding linker-specific primers LL4 and LL2 (Supplementary Figure 2). For FRA3B, the biotinylated primer FRA3B-20 was used to allow identification of break sites occurring at intron 4 of the FHIT gene, which contains major clusters of APH-induced breakpoints in FRA3B (Boldog et al., 1997; Corbin et al., 2002), and primers FRA3B-9 and FRA3B-23 were used in the first and second rounds of nested PCR, respectively. For detection of breaks within the 12p12.3 region, the biotinylated primer 12p12.3-1 and primers 12p12.3-2 and 12p12.3-3 were used. For detection of breaks within exon 1 of G6PD, the biotinylated primer G6PDF3 and primers G6PDF and G6PDF2 were used. Sequences of the linkers and PCR primers are described in Supplementary Figure 1.
DNA breaksite mapping was performed as described (Kong and Maizels, 2001) with modifications (Supplementary Figure 2). Genomic DNA was isolated from HTori-3 cells with or without APH treatment. Primer extension was performed using 200 ng of DNA at 45°C, and the DNA breaks were isolated through ligation of the LL3/LP2 linker, and then using streptavidin beads. Amplification of these DNA breaks was achieved by nested PCR of the extension-ligation products. The final PCR products were resolved by electrophoresis on a 1.3% agarose gel. Each band observed on the gel corresponds to a break isolated within the region of interest. To confirm the bands observed were located within intron 11 of RET, the PCR products were sequenced. The exact breakpoint sites were determined from the sequencing results by identifying the nucleotide adjacent to the LL3/LP2 linker sequence. Upon treatment with fragile site-inducing agents for 24 hours, cells were split into 30 6-cm culture dishes at a density of approximately 3 × 10⁴ cells per dish and grown for 3–4 days. To sustain growth for 9 days, cells were transferred to 10-cm culture dishes 4–5 days after seeding into 6-cm dishes. RNA was extracted from each culture dish using Trizol reagent (Invitrogen). Then, mRNA was purified using the Oligotex mRNA minikit (QIAGEN). RT-PCR was performed using a Superscript first strand synthesis system kit and random hexamer priming (Invitrogen). PCR was performed to simultaneously detect RET/PTC1 and RET/PTC3 rearrangement using primers RET/PTC1 forward, RET/PTC3 forward, and common reverse (Supplementary Figure 1). As positive controls, cDNA from RET/PTC1-positive TPC-1 cells and a RET/PTC3-positive tumor sample were used. Ten μl of each PCR product was electrophoresed in a 1.5% agarose gel, transferred to a nylon membrane, and hybridized with ³²P-labeled oligonucleotide probes specific for RET/PTC1 and RET/PTC3 (Supplementary Figure 1).
Evidence of RET/PTC rearrangement in the cells from a given flask was scored as one RET/PTC event. All statistics were performed using a one-tailed Student's t-test. This work was supported by the National Cancer Institute (CA113863 to Y.-H. W and Y. E. N.). The authors declare no conflict of interest.
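For illustration, the test named above — a pooled-variance two-sample Student's t statistic, evaluated one-tailed — can be computed as follows. The event counts here are invented example numbers, not data from this study.

```python
# Hedged sketch: one-tailed two-sample Student's t-test (equal variance).
# The counts below are invented example data, not values from the paper.
from statistics import mean, variance
import math

def two_sample_t(a, b):
    """Return (t, df) for a pooled-variance two-sample t-test."""
    na, nb = len(a), len(b)
    # Pooled sample variance.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

aph_treated = [12, 15, 11, 14]  # hypothetical RET/PTC event counts
untreated = [3, 4, 2, 5]
t, df = two_sample_t(aph_treated, untreated)
# For a one-tailed test, compare t against the one-tailed critical
# value for df degrees of freedom (or halve the two-tailed p-value).
print(f"t = {t:.2f}, df = {df}")
```

The one-tailed form is appropriate here because the hypothesis is directional (treatment increases break/rearrangement frequency).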
Turning History into Justice: Holocaust-Era Assets Records, Research, and Restitution
War and Civilization Lecture, University of North Carolina-Wilmington, North Carolina, April 19, 2001
From the end of World War II until five years ago the Holocaust was primarily viewed as the greatest murder in history. And indeed it was. But since the spring of 1996 it has become ever more apparent that the Holocaust was also the greatest robbery in history. The Nazi era witnessed the direct and indirect theft of well over $150 billion of tangible assets of victims of Nazi persecution. This evening I will be discussing this robbery, efforts to right the wrongs of the past, and the importance of archival research to the restitution and compensation process. The process of taking assets began with the Aryanization of Jewish property in the 1930s, followed by the looting of real, personal, intellectual, and cultural property throughout the war, and the looting of gold from the central banks of occupied countries. The process even involved taking the gold fillings, rings, and other valuables of those murdered in the Final Solution. Art was a favorite target, and the Nazis looted some 600,000 pieces. Many items were subsequently sold to raise funds to support the Nazi war machine. There was also the indirect loss of wealth by victims of Nazi persecution. To protect their assets, many European Jews during the 1930s sent funds to one or more of the over 400 Swiss banks. Many of the depositors who were victims of Nazi persecution did not survive the war, and often neither did their heirs. Thus the Swiss banks, which never close an account, kept the deposits, estimated today as being worth over $1 billion. Additionally, survivors and heirs found it difficult, if not impossible, to withdraw funds for lack of a secret bank account number or the lack of a death certificate — something the Nazis did not create at the death camps.
Many Jews in the 1930s bought property and life insurance policies, assuming that insurance would provide them or their heirs with financial protection. During the war, German authorities systematically confiscated the insurance policies of Holocaust victims, as well as cashing in policies once the insured person was murdered. Survivors and heirs after the war found it difficult, if not impossible, to have insurance companies honor policies. Often the lack of a death certificate or of a copy of an insurance policy precluded payments. Another form of indirect loss of monies was the Nazi use of forced and slave labor. Some 12 million people, many from Poland and Russia, were forced into labor on behalf of the Third Reich. Some were minimally compensated. Most were not. The Allies were well aware of the thefts taking place in Nazi Europe and did take action during and after the war to identify, locate, and recover Nazi-looted assets. This was done to keep the Nazi war machine from using the looted assets to acquire items it needed to continue the war and to provide restitution to those who had lost property. During the course of tracking, recovering, and restituting the looted assets, some 30 agencies of the US Government created well over 30 million pages of records. These records had an importance then for administrative, fiscal, and legal purposes. These same records certainly have an importance today. A most important concern of the Allies was the Nazi theft of some $6 billion (in today's dollars) of central bank gold. The discovery of hidden gold in Germany in the last days of the war and subsequent negotiations with neutral countries that had acquired the looted gold resulted in the Allies being able to recover about two-thirds of the stolen gold. Some of this gold was non-monetary gold, some of which had been re-smelted and merged with monetary gold, and some was gold watches and wedding bands, as well as victim dental gold.
After the war the Allies provided that the non-monetary gold be restituted to individuals through the auspices of an international refugee organization. The monetary gold was turned over to a Tripartite Commission for the Restitution of Monetary Gold (TGC) that would decide how much gold would be returned to each country that had its central bank gold looted. The TGC, composed of American, British, and French representatives, restituted to claimant countries most of the gold in the 1950s. The advent of the Cold War, the restitution of most of the monetary gold, and other factors resulted in diminishing interest in all the questions surrounding Nazi-looted assets. It should be noted that immediately after the war, survivors were primarily concerned with putting their lives back together and did not have the energy or means to regain what was lost. And many Jews were reluctant to pursue what was rightfully theirs, mostly out of fear that their efforts would fuel anti-Semitism and because they did not want to relive the horrors of the Holocaust era. Also, many initial claims were met with resistance and obstruction from the holders of the assets, and thus subsequent efforts to regain property were never pursued. Few countries, companies, and banks, unfortunately, made any concerted efforts, if any at all, to find the heirs of victims. Some restitution and recovery of assets, however, was forthcoming. Many assets, such as cultural property and works of art, were recovered and restituted by the US Army occupiers in Germany, Austria, and Italy. The government of the Federal Republic of Germany began making substantial reparation payments to Holocaust survivors, heirs, and the state of Israel, and signed bilateral agreements with more than a dozen countries to set up pensions and annuities for victims in western Europe. But, during the Cold War, the communist regimes in Central and Eastern Europe prevented Holocaust survivors and heirs from receiving such payments.
There was some Jewish interest in dormant and closed bank accounts in Switzerland, and some Swiss banks, pushed by Israel and others in the early 1960s, undertook a relatively inadequate attempt to ascertain how much they held and to return it. Beginning in 1964, about $2.5 million was identified and returned to depositors or their heirs. Questions periodically arose about the restitution of assets, but not enough to cause a groundswell of international action. For all intents and purposes the issues surrounding looted assets exited center stage. For 40 years there was not much interest in Nazi-looted assets and almost no research. That all changed in early 1996 when Edgar Bronfman, head of the World Jewish Congress, asked Senator Alfonse D'Amato, the head of the Senate Banking Committee, to investigate the supposedly large quantities of dormant Jewish bank accounts in Swiss banks. Bronfman believed that there were billions of dollars in accounts and that the Swiss banks were making it difficult, if not impossible, for survivors of the Holocaust and heirs of victims of Nazi persecution to retrieve them. When the senator agreed to look into the matter, it touched off a renewed interest in Holocaust-era assets. For the National Archives and Records Administration (NARA) the interest first truly manifested itself in March 1996, when D'Amato sent a researcher to Archives II to look for information about dormant Jewish bank accounts in Swiss banks. Very early in her research the researcher located records that contained detailed information about Jewish deposits in a Swiss bank. Within a month of her discovery D'Amato's Senate Banking Committee held hearings on Nazi-looted assets and the Swiss bank accounts, and shortly thereafter began a major, worldwide research effort into Holocaust-era assets.
This research effort, along with diplomatic, political, legal, moral, and economic pressures, has prompted countries, organizations, and companies to come to grips with the past and to agree to help in the process of righting the wrongs of the past. During the past several years progress has been made in various aspects of Holocaust-era assets. Early in October 1996 class action lawsuits were filed in the U.S. District Court in Brooklyn against the two largest Swiss banks, alleging that they had blocked the survivors' efforts to reclaim money that was directly deposited in the banks or that the Nazis had looted and stored in the banks. The plaintiffs sought $20 billion in compensation. Responding to the negative publicity, the Swiss, in December 1996, created an independent commission of experts to spend five years studying the Swiss role in World War II. Early in 1997, the Swiss established a $200 million fund for Holocaust survivors. This fund would grow to over $400 million by 1999. Responding to economic and legal pressures, the Swiss Bankers Association and the Swiss government persuaded Paul Volcker, former head of the Federal Reserve Board, to head up an international committee to oversee the auditing of dormant bank accounts to ascertain how much of these accounts belonged to Holocaust survivors and heirs. To assist Volcker, the Swiss Government lifted, for five years, Swiss bank secrecy laws. During the latter half of 1997 the Swiss began to publish names of Holocaust-era dormant accounts. Eventually, in August of 1998, the Swiss banks agreed to a $1.25 billion out-of-court settlement. Although the settlement had been reached, it still remained for the court to work out the details as to who would receive compensation. Those details were finalized last year. By that time the lawsuit had extended beyond its original reach.
In addition to Nazi victims with Swiss accounts, the settlement ultimately identified four other groups of potential beneficiaries, including those whose looted assets had found their way into Switzerland, slave laborers, and refugees who were turned away by Switzerland. Four Swiss insurers are adding $50 million to the settlement, and over 35 Swiss companies, including food giant Nestle, whose wartime subsidiaries used slave labor, are making financial contributions to the settlement fund, in the expectation that it would cover any possible claims against them. During the summer of 1996 some prominent members of the British Parliament began taking an active interest in looted Nazi gold, and they tasked the Foreign and Commonwealth Office with preparing a report on the Nazi-looted gold. The report was quickly published and immediately raised international questions about the gold, and indirectly, about the actions of Switzerland during the war. The British report would set in motion the U.S. Government's involvement, providing political clout to the process of seeking the truth about the past and putting that information to work in the process of providing compensation to victims of Nazi persecution. During the late summer of 1996 Edgar Bronfman explained the Holocaust restitution issue to the President. Clinton agreed to help with the issue and to work with D'Amato. In early September 1996, President Clinton tasked Stuart E. Eizenstat, then Under Secretary of Commerce for International Trade, as well as Special Envoy of the Department of State on Property Restitution in Central and Eastern Europe, to prepare a report that would describe U.S. and Allied efforts to recover and restore gold the Nazis had looted from the central banks of occupied Europe, as well as gold taken from individual victims of Nazi persecution, and other assets stolen by Nazi Germany. To accomplish this task Eizenstat established in October an 11-agency Interagency Group on Nazi Assets.
I was my agency's representative. Dr. William Z. Slany, the Department of State's Chief Historian, had the responsibility for drafting the group's report. He in turn asked me to prepare a finding aid to relevant records. Slany formed his research team, consisting of researchers from the Departments of Defense, Treasury, Justice, and State, the U.S. Holocaust Museum, the Central Intelligence Agency, and the Federal Reserve Board. They soon made the National Archives their home. In May 1997 the Interagency Group issued its report, with my 300-page finding aid serving as an appendix. The report, based primarily on NARA's holdings, focused on what U.S. officials knew about Nazi looting of gold and other assets and how the United States attempted to trace the movement of looted gold and other assets into neutral and non-belligerent nations, and to recover the assets from these nations as well as from occupied Europe. The report was quite critical of the Swiss and the other World War II neutrals. Within days of issuing its first report, the Interagency Group on Nazi Assets was asked to prepare another report dealing with the neutrals and their financial and economic dealings with the Axis. Thus, in the summer of 1997, its researchers from three federal agencies began to do their research again with NARA's assistance. As research was getting underway, news stories, based on NARA's holdings, about the Vatican's Holocaust-era assets involvement, particularly the assets stolen by the Croatian Ustashi and sent to the Vatican, prompted President Clinton to direct Eizenstat, who at the time was the Under Secretary of State, to also study the fate of the assets seized by the Croatians. In late 1996, the TGC was in the process of deciding how to allocate the remaining $60 million worth of gold.
The United States asked it to delay the final distribution until the non-monetary gold issue could be further studied, primarily to determine the degree to which the monetary gold was tainted with non-monetary gold. At the London Gold Conference in December 1997, attended by representatives of 41 nations, countries that were entitled to the remaining TGC gold were asked not to take their final payment but instead donate it to a Nazi Persecutee Relief Fund. They were asked to do so because research at the National Archives and elsewhere had proven that some of the monetary gold was tainted with non-monetary gold, and thus should go to people rather than countries. Nazi victims who lived in the former Soviet Union, who are often referred to as "double victims," were the first to get aid from the fund because in many cases they did not get the compensation that was paid to Holocaust survivors who lived in Western Europe. To get this fund established Eizenstat committed our Government to contribute $5 million even though the United States was not a TGC claimant. By the summer of 1998 some dozen countries had contributed their TGC share in the amount of over $50 million. The second Eizenstat report was issued in June 1998. The report provided a detailed analysis of the economic roles played by the neutral countries and the factors that shaped those roles. Prominent in the report was a focus on those countries' trading links with the Axis, as well as on their handling of looted assets, especially gold. Also addressed in the report was the fate of the Croatian Ustashi treasury and the Vatican's role during and immediately after the war.
Also noted in the report was that the postwar negotiations that the Allies conducted with the wartime neutrals were protracted and failed to fully meet their original goals: restitution of the looted gold and the liquidation of German external assets to fund the reconstruction of postwar occupied Europe and to provide relief for Jews and other non-repatriable refugees. This resulted from the intransigence of the neutrals after the war, dissension within Allied ranks, and competing priorities stemming from the onset of the Cold War. Early 1997 witnessed a renewed interest in looted art, especially after museums were identified as possibly, and in some cases actually, holding looted art. This interest prompted some museums, auction houses, and art dealers to undertake provenance research on their holdings. By the end of 1998, the search for looted art, according to two British authors, "had become the greatest treasure hunt in history." This may be an exaggeration, but the search for looted art certainly became an important aspect of the art world. And several countries, including France, established commissions to look into the possibilities of looted art in their countries. In this country Congress during the spring of 1998 held hearings on the subject, and in June the American Association of Museum Directors adopted guidelines calling for a review of their members' collections to identify works of art of dubious provenance. The international aspects of looted art and cultural property began in December 1998 with the four-day Washington Conference on Holocaust-Era Assets that was held at the Department of State. Attending the conference were over 400 representatives from 43 countries and a dozen non-governmental organizations. A dozen principles dealing with looted art and cultural property were adopted at the conference. To determine how well countries were following the principles, the Council of Europe sponsored another conference.
This conference was held last October at Vilnius, Lithuania, at which representatives of 37 nations and 17 non-governmental organizations met to discuss looted cultural property. The Forum adopted a declaration that had six sections dealing with the restitution of looted cultural property. Since 1997 looted art has been clearly identified in numerous countries, including the United States, and various settlements regarding the looted art have been made. In October 1998 an International Commission on Holocaust-Era Insurance Claims was established by Italian, German, French, and Swiss insurers, U.S. regulators and Jewish groups, to settle unpaid insurance policies. Former Secretary of State Lawrence Eagleburger heads the commission. To show their goodwill, two of the major insurance companies, Italy's Generali and Germany's Allianz, set up a $150 million fund to cover claims. The Commission is working closely with the companies, and some progress has been made, such as last summer when Generali agreed to pay all valid claims from Holocaust survivors and heirs; to give the Commission access to its archives; and to post on its website names of the firm's policy holders. A lawsuit was initiated in March 1998 against Ford Motor Company for allegedly operating a slave labor operation at its Cologne plant during the war. During the course of 1998 and 1999 some 50 lawsuits were filed against more than 100 German and Austrian companies for their slave labor practices. The plaintiffs in the suits asked for $20 billion in damages. The Swiss bank settlement prompted several top German firms to come forward and say they would set up a restitution fund, and during 1999 the German government agreed to compensate Holocaust survivors in the former Soviet bloc, thereby reversing their Cold War policy against such compensation. To settle the various lawsuits, in July 2000 an agreement was signed by representatives from Germany, the United States, eastern Europe and Israel, and U.S.
attorneys to provide former forced and slave laborers $4.8 billion, half from the German government and half from over 3,000 German companies. As a way of encouraging additional contributions the U.S. government agreed to give $10 million to the new slave labor fund. Between January 2000 and this past January, France, Austria, Belgium, and the Netherlands agreed to pay some $1.5 billion for compensation of various types, including seized property, forced labor, unpaid insurance policies, and seized bank accounts. Up to now I have spoken about the past and current efforts to right the wrongs of the past. The political, diplomatic, moral, economic, and legal pressures that have contributed to paying the victims of Nazi persecution and their heirs did not just happen in a vacuum. Without records, research, and NARA's assistance, the progress that has been made to date, and will continue to be made, would not have happened. Since March 1996 the National Archives and Records Administration's Archives II Building in College Park, Maryland has been visited and/or contacted by well over one thousand researchers interested in records relating to Holocaust-era assets. Many of those researchers have spent weeks, months, and even years at Archives II going through millions of documents. The high-water mark of Holocaust-era assets researchers came on September 1, 1998, when there were 47 of them. Many of these researchers represented law firms engaged in litigation, and many were foreigners. Foreign researchers and representatives of a dozen foreign commissions looking into their countries' handling of victim assets found NARA an important resource to supplement the information available in the archival records in their own countries. Representatives of foreign banks, governments, archives, and corporations have also come to do research. It started in 1996 with gold and Jewish bank accounts.
In 1997 art works and insurance, non-monetary gold [that is, victims' gold from the death camps, such as dental gold], and the role of the Vatican were added; in 1998 slave labor, alleged American and foreign bank misdeeds, looted archives and libraries, and Jewish communal and religious property were being studied. At the end of 1998 Lord Janner, who heads the London-based Holocaust Educational Trust, stated that the "hunt for Nazi loot has turned into the greatest treasure hunt in history. We don't know where it will end." Since he made those remarks the research has broadened to encompass looted diamonds and securities, as well as the role of American corporations in their dealings with the Nazis. To assist researchers I began early in 1996 to prepare special finding aids to relevant records: first 3 pages; then 10 pages; then 125; and for the first Eizenstat report in May 1997 a 300-page guide to the records. During the summer of 1997, as the research widened to more countries and more subjects, there was a great desire for an expanded finding aid to relevant records. I produced a 300-page supplemental finding aid in the fall of 1997. It was placed on the Department of State's website in November 1997. During the winter of 1997-1998 I prepared a revised and enlarged finding aid. This finding aid, some 750 pages, was placed on the United States Holocaust Memorial Museum's website in March 1998. In March 1999, NARA published my 1,100-page guide to some 15 million pages of records created or received by over 30 Federal agencies. To further assist researchers, I was urged by the Department of State to have NARA hold a records- and research-oriented conference the day after the Washington Conference on Holocaust-Era Assets ended. This one-day event, the Symposium on Holocaust-Era Assets Records and Research, was held at Archives II on December 4, 1998. Over 400 people, including representatives of numerous foreign governments, attended. Eizenstat gave the keynote address.
He stated, "It is truly remarkable to reflect on the sheer amount of research that is being conducted and the new archival sources that have been unearthed in just the past few years." Furthermore, he added, "I am particularly proud to say that our country was a leader in this effort to advance the process of archival research…The National Archives…has become a focal point of research, scholarship, and remembrance into the issues surrounding Holocaust-era assets." He concluded his remarks by stating "The National Archives can be proud of the positive role it has played both in bringing justice, however belated, to the survivors and memory to the deceased." During the course of the day NARA launched its assets website. Growing out of the desire to declassify still-classified Government records, Congress in October 1998 enacted the Nazi War Crimes Records Disclosure Act of 1998. This law required Federal agencies, including NARA, to review and recommend for declassification records relating to Nazi war crimes, Nazi war criminals, Nazi persecution, and Nazi-looted assets. By the end of March over 3 million pages had been declassified, and it is expected that another 7 million pages will be declassified under the Act. By the summer of 1998, there were upwards of 20 national commissions looking at what had happened to assets in their respective countries. Many of those involved in the assets issue believed that the United States should have its own commission to look at Holocaust-era assets that came into the control and/or custody of the United States Government. Congress reacted to this desire by enacting a law in July establishing the Presidential Advisory Commission on Holocaust-Era Assets in the United States, and in October President Clinton appointed Edgar Bronfman to chair the group. Also serving as members of the Commission were Eizenstat and eight members of Congress.
The Commission's research staff, numbering over 20 individuals, spent considerable time at the National Archives between the spring of 1999 and last fall doing research. The Commission presented its report and recommendations to President Clinton in mid-January of this year. Early on, the importance of records and getting to the truth was recognized. The records NARA held and its staff's assistance were, and have continually been, appreciated. This began in May 1997 in the first Eizenstat report. The author, Dr. Slany, in his preface wrote, "All of the research depended directly upon the unfailing support, assistance, and encouragement of the…National Archives and Records Administration. Our work simply could not have been carried out without this assistance…" Senator D'Amato, on the floor of the Senate in June, expressed his appreciation, stating "The National Archives at College Park has been nothing less than amazing… Their help was indispensable in establishing, continuing and expanding the research of the Committee." Eizenstat, speaking about archival openness in December 1998, took the opportunity to thank NARA for its work in helping his interagency group and the foreign commissions. "NARA archivists," he said, "continue to provide extraordinary assistance and information to the many governmental and private researchers who have traveled to the Archives to consult documents available nowhere else in the world." Later in his talk he stated, "I cannot fail to mention the truly remarkable measures taken by my own government: making available and fully accessible to researchers by May 1997 at the National Archives more than 15 million pages of documents…And the work has gone forward without pause at the National Archives with new and important files being found, described, and made available for research." By the end of 1998, the importance of archives as a result of Holocaust-era assets research had been clearly demonstrated.
Reporter John Marks in the December 14, 1998, issue of U.S. News & World Report wrote that "since 1996, when the Holocaust restitution effort gained new momentum," archival institutions "have become drivers of world events. Their contents have forced apologies from governments, opened long-dormant bank accounts, unlocked the secrets of art museums, and compelled corporations to defend their reputations." During the past five years much has been accomplished towards bringing justice and compensation to victims of Nazi persecution. But those working so hard to achieve the financial settlements know that no amount of money could ever compensate for the atrocities of World War II. And they also know that much still needs to be done, and done quickly, as the number of Holocaust survivors decreases every year. Many issues, both old and new, are still unresolved. Thus, undoubtedly, interest in Holocaust-era assets issues will continue for years, if not decades. And just as certainly, archival research throughout the world will accompany the interest in the various asset-related issues. Archival research at NARA, coupled with research undertaken elsewhere, has contributed immeasurably to countries, corporations, banks, and other institutions being more capable of addressing their pasts and accepting their current responsibilities. Archives have over the past five years served, and will in the future serve, as important resources in the search for truth and justice and, as Stuart Eizenstat frequently says, in turning history into justice.
Digital Storytelling Manual
Digital stories are short videos that generally run for two to four minutes. They can use images, sound, music, narration, animation and video to tell a person's story. A digital story is a fun and creative way of sharing a story with other people. Digital stories can be shown in several different ways. With the final digital file, the story can be watched on a DVD player, played back locally on a computer, viewed online or even played on a mobile phone. As well as telling an interesting story, a digital story can give people in the future a way to look at how things were when the story was made. It can become a part of history. The State Library of Queensland is committed to helping people create and share their stories, while also acting as a repository, a safekeeping place where one generation's story can be discovered by another. A finished digital story is composed of images and sound. The pictures can be still images or moving video, or a mixture of both. The sound is mainly the sound of the story being narrated, but may include background noises, sound effects and even music. If you use any images, sound or video in your story that do not belong to you, it is essential that you get written permission from the rightsholder. For copyright reasons, you cannot copy music from a popular CD without permission and put it in your story. There are many sites now that offer content under Creative Commons licenses. Under these licenses, creators put their works online and specify how other people can use them. If you interview anybody for a digital story, it is important that you get clearance to use the sound or video recording of them. Telling the Story Here are some ways of practicing telling a story that you can do with a group of people. You might find these techniques useful when you come to write a script for your digital story.
Each person starts by saying 'My name is …….and on my way here today…..' They then tell a 2-3 minute story about what happened to them. It doesn't even have to be true! Each person brings along a photo. Put the photo face up on the table and then everyone picks someone else's photo. Write a made-up short story (half a page) about the photo you chose (it doesn't have to be true) and then read it to the group. It can also be good for the real owner of the photo to then tell the real story behind the photo as well. The Story Circle This is the beginning of writing the script for your digital story. Think about the pictures and video that you have got with you and write a first version of your story (write it by hand or type it into a computer and print it). Keep the story to one page or less if you can. This should take about an hour. When you've finished, sit in a circle and read your stories to each other. Everybody should make comments and suggestions about each story. This will help you to make the story better. The first version is often too long. Half a page, or 200-250 words, is usually enough for a 2-3 minute digital story. The story should have a strong focus to make it interesting. This is a single idea that the story is about. Stories often contain a problem or something to be overcome, such as 'I wasn't sure if I'd be able to complete the bushwalk, but I was going to give it my best shot'. This can give the story some drama and interest. There is nothing wrong with the story not containing a problem as long as there is a strong focus on one subject. It is very helpful to write one sentence saying what your story is about. Share this summary with the group before you read your story and they can help you judge how well you have got the point of the story across. The Equipment (hardware) You will need a computer to make a digital story. It can be either Windows or Macintosh.
It doesn’t need to be the latest or greatest machine, but should have sufficient memory to store your media files and enough processing power to handle the video files you will be creating. This will make loading files quicker and mean you spend less time staring at a progress bar. The computer must be able to play sound, either through speakers or headphones. A digital camera is a great way to get pictures to use in your digital story. It doesn’t matter how many megapixels your camera has, the pictures will still look good enough on a computer screen. If you don’t have a digital camera or prefer to use film, you will need to scan the prints to use them in your digital story. You can do this yourself with a scanner or a commercial photo shop can do this for you and put them on a CD. Digital video cameras make it fairly easy to get your video onto the computer, as long as the computer is fairly new and powerful. Older (non-digital) video cameras are more difficult because you need to use a capture card to get the video into the computer and the quality won’t be quite as good. Digital cameras can also be used to take videos. The quality is not usually as good as video cameras but can be good enough for a digital story. Even mobile phones can take still and moving images and it may be possible to put these in your story as well! The files from the phone will usually have to be converted using software from the Internet. A flat-bed scanner can be used to convert printed pictures into digital pictures for your story. Remember that you must have permission to use any images that don’t belong to you. There are a few ways of recording sound into a computer. Some computers have a microphone socket that you can plug a small microphone in to. Digital sound recorders are designed specifically to capture audio. Specialised sound recorders (for example high-end oral history recorders) will yield higher sound quality.
Many devices, including iPods and MP3 players, will record sound which can then be copied to the computer using a USB cord similar to the one you use with a digital camera.

The Software (computer programs)

If you are recording sound directly into the computer you will need some recording software. Macs usually have a program called Garage Band which does the job. If you have a Windows PC you can download a great free program called Audacity from http://audacity.sourceforge.net/. Audacity will allow you to record directly into the computer using a microphone as long as your computer has a microphone socket. If you use a digital sound recorder you probably won’t need any audio software on the computer.

GNU Image Manipulation Program (GIMP)

Once you have your digital pictures loaded into the computer you may wish to resize, reshape and alter them in other ways. The best free program for doing this is called GIMP, available from www.gimp.org

Adobe Photoshop Elements

Another program for doing this is Adobe Photoshop Elements. This program sometimes comes free on CD when you buy a scanner or digital camera, or costs about $100 to buy from a shop. This is the program used on the laptops in the State Library’s Mobile Multimedia Lab. There are dozens of other programs which can edit digital photos, including Corel Photopaint.

Digital Video (Putting the digital story together)

Windows Movie Maker

This program comes free with Windows XP and Vista and can be used to create the digital story. It is very simple to use and enables you to make basic digital stories. For many people, Windows Movie Maker satisfies all their digital storytelling needs. If you want more control of the look and feel of your story and want more options for how it is arranged, a higher-end program such as Adobe Premiere Elements may suit you better. Using Windows Movie Maker, you can put your digital photos, video and sound together and create the final video.
Adobe Premiere Elements

This program does much the same thing as Windows Movie Maker. It takes a little longer to learn but lets you be more creative with your movie. A license generally costs $100-125. It has far more features and once you get to know it, it is actually just as easy to use as Windows Movie Maker. However, both programs will make a perfectly good digital story! There are dozens of other good digital video programs; examples include Final Cut and Adobe Premiere Standard.

Putting the Digital story together

This information is intended to help you put a digital story together using any equipment and software you choose to use. For specific help on using the software of your choice, don’t forget the help menu which comes with all programs. Also, the internet is an amazing source of help with just about any program. You can find lots of forums on popular video editing programs. This first bit is a bit boring, but very important (nearly as important as getting permission to use pictures or sound which belong to someone else in your digital story). If you don’t know where the files are on your computer, you will soon get into a terrible mess and maybe even have to start your digital story again! The following explanations refer to Windows but the same ideas apply to Macs.

The Hard Disc

The hard disc (or hard drive) is where files are stored on your computer. It is usually called the C: Drive but can be any letter. When creating a digital story you will have quite a lot of different files (pictures, sound and others) which make up the story. It is important to create a folder on your hard disc which is just for your digital story. It is a good idea to name the folder with the name of your story. If you share the computer with others perhaps name it like this: My Bushwalking Story by Joe Bloggs DO NOT DELETE ph. 8765 4321. Make a separate folder on the hard disc for every single digital story (don’t put more than one story in the folder).
To create folders and subfolders in Windows:
- Navigate to Explore by right clicking on the Start icon (located bottom left of the screen) and clicking on Explore.
- Go to the hard drive where you plan to save your folder and subfolders (I am using the C: drive), highlight the drive, right click, and navigate down the menu to select New and then Folder. (The method of creating folders will differ depending on what operating system is running on your computer.)
- A new folder is now created and can be named.
- Type in the name you wish to use for the new folder.
- You can create subfolders within this new folder by following the procedure outlined for creating folders. That is, highlight your new folder, right click, go to New and select Folder. Name your new subfolder.

For more information on how to create and name folders see Windows Help. Inside the main folder create ‘sub’-folders where you can keep the different types of files tidy and separated (this makes them much easier to find). It doesn’t matter what these folders are called as long as they make sense to you. You may have the following subfolders:

C: My Bushwalking Story by Joe Bloggs DO NOT DELETE ph. 8765 4321
- Images (containing the images used in your story)
- Audio (containing the recorded audio file of your narration)
- Script (containing a transcript of your story)
- Project files (containing files associated with the story created using your video software, eg. Adobe Premiere Elements)
- Video (containing the final AVI file of your rendered story)

If you are using Windows, don’t let Windows tell you where to put your files. Don’t save things to My Photos, My Sounds, My Documents, etc., or your files will get very lost. Put everything in folders that are inside your main folder for the story. Remember to make a main folder for every separate digital story.

Making the Story

Write a short story that you will later narrate and record.
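If you prefer, the same one-folder-per-story layout can be created with a short script instead of clicking through Explorer. This is just an optional sketch: the story name and subfolder names below are the manual's examples, and the folder is created in a temporary location for demonstration; you would substitute your own story name and location.

```python
import os
import tempfile

# Sketch: build the one-folder-per-story layout described above.
# Story name and subfolder names are the manual's examples only.
base = os.path.join(tempfile.mkdtemp(), "My Bushwalking Story by Joe Bloggs")
subfolders = ["Images", "Audio", "Script", "Project files", "Video"]

for sub in subfolders:
    os.makedirs(os.path.join(base, sub), exist_ok=True)

print(sorted(os.listdir(base)))
```

Run this once per story; every story still gets its own separate main folder.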
As the movie will be between 2 and 3 minutes long, your story should be about 200 to 250 words. Create an electronic version of your script (using software such as Microsoft Word) and save it to the subfolder called “Script” that you created in your digital story folder on your hard drive. Once you are happy with your written story you will need to create an audio file which can be used in the digital story. It is good to save your narration as a .WAV file (some other audio files, such as MP3s and WMAs, may also work depending on your program). You will need to be in a fairly quiet space with your recording device. Be aware that background noise from cars, birds, air conditioners etc may appear in your sound recording. It is a good idea to test the sound recording with headphones before you record your script for real. This will ensure that the audio is actually being recorded and that there are no interfering background noises. You can record the whole script in one track, or you can record it in multiple tracks. Each way is fine. Be aware, though, that if you record a lot of audio, you may have to trawl through the recording to get the part you want. Speak clearly and fairly loudly (don’t be too close to the microphone or you’ll get distortion), and try to put a bit of life into your voice. Listen back to your recording and make sure it is nice and clear and not distorted. If it’s not right, adjust the recording levels and maybe even your distance from the microphone. When you are happy with it, save the file or files to you-know-where (see File management). If you stumble or make a mistake in reading your script, stop, take a breath and start again from the beginning of the sentence. Mistakes can be edited out when you import the audio into your project. It is easier to edit out mistakes that are long and defined than to edit out short snippets like “ums”. Any other sounds such as music or other sound effects should also be saved into a subfolder for later use.
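As a rough cross-check between script length and the 2-3 minute target, narration time can be estimated from the word count. The pace used below (about 100 words per minute, which is simply the manual's 200-250 words per 2-3 minutes restated) is an assumption; adjust it to your own reading speed.

```python
# Estimate how long a script will take to narrate.
# Assumes a relaxed pace of ~100 words per minute, consistent
# with the 200-250 words for 2-3 minutes guideline above.
def estimated_minutes(script: str, words_per_minute: float = 100.0) -> float:
    return len(script.split()) / words_per_minute

sample_script = "word " * 225          # stands in for a 225-word script
print(round(estimated_minutes(sample_script), 2))  # → 2.25
```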
Getting the Pictures into the Computer

You can download pictures directly from a digital camera into a subfolder (i.e. the subfolder you created called “Images”) or you can save the files from a scanner to the same place. Any changes you make to the pictures will be permanent, so be sure to save a copy elsewhere if you’d like to keep the original, unchanged images (i.e. the subfolder you created called “Master images”). Digital video can be captured directly into most video-editing programs. Older (non-digital) video cameras will have to be plugged into a capture card (a special video-capturing device) and the video file captured to the hard disc. Your pictures may also be on a CD, DVD or USB memory stick. To download from a digital camera either connect it directly to the computer using a USB cable or take the memory card out of the camera and insert it into a card-reader which is plugged into the computer. Don’t forget to put them all in the right place!

The Pictures (still images and moving video)

Once you have copied your digital photos, video or scanned photos into subfolders on your hard disc you will want to have a look through them and make sure you have enough (about 20-30 is a rough guide for a digital story that is between 2 and 3 minutes long). You may wish to edit the pictures to change the shape, the colours, or the brightness. Use one of the photo-editing programs to do this (don’t forget the help file).

4:3 and 16:9

4:3 and 16:9 are the two main shapes used for video these days. 4:3 is like the older style TVs and 16:9 is Widescreen. You can use either for your digital story. If you are including video which is already 4:3 in shape it is much easier to make your digital story in that shape (this is usually the shape of pictures from digital cameras too).
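Before resizing in a photo editor, it can help to know how much of a photo a 4:3 or 16:9 crop will keep. The helper below is an illustrative sketch only (it is not part of any editing program): it computes the largest centred crop box with the target shape, which you would then resize to the recommended pixel sizes.

```python
# Largest centred crop of a photo matching a target aspect ratio
# (4:3 for standard TV, 16:9 for widescreen). Illustrative helper only.
def centered_crop(width, height, aspect_w, aspect_h):
    """Return (left, top, crop_width, crop_height) in pixels."""
    if width * aspect_h > height * aspect_w:   # photo too wide: trim the sides
        crop_w, crop_h = height * aspect_w // aspect_h, height
    else:                                      # photo too tall: trim top/bottom
        crop_w, crop_h = width, width * aspect_h // aspect_w
    return ((width - crop_w) // 2, (height - crop_h) // 2, crop_w, crop_h)

# A 3000x2000 camera photo cropped for widescreen 16:9:
print(centered_crop(3000, 2000, 16, 9))  # → (0, 156, 3000, 1687)
```

Note that a 4:3 photo straight from a camera needs no trimming for standard TV: `centered_crop(800, 600, 4, 3)` returns `(0, 0, 800, 600)`, i.e. the whole frame.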
If you would prefer the widescreen look you should reshape your pictures in the photo editing software to be 16:9, and you should only use video which is already 16:9 (some video cameras can take 16:9 shaped video). If that all seems too hard, stick with 4:3 which is pretty standard. Here is some technical detail for making sure your pictures are the right shape using your photo-editing software.

4:3 aspect ratio for Standard TV: Width 768 pixels, Height 576 pixels, Resolution 72 dpi
16:9 aspect ratio for Widescreen TV: Width 1024 pixels, Height 576 pixels, Resolution 72 dpi

72 dpi is the resolution of a computer screen, so anything saved at a higher resolution will not look any better. Make sure you do any zooming, cropping, editing, etc., before you resize your images to these sizes. You will find the Cropping tool very useful when it comes to resizing your pictures without distorting the shape. All good photo programs have a Cropping tool.

Note: With Adobe Premiere it isn’t really necessary to resize the pictures first as you can do this when you are using the program. But with Windows Movie Maker it is a good idea to have the pictures the right shape to start with.

Important Note: If you copy pictures directly from your digital camera into the computer and then crop or edit them, they will be changed forever unless you rename them first. So it’s always a good idea to be messing around with a copy of your digital files rather than the originals (unless you don’t mind).

Editing photographs in Adobe Photoshop Elements
- Open Adobe Photoshop Elements.
- Select the crop tool (located midway down the left hand vertical tool bar). This is the tool that you will use to resize your photographs.
- Type in the ratio you have decided to use (i.e. standard TV or widescreen TV). This ratio is based on your personal preference.
NOTE: If you crop your images to the standard TV ratio, these images will fill the entire screen when shown on a standard TV. If you crop your images using the widescreen ratio, these images will fill the entire screen when shown on a widescreen TV.

Standard TV: Width 768 px, Height 576 px, Resolution 72
Widescreen: Width 1024 px, Height 576 px, Resolution 72

- Get the photographs into Photoshop Elements by going to File, and selecting Open.
- Navigate to the subfolder where you have saved your photographs (i.e. Unedited photographs). Select a photograph by highlighting it, and click on Open.
- Your image has now been opened in Photoshop Elements. Ensuring that the crop tool is still selected, drag the tool over the image, hold down the left side of the mouse and drag out the box to the desired shape. You now have a box on your image, with shading on the outside of the box. The area inside the box is the part of the image that will be left when you crop it.
- You can resize the box by dragging it from the corners, and the box can be moved around on the image by using the up and down and side arrow keys. If you make a mistake press the ESC key located on the top right of your keyboard and start again.
- Once you are happy with the size and position of your crop, double left click on your image. The image has now been cropped to the ratio you have elected to use in your movie.
- Create a copy of the photograph you have just edited by going to File and choosing Save As. Save the edited photograph to a new subfolder (eg. Images - edited). Making this new file and placing it in a different subfolder will ensure that you do not irrevocably change one of your original photographs. It also ensures that later, when you are importing your photos into the movie making software, all your edited photographs are together in the one location.
- Navigate to the subfolder where you plan to save your edited images. Rename the edited photo (if you wish) and click on Save.
- Ensure that the Quality is set to Maximum and click on OK. Your edited photograph has now been saved.
- You can edit your photographs in other ways. The most common effects can be found in Enhance (located in the top toolbar).
- For example, to adjust the colour contrasts click on Enhance, then Auto levels.
- You can also manually adjust the colour, brightness, contrast, etc., of the images by going to Enhance, and then selecting either Adjust lighting, Adjust colour, or Adjust brightness/contrast.
- Once you have finished editing the photographs/images, go to File and do a Save As (as shown in steps 9 – 11), saving the image to the edited photos subfolder (i.e. Photos ready to use).

Creating a black slug

A black slug is used to create some black space at the very start and end of your movie (before the opening titles, and after the closing credits). This allows for a couple of seconds of black screen when you first start playing your movie, and after your movie ends. This is very important for stories you make with Windows Movie Maker, because you cannot have unfilled spaces between images. You can have spaces in Premiere Elements, so a black slug is not as necessary.

- Open Adobe Photoshop Elements. (If Photoshop is already open, close all images so that you have a blank grey screen on which to work).
- Go to File and click on New.
- Enter the width, height and resolution and click on OK. The dimensions you enter will depend on whether your movie is to be in standard TV ratio or widescreen ratio.
- You now have a white page in the ratio of standard TV or widescreen. Click on the very small square at the very bottom of your left hand vertical toolbar (Default Foreground and Background Colours).
- Click on the Paint Bucket tool, then click on the white page you just created. The page should now be black.
- Go to File and do a Save As (as shown in steps 9 – 11), saving the black slug to the edited photos subfolder (i.e. Photos ready to use).
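If you do not have Photoshop Elements, any tool that can save a solid black image will do for the slug. As an optional sketch, the script below writes a plain black 768 x 576 frame as an uncompressed 24-bit BMP file, a format the video editors can import; the output location here is just a temporary folder for the example.

```python
import os
import struct
import tempfile

# Sketch: write a solid-black 768x576 frame as an uncompressed
# 24-bit BMP, usable as a black slug instead of one made in Photoshop.
def write_black_bmp(path, width=768, height=576):
    row_bytes = (width * 3 + 3) // 4 * 4          # BMP rows are 4-byte aligned
    pixel_bytes = row_bytes * height
    with open(path, "wb") as f:
        f.write(b"BM")
        f.write(struct.pack("<IHHI", 54 + pixel_bytes, 0, 0, 54))  # file header
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                            0, pixel_bytes, 2835, 2835, 0, 0))     # info header
        f.write(b"\x00" * pixel_bytes)                             # black pixels

slug_path = os.path.join(tempfile.mkdtemp(), "black_slug.bmp")
write_black_bmp(slug_path)
print(os.path.getsize(slug_path))  # 54-byte header plus 768*3*576 pixel bytes
```

For the widescreen layout you would call `write_black_bmp(slug_path, 1024, 576)` instead.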
Putting the Sound and the Pictures together

All digital video editing programs work in a similar way. You create a PROJECT file (give it the name of your story), which is simply a file which puts all the sound and pictures together on a TIMELINE, which sorts them into a playing order.

Important Note: The files are not actually copied into the project as they are too big. The project simply remembers where the files are on the hard disc. This is why it is so important to have your files all tidy and in the correct place, i.e., your folder and subfolders.

Once you have imported your sounds and images into the project (see the program help file), just drag them into the timeline in any order you like and start assembling your movie. Don’t forget to save regularly. It is common to put the sound file/s onto the timeline and then match the pictures and video to it, but you can do the images first if you prefer. If using Windows Movie Maker beware that once you have a long timeline worked out, making changes to earlier parts of the project will move all the files around following the change. (Adobe Premiere doesn’t do this). With Windows Movie Maker work from left to right – get the beginning right first. With Adobe Premiere just start wherever you like – you can jump around within the timeline. At this stage you can add effects like dissolves which make pictures blend into one another. Experiment with the effects but be careful that they don’t spoil your movie by becoming a distraction. Dissolves are pretty safe to use but if you use any other effects at all, use them very sparingly. When you think you are happy with the way everything looks, you can add titles at the beginning of the timeline and credits at the end. Now it’s time to render your first draft of the movie.

Creating your movie in Adobe Premiere Elements 7
- Open Premiere Elements and click on New Project.
- Select the location where you want to save your project (in the folder you’ve set up) and the name of your project.
- Click Change Settings – select either PAL-DV-Widescreen or PAL-DV-Standard. Click “OK” when you’ve selected your option.
- Click “OK” again to save the file in your specified location.
- Your Premiere Elements project has now been named and filed correctly. Any saving you do from now on while working on your movie will be backed up to the correct folder.
- Import pictures into your project by clicking the blue tab labelled “ORGANIZE”, then click Get Media below. You will be given the option of where you want to get your media from. You should be getting it from “PC Files and Folders” if you have saved your media to the hard drive, but you are also able to import media from camcorders, DVDs and other devices directly. Go to the location on your hard drive and select your pictures. They will appear in the right-hand window.
- Move the images you’ve selected onto the timeline by clicking and dragging them. Click on the image you want, hold down, and then release over where in the timeline you want the image to appear.
- Continue to drag the photos on to the timeline in the order you’d like them to appear on the “Video 1” track. There are several tracks that you can add your media to: Video tracks 1, 2 and 3 (for images and video); Soundtrack; Narration; and Audio tracks 1, 2 and 3 (for sound).
- Once you have all of your photos in the timeline, you can repeat the process outlined in step 7 to add the black slug to the end and beginning of your movie. As a default, images imported onto your timeline will appear on screen for 6 seconds.
- Import narration by clicking on the Organize tab and selecting Get Media, like you did in step 6. Select “PC Files and Folders” again, but this time select the sound file from the folder you have set up (eg. a folder called Narration).
- Highlight the audio file you just imported into the Media box and drag it into the timeline (Audio 1), holding down the left side of the mouse, and releasing it when you have it positioned correctly over the timeline.
- You should now have an audio track on the timeline located below your video track.
- When working in Premiere and organising your photos and audio in the position/order you want them in, you will find it useful to zoom in and zoom out (magnifying or reducing the view of the area you are working on). This enables you to better see what you are doing. Do this by clicking on the magnifying glass located on the top of the timeline. You can also press the “+” button to zoom in or the “-“ button to zoom out.
- Click and drag your images around on your timeline. You can select an image, drag it, and move it wherever you want. Listen to your audio track and line this up with the images on your timeline. Pressing the space bar stops and starts the playback of your video.
- You can extend the length of the image (i.e. how long the image will appear during your movie) by holding your cursor over the image until you see the red arrow. Drag the arrow in the direction you wish (either extending it or shortening it).
- To add any extra media on to your timeline, click the “EDIT” tab and then the Project button. This will show all the media that you have imported – these are available to go into your story. They are your “ingredients list”.
- Once you have your images roughly lined up in time with your narration, go to the EDIT tab and select Transitions. Make sure you have Video Transitions selected in the drop-down menu (you can also have Audio Transitions). Two of the most common effects are Cross Dissolve and Dip to Black. In general, Dip to Black is used for the first and last image, and Dissolves are used to make a slow, smooth transition from one image to the next.
- Click on the video effect you plan to use (i.e.
Dissolve) and then drag it onto the bluey-grey part of the image at the point in the timeline you want the effect to appear. To drag the effect hold down the left side of the mouse, and then take your finger off the mouse once it is positioned in the place you want it.
- These effects can be extended and shortened in the same way as you extended the images (i.e. by positioning your cursor over the effect, and then dragging on the red arrow with your cursor).
- To add a basic default title to your story go to Title in the top menu bar. Then click New Title then Default Still.
- You can then type the title of your story in the frame. This default Title will appear as white text on a black background if it is not played over the top of another image. If there is another image underneath the title, the title will appear over it on playback of the movie.
- Edit the text and enter the title of your movie. Centre the title vertically and horizontally within the title page by clicking on the Vertical Center and Horizontal Center icons located to the right of the main image. You can also change the font type and font size on the right-hand side under “Text Options”.
- Click and drag the Title page to the spot where you want it on your timeline. You can create space at the beginning of your timeline by selecting all of your images, clicking on them, holding and dragging them to the right of screen. You can either place the basic Title over an existing image, which it will appear on top of, or you can place it by itself so it appears over a black background.
- To create a title from a template, go to EDIT, then click on Titles.
- Click and drag the title you want on to your timeline. Usually the graphic of the template will be selected for you and all you have to do is type in your text.
- You can make the title longer or shorter by clicking on the title, hovering over one end of the clip and then dragging to lengthen or shorten it.
- Add video transition effects such as dissolves to the start and end of your title by dragging these effects from EDIT Transitions.
- To create simple credits for your story, go to EDIT Titles then select a credit template you like. The template “generic1_credits” is fairly useful. As with the title, if the credits appear over images in the timeline, they will be layered on top. If there are no images where the credits have been imported, they will come up over a black background.
- You can now type in the title of your story in the credits, along with any other text you want to appear at the end (eg. citing images you may have used from other sources).
- Once you have typed in the text and you’re happy with it, you can determine how quickly the text rolls through. If you shorten the credit clip, it will run through faster. If you lengthen the amount of time the clip has to run, the text will run slower.
- Edit the text in the template in the same way as you did when creating the title.
- When you are happy with the layout of your images, and have finished editing the audio and adding all the effects, and the title and credits, you are ready to render your movie.

Rendering to AVI (finishing the movie)

The video-editing program will have an option to export as a movie. If given a choice, choose DV-AVI (PAL) uncompressed. What you want to end up with is an AVI file. When the program finishes exporting (which could take 10-15 minutes) the AVI file will probably be about 700-800 MB in size. Watch the AVI using movie viewing software (such as Windows Media Player) and see what you think of it. You will probably notice things you would like to change, so it’s back to the project and the timeline to move things around. Remember to save regularly when playing with the timeline. When reviewing the finished AVI file check that the sound is nice and clear. Try checking it with speakers and/or headphones.
Rendering your Adobe Premiere Elements movie
- Go to the SHARE tab then Personal Computer.
- Select DV AVI and then select DV PAL Standard or DV PAL Widescreen (depending on how you set up the project at the beginning). Then name your file and specify where you want it to be saved (in the appropriate folder on the hard drive). Then click “Save”.
- The movie is now rendering and will take a few minutes to complete this process.
- Once the movie has finished being rendered you can test it by watching it on your computer. Using Windows Explorer, navigate to the subfolder on your hard drive where you saved the AVI file.
- To play the movie, select the AVI file, right click and select Open with. If you have Media Player on your computer, select Media Player.

When you are happy with the result then the movie is finished! Almost time to relax and show it off to everyone you know. Firstly though, back up your files. If the size of your main folder is 700 MB or less (right-click on the folder and check under Properties) you can just save the whole folder to a blank CD with a CD burner. If it is larger than 700 MB you will have to use a blank DVD and a DVD burner. If you have an external hard drive or USB stick, you can copy the whole folder to it. The finished AVI file can also be saved onto a DVD and played on a normal DVD player with a TV. If it is going to be put on the Internet, a copy of the AVI file will need to be supplied to someone who will convert it into an Internet video.

Requirements for the State Library of Queensland’s collection

A copy of the script will need to be supplied to the State Library of Queensland, along with a copy of one of the images from the story and the final AVI. For State Library’s purposes, each participant needs to provide:
- Video file (DV.avi format)(.avi) - embedded audio narration;
- PC compatible;
- approximate run time of 2-4 minutes (including titles and credits);
- PAL standard;
- either 4:3 or 16:9 screen ratio.
- A representative still image from your story – 400ppi tiff, 4,000 pixels along the widest dimension. If this is not possible please supply the highest resolution that is available in tiff format (.tif).
- A filled-in copy of the “About my story” form (.xls) that helps us to describe your story and position it on the Queensland Stories website.
- An electronic copy of your story transcript in (.txt) or (.doc) format.
- A completed Deed of Gift form signed by the creator.
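The CD-versus-DVD decision in the backup step comes down to whether the main story folder exceeds about 700 MB. As an illustrative sketch (the demo folder and its single file are invented for the example), the folder size can also be totalled with a few lines of script:

```python
import os
import tempfile

# Sketch: total a story folder's size to pick a backup medium,
# following the manual's ~700 MB CD limit. Demo data is invented.
def folder_size_bytes(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def backup_medium(path, cd_limit=700 * 1024 * 1024):
    return "CD" if folder_size_bytes(path) <= cd_limit else "DVD"

demo = tempfile.mkdtemp()                      # throwaway story folder
with open(os.path.join(demo, "story.avi"), "wb") as f:
    f.write(b"\x00" * (1024 * 1024))           # one 1 MB stand-in file
print(backup_medium(demo))  # → CD
```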
There is growing evidence of changes in the timing of important ecological events, such as flowering in plants and reproduction in animals, in response to climate change, with implications for population decline and biodiversity loss. Recent work has shown that the timing of breeding in wild birds is changing in response to climate change partly because individuals are remarkably flexible in their timing of breeding. Despite this work, our understanding of these processes in wild populations remains very limited and biased towards species from temperate regions. Here, we report the response to changing climate in a tropical wild bird population using a long-term dataset on a formerly critically endangered island endemic, the Mauritius kestrel. We show that the frequency of spring rainfall affects the timing of breeding, with birds breeding later in wetter springs. Delays in breeding have consequences in terms of reduced reproductive success as birds get exposed to risks associated with adverse climatic conditions later on in the breeding season, which reduce nesting success. These results, combined with the fact that the frequency of spring rainfall has increased by about 60 per cent in our study area since 1962, imply that climate change is exposing birds to the stochastic risks of late reproduction by causing them to start breeding relatively late in the season. Climate change has potentially profound impacts on biodiversity—notably on population extinctions—and evidence is accumulating that such effects are already apparent in many systems [1–3]. The mechanisms through which climate change impacts on biodiversity are varied, but one potentially important mechanism concerns changes in phenology. There is an increasing number of examples now showing changes in the timing of important ecological events, such as flowering in plants and reproduction in animals, in response to climate change [4–9].
Wild bird populations are important model systems for exploring how phenology, particularly the timing of breeding, is responding to climate change [4,10–12]. Recent work has shown that individual birds are remarkably flexible in their timing of breeding in a changing climate, allowing them to closely track changes in the environment [12–14]. This plasticity represents the ability of a single genotype to alter its phenotype in response to changing environmental conditions. Phenotypic plasticity is ecologically very important because it can prevent a mismatch between breeding phenology and environmental conditions that would reduce individual fitness and ultimately population growth. It is therefore important to understand how climate change impacts the breeding phenology of individuals within a wild population in order to comprehend long-term population-level consequences. Current knowledge about climate impacts on breeding phenology is limited and biased towards Northern Hemisphere temperate species, which have been shown to vary their timing of breeding in relation to temperature and breed earlier in warmer springs [11,12,17]. Tropical species, especially island endemics that form an important component of global biodiversity, have yet to be explored in this context, with notable exceptions like Darwin's finches. Here, we report the response to changing climate in a tropical wild bird population, which differs notably from the temperature-induced changes observed in temperate species. Using an extraordinarily detailed long-term dataset on a formerly critically endangered tropical bird, the Mauritius kestrel (Falco punctatus), we show that the timing of breeding is delayed in response to deteriorating climatic conditions, with detrimental consequences for reproductive success.

(a) Study population and data

The study was conducted in the east coast mountain range of Mauritius (20.3° S, 57.6° E).
Our study area covers 163 km2, encompassing a predominantly forested mountainous area bordered by agricultural land cultivating primarily sugar cane. Our study population was extirpated by the 1960s, but reintroduced at the end of the 1980s as part of a recovery programme. Subsequent to its reintroduction, the population has grown and has now stabilized at approximately 42–45 breeding pairs. Since reintroduction, the population has been intensively monitored. The majority (more than 90%) of individuals entering the population are individually colour-ringed in the nest, and the study area is a closed system—no colour-ringed immigrants have been recorded within the population, and no colour-ringed emigrants have been recovered or resighted elsewhere. Mauritius kestrels have a socially monogamous, territory-based breeding system, and their breeding season spans the Southern Hemisphere spring/summer, with the earliest eggs (clutch size two to five) being laid in early September and the latest fledgelings (brood size one to four) leaving the nest in late February. Monitoring consists of locating all nesting attempts by checking all previously used sites and searching likely new areas. Each breeding season the majority (more than 90%) of all breeding pairs are located and identified, and their nesting attempts are monitored, providing data on the timing of breeding (date the first egg is laid), clutch size (the number of eggs laid) and the number of chicks that subsequently fledge. Individuals are marked (using a unique set of coloured leg rings) and are sexed in the nest using a biometrics-based method that has been validated by genetic analysis. Kestrels are single-brooded, although a second clutch is laid on occasions (usually only if the first clutch or brood is completely lost).
This intensive monitoring programme means that we have, until 2005, complete, spatially referenced life histories for approximately 570 individual kestrels as the population has developed, with over 600 breeding attempts followed since 1987. Daily rainfall data (millimetres) are collected using a standard rain gauge from seven stations situated throughout our study area by local sugar estates for the purposes of crop management. For our analyses, we used data from a single station at Camizard because this is the longest available time series (from 1962 onwards). We show elsewhere that the time series trends in rainfall frequency recorded at Camizard are similar to those seen in the other stations, implying that changes in rainfall are occurring throughout our study area. (b) Rainfall and timing of breeding Previous work on our study population has shown that the frequency of rainfall in the July–September (spring) period is correlated with the timing of breeding, with birds breeding earlier in drier springs. Further analysis and model selection using the Akaike information criterion (AIC; see electronic supplementary material) indicated that within the July–September period it was the frequency of rainfall in August that had a significant impact on the timing of breeding. Initially, to describe the relationship between timing of breeding and frequency of rainfall in August (R), a linear mixed model was constructed using the first egg date (i.e. the timing of breeding, TB) as the response variable, the number of rain days in August as a fixed effect and the female identity as a random effect: TBij = β0 + β1Rij + u0j + e0ij, where subscripts 0, i and j refer to the structuring of the data: Rij is the R value of measurement i from subject j, β0 is the intercept, β1 the slope of the regression between breeding time and rain days in August, u0j the random intercept and e0ij the error term.
Male and female age groups were also used as fixed effects as these are known to influence the timing of breeding (the electronic supplementary material). The August rain days in this model, however, combines both the variance observed between individuals and that observed within an individual's life history into one predictor variable. To determine whether variation seen in the timing of breeding within the population was owing to differences between individuals or owing to within-individual responses to frequency of rainfall, a methodology known as ‘within-subject centring’ was used [27,28]. Hence, a second model was constructed with two new predictor variables derived to describe the between-subject variance (βb) and within-subject variance (βw). The variable describing the between-subject variance was simply the mean number of rain days experienced by each individual, and the variable describing the within-subject variance was the value obtained by subtracting the individual's mean rain value from each observation value. A third model was constructed to determine whether the estimated effects of between- and within-subject variances were statistically different. This model combined the original predictor effect (August rain days) and a new predictor effect that captured only the between-subject variation. In this model, the between-subject effect actually represents the difference between the between- and within-subject effects in the second model. Finally, to determine whether there was substantial between-subject variation in the slopes of the within-subject effect, a fourth model was constructed by adding a random slope (within-subject effect, uwj) to the random intercept (female identity) of model 2. (c) Timing of breeding and number of fledgelings produced To determine the relationship between reproductive success and timing of breeding, the total number of fledgelings produced by a female in year t was used as the measure of her reproductive success in year t (i.e.
including first clutches and relays). This was then included as the response variable in a model framework that was similar to models 1, 2 and 3, exploring the relationship between timing of breeding and rainfall. These models were generalized linear mixed models (GLMMs) with a Poisson distribution incorporating the identity of the individual female as the random effect and the timing of breeding (i.e. first egg date, between-subject effect and/or within-subject effects, depending on the model) as a fixed predictor. The season per year of breeding was also incorporated as a fixed effect to account for seasonal variation in fledgeling production. (d) Stochastic effects of rainfall on the seasonal decline in number of fledgelings produced We constructed a series of models to explore how between-year variation in rainfall affects the rate at which the number of fledgelings produced declines within seasons so we could gain a more detailed understanding of the potential costs of breeding relatively late in the season. We predicted that if rainfall modifies the seasonal decline in the number of fledgelings produced, we should detect a significant interaction between the timing of breeding and rainfall. To test this, we constructed a series of models with rainfall and rain day variables from different time periods from November–January, and identified plausible models from this candidate set using AIC. In addition, we wished to explore whether any impact of rainfall in the previous analysis occurred, because rainfall conditions interacted with the timing of breeding to affect nest survival rates. To examine this, we modelled the survival probability of eggs from laying to fledging as the response variable in a comparable model to that identified above for the number of fledgelings. Again, we predicted that if rainfall modifies the seasonal decline in egg survival, we should detect a significant interaction between the timing of breeding and rainfall. 
Models were constructed assuming binomial errors. (a) Rainfall and timing of breeding Our results indicate that individual females begin breeding progressively later as the number of rain days in August increases (figure 1). However, while there is significant evidence for within-individual response to rain (individual level plasticity), there is no statistical evidence to indicate that the population response arises from differences between individuals (table 1 and models 2 and 3); that is, the population-level pattern of plasticity only reflects individual responses. Furthermore, there is no evidence for significant differences between individuals in their within-subject slopes (table 1 and model 4), indicating that all individuals displayed similar responses to more rain days in August. This means that individual female kestrels delay breeding as the frequency of spring rainfall increases. (b) Timing of breeding and number of fledgelings produced The number of fledgelings produced by a female is significantly related to her timing of breeding (r2 = 0.07, p < 0.05; figure 2a,b), indicating that early-breeding females have a higher reproductive success. While there is strong evidence for a within-subject effect (table 2 and model 2; i.e. as females breed later they fledge fewer chicks), there is only weak evidence to suggest that this is different from a between-subject effect (table 2 and model 3). Thus, it is probable that both within- and between-subject differences in the timing of breeding affect the number of fledgelings produced (i.e. while within-subject delays in timing of breeding reduce the number of fledgelings produced, subjects breeding later on average also tend to have lower numbers of fledgelings, even after controlling for age differences). 
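The within-subject centring behind models 2–4 can be sketched concretely. This is a hypothetical illustration with invented females and rain-day values (the paper does not publish code); it only shows how the two derived predictors are constructed:

```python
import pandas as pd

# Hypothetical breeding records: one row per nesting attempt, with the
# female's identity and the number of August rain days she experienced.
df = pd.DataFrame({
    "female": ["A", "A", "A", "B", "B", "C"],
    "rain_days": [18, 22, 20, 15, 25, 21],
})

# Between-subject predictor: each female's mean rain-day exposure.
df["rain_between"] = df.groupby("female")["rain_days"].transform("mean")

# Within-subject predictor: deviation of each observation from that mean.
df["rain_within"] = df["rain_days"] - df["rain_between"]

# The two components recompose the original predictor exactly.
assert (df["rain_between"] + df["rain_within"] == df["rain_days"]).all()
```

Regressing the timing of breeding on these two columns (with a random intercept per female) separates individual-level plasticity (the within-subject slope) from differences between females (the between-subject slope).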
(c) Stochastic effects of rainfall on the seasonal decline in number of fledgelings The decline in number of fledgelings produced over time within a breeding season gets steeper as rainfall increases during the period chicks are in the nest (December, r2 = 0.63, p < 0.01; figure 2c and table 3). There is no evidence to suggest that the frequency of rainfall in spring (August) had a similar effect to December rainfall (i.e. August rain days do not have a significant impact on the number of fledgelings; r2 = 0.20, p = 0.65). The seasonal decline in number of fledgelings is therefore independent of rainfall conditions in spring that determine the timing of breeding. Similarly, when considering egg survival probability to fledging, there is also a significant negative impact of December rainfall (z = 2.36, p < 0.05), as well as an interaction between December rainfall and the timing of breeding (z = −2.72, p < 0.01; figure 2d). This implies that birds breeding relatively late in the season have fewer fledgelings than earlier-breeding individuals because of an increased risk of egg or chick mortality caused by rainfall associated with the start of the cyclone season. (d) Implications of changes in rainfall Our results have potential implications for our understanding of the impact of changes in climatic trends because the number of rain days in August has increased significantly since the 1960s (1962–2005, r2 = 0.31, p < 0.001; figure 3), implying that the timing of breeding should be getting progressively later. This change is not apparent in our data (1991–2005, r2 = 0.03, p = 0.57), but this is probably because no significant change in the number of August rain days occurred over the time period for which we have breeding data on the kestrels (1991–2005, r2 = 0.002, p = 0.23; figure 3, shaded area). 
However, the mean number of rain days in August is significantly greater during the period 1991–2005 than during the period 1962–1990, before kestrel data are available (1991–2005: 21.60 ± 4.19 days; 1962–1990: 17.28 ± 4.60 days; t1,42 = −3.04, p < 0.005). We found no evidence to suggest that the amount of rainfall in December has changed significantly over time (1962–2005, r2 = 0.03, p = 0.24), implying that any change in rainfall is unlikely to alter the within-season decline in number of fledgelings produced in kestrels. Consequently, while changing rainfall patterns have implications for the timing of breeding in kestrels, they seem to have a negligible impact on the numbers of fledgelings associated with the timing of breeding. Phenological shifts, such as changes in timing of breeding, are key processes affecting the impact of climate change on wild populations. Our study has explored the impacts of climatic conditions on the breeding phenology and reproductive success of a tropical island endemic bird, the Mauritius kestrel. Our results show that females begin breeding later as the frequency of rainfall in August (spring) increases, and all individual females within the population appear to respond to changing rainfall conditions in the same way. Birds breeding relatively late in the season have lower reproductive success (fledgeling production) compared with birds breeding earlier. This effect is apparent both within and between females. This appears to be because breeding later increases the risk that eggs and chicks in the nest are exposed to rainfall, which reduces their survival. These results are important in the context of climate change, because we also show that the frequency of spring rainfall has increased in our study area over the last 50 years. There are two important mechanistic questions that arise from these results. (i) Why do female Mauritius kestrels breed progressively later as the frequency of rainfall in spring increases?
(ii) Why do delays in breeding result in reduced reproductive success? The most likely cause of breeding delays in relation to the frequency of rainfall is that the hunting efficiency of males, who provision the females, might be reduced because of lower prey detection in wetter conditions, thereby reducing the rate at which females can acquire resources immediately prior to breeding. We have no data to test this possibility directly, but there is evidence that certain tropical birds have experienced weather-induced risks to resource availability. Delayed breeding might cause reduced reproductive success in Mauritius kestrels owing to a mismatch between the timing of breeding and peak food abundance. It is typical for many seasonally breeding birds to time their egg laying to coincide with a subsequent peak in food abundance, which is important for chick-rearing and hence reproductive success [8,30]. While it is possible that a seasonal mismatch plays a role, our data suggest a more direct mechanism: delayed breeding increases the risk that nests will be exposed to rainfall, which reduces egg survival to fledging. This reduced survival rate could occur for two main reasons—the hunting efficiency of breeding adults might be reduced because of lower prey detection in wetter conditions, or nest cavities might be flooded, increasing the risk of hypothermia in chicks [31,32]. Although we lack detailed data to distinguish between these possibilities, our results are consistent with other studies on raptor species, which show that relatively high reproductive success and the production of large broods are associated with periods of low precipitation [31–37]. Our results have implications in the context of climate change because we show that the frequency of spring rainfall has increased significantly since the 1960s in our study area.
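The comparison of August rain-day frequency between 1991–2005 and 1962–1990 reported earlier (t1,42 = −3.04) can be reproduced from the published summary statistics alone with a pooled-variance two-sample t-test. This is a sketch from the reported means and standard deviations, not a re-analysis of the raw rainfall series:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic with pooled variance (df = n1 + n2 - 2)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Reported August rain days: 1991-2005 (n = 15) vs. 1962-1990 (n = 29).
t = pooled_t(21.60, 4.19, 15, 17.28, 4.60, 29)
# |t| ≈ 3.04 with 42 degrees of freedom, matching the reported t1,42 = -3.04
# (the sign simply depends on which period is subtracted from which).
```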
This implies that the timing of breeding in Mauritius kestrels should have become progressively later over this time period, but this pattern is not evident in our data (figure 3). This seems to be because there is no evidence of an increase in the frequency of spring rainfall within the time period (1990s onwards) covered by our data on the timing of breeding. The kestrel population is currently experiencing significantly more frequent spring rainfall than the study area experienced prior to the population being re-established. We suggest, therefore, that climate change has produced contemporary rainfall conditions in Mauritius that result in relatively late breeding and, consequently, an increased risk of breeding birds being exposed to adverse rainfall conditions during nesting. Our results provide an interesting contrast to results emerging from studies on northern temperate bird populations. These studies have mainly focused on the impacts of temperature-related changes, with several species showing a significant advance in their timing of breeding in response to increasing spring temperatures [4,10,13,14,38–40], although some species, including the sparrowhawk (Accipiter nisus), are yet to show any such response to these changes [11,40]. Interest has focused on the extent to which these changes in timing are adaptive or result in a mismatch between the timing of breeding and food supplies, thereby reducing fitness [10,13]. Our results raise the possibility that different mechanisms might operate in sub-tropical/tropical populations in which the timing of breeding is influenced by rainfall conditions rather than temperature. Furthermore, our results suggest that the fitness cost of breeding delays (reduced fledgeling production) can be explained by increased risks associated with rainfall later in the breeding season rather than by a mismatch between timing and the food supply.
There is growing evidence of systematic changes in rainfall conditions in tropical regions [41–43], and it is well recognized that tropical ecosystems are hotspots of biodiversity. These general patterns and our results suggest that it is important to explore the ecological impact of climate change in wild tropical populations, and that it would be unwise to assume that these populations respond in a way that is comparable with temperate populations, which currently represent the majority of ‘model’ systems for exploring the ecology of climate change. The Mauritius kestrel recovery programme has been sponsored by The National Parks and Conservation Service, Government of Mauritius, The Peregrine Fund, The Mauritian Wildlife Foundation, and the Durrell Wildlife Conservation Trust. This research was supported by a Dorothy Hodgkin Postgraduate Award. - Received January 31, 2011. - Accepted February 18, 2011. - This journal is © 2011 The Royal Society
What determines large-scale patterns of species richness remains one of the most controversial issues in ecology. Using the distribution maps of 11 405 woody species in China, we compared the effects of habitat heterogeneity, human activities and different aspects of climate, particularly environmental energy, water–energy dynamics and winter frost, and explored how biogeographic affinities (tropical versus temperate) influence richness–climate relationships. We found that the species richness of trees, shrubs, lianas and all woody plants strongly correlated with each other, and more strongly correlated with the species richness of tropical affinity than with that of temperate affinity. The mean temperature of the coldest quarter was the strongest predictor of species richness, and its explanatory power for species richness was significantly higher for tropical affinity than for temperate affinity. These results suggest that the patterns of woody species richness mainly result from the increasing intensity of frost filtering for tropical species from the equator/lowlands towards the poles/highlands, and hence support the freezing-tolerance hypothesis. A model based on these results was developed, which explained 76–85% of species richness variation in China, and reasonably predicted the species richness of woody plants in North America and the Northern Hemisphere. The mechanism underlying the large-scale patterns of species richness is one of the most controversial issues in ecology. In the past two decades, with the increasing availability of large-scale range maps of animals, considerable progress has been made on the continental and global patterns in species richness of mammals [2,3], birds [4–6] and amphibians. Although vascular plants are one of the most important components of terrestrial ecosystems, continental and global patterns of plant richness have been poorly investigated, largely owing to the lack of precise large-scale range maps.
To understand the mechanisms underlying the large-scale patterns of species richness, many hypotheses focusing on different aspects of contemporary climate have been proposed. For example, two energy hypotheses state that species richness is primarily determined by energy availability, but focus on different energy variables (i.e. thermal versus chemical energy): (i) the species richness–productivity hypothesis, where energy is usually measured by net primary productivity (NPP) or annual evapotranspiration (AET) [11–13], and (ii) the ambient energy hypothesis, where energy is measured by mean annual temperature or potential evapotranspiration (PET) [2,14]. By contrast, the water–energy dynamics hypothesis proposes that species richness is determined by the combined effects of available water (measured by rainfall or water deficit (WD) in linear form) and environmental energy (measured by minimum monthly or annual PET in parabolic form) [16,17]. Two global models based on this hypothesis have been proposed: the Interim General Model (including IGM-1 and IGM-2) [15,17] and Francis & Currie's model (F&C's model for short). In addition, other studies have indicated that climatic seasonality influences species richness patterns by altering the allocation of energy use of individuals or the length of the growing season for plants. Alternatively, according to the freezing-tolerance hypothesis (or tropical conservatism hypothesis), species richness is primarily determined by winter coldness because most clades evolved in tropical-like climates and hence could hardly disperse into cold, temperate regions owing to their niche conservatism [20–22]. This hypothesis integrates the effects of contemporary climate with evolutionary history, which is one of the biggest challenges facing ecologists. Given this hypothesis, we can predict that winter coldness should affect the richness of the species with tropical affinities more strongly than those with temperate affinities.
However, this prediction remains poorly tested. Another set of hypotheses predicts that habitat heterogeneity determines the patterns of species richness by its influence on species turnover and/or species diversification rates. Habitat heterogeneity in a region is generally represented by topographic relief (e.g. altitude range) and local climatic heterogeneity (i.e. the ranges of mean annual temperature (MAT) and mean annual precipitation (MAP)). Recently, the rapidly growing intensity of human activities has become a potential driver of the large-scale patterns of species richness [26,27], and is increasingly attracting the attention of ecologists. In the Northern Hemisphere, China harbours a much richer flora and also a steeper species richness gradient than North America and Europe, which makes China highly suitable for testing the hypotheses explaining the large-scale patterns of species richness. Here, using the distribution maps of woody plants in China [28,29], we: (i) explored the geographical patterns in species richness of all woody plants, trees, shrubs and woody lianas and their concordance, and identified the primary determinant for the richness patterns by comparing the effects of factors representing different hypotheses; (ii) evaluated the effects of biogeographic affinities on species richness patterns and their relationships with environmental factors; and (iii) developed models and used them to predict the species richness patterns of woody plants in the Northern Hemisphere. 2. Data and methods (a) Species richness of woody plants The species distribution maps were from the Database of China's Woody Plants (http://www.ecology.pku.edu.cn/plants/woody/index.asp) [28,29], which contains 11 405 native woody species, including 3165 trees, 7205 shrubs and 1035 lianas (see the electronic supplementary material, table S1). Exotic species were excluded from the database.
The taxonomy of this database was updated following the recently published Flora of China (http://www.efloras.org/) and Species2000 (Checklist 2008, http://www.sp2000.org/), where the taxonomy is current and comparable with that used in other regions. The species distributions in the database were compiled from all national-level floras published before 2008, including Flora Reipublicae Popularis Sinicae (126 issues of 80 volumes), Flora of China and Higher Plants of China (10 volumes), more than 120 volumes of provincial floras, and a great number of local floras and inventory reports across the country. To improve the quality of species range maps in the database, 21 experts from different regions in China were invited to check and supplement the species distributions in every region. The database provides the species distribution maps at two spatial resolutions: counties with a median area of 2081 km2 (skewness = 9.93) and grids of 50 × 50 km. To eliminate the influence of area on the estimation of species richness, the maps based on equal-area grids were used. As the grids located on the borders or along coasts are usually incomplete, we excluded those with land area smaller than 1250 km2. A total of 3794 grids were finally used in our analyses. Species richness was estimated for four species groups: all woody plants, trees, shrubs and lianas. As spatial scale can potentially influence the relationships between species richness and environmental factors, we repeated all the analyses using grids of 100 × 100 km, and found that the results at the two spatial scales were consistent. Therefore, we reported only the results for grids of 50 × 50 km. To evaluate the effects of long-term evolution, we categorized the species within each species group into three biogeographic affinities.
Based on the evolution of China's flora and its relationship with the floras of other major biogeographic regions, the regions where the genera and their families are believed to have diversified, and also the global distributions of the genera, Wu et al. divided the genera of China's vascular plants into three major biogeographic affinities: tropical, temperate and cosmopolitan. Following Harrison & Grace, we defined the biogeographic affinity of a species as that of its genus (Harrison & Grace used family), and finally recognized 5682 (49.8%) tropical species, 4895 (42.9%) temperate species and 666 (5.8%) cosmopolitan species (electronic supplementary material, table S1). Then the species richness patterns of the three biogeographic affinities were estimated for all woody plants and the three lifeforms, respectively, and were compared with the overall species richness of each group. The influence of biogeographic affinities on the relationships between species richness and climatic factors was investigated. In our analyses, we focused on the comparisons between the richness patterns of tropical and temperate species. (b) Environmental factors Climatic data with a resolution of 30 arc-seconds (ca 1 km at the equator) were obtained from the WorldClim website. The database includes mean monthly temperature (MMT) and MAT (in °C) and mean monthly precipitation (MMP) and MAP (in mm), the mean temperature of the coldest quarter (MTCQ, in °C) and the mean temperature of the warmest quarter (MTWQ, in °C), the precipitation of the driest quarter (PDQ, in mm), the annual range of temperature (ART, in °C) and the seasonality of temperature (TSN, defined as the standard deviation of MMT) and precipitation (PSN, defined as the coefficient of variation of MMP). Using MMT and MMP, the following variables were calculated: monthly/annual PET (mm) and AET (mm; calculated using the method of ), moisture index (Im), WD (mm), warmth index (WI, °C) and annual rainfall (RAIN, mm).
PET is widely used as a measure of ambient (or thermal) energy. We used both annual and minimum monthly PET (PET and PETmin, respectively) to evaluate the water–energy dynamics hypothesis [15–17]. AET reflects the amount of water that plants can actually use, and is usually used as a surrogate of NPP. Im represents the environmental humidity, whereas WD represents the aridity and is defined as the difference between PET and AET. WI has been widely used to determine the distributions of species and vegetation in eastern Asia, and is defined as WI = Σ(MMT − 5), summed over the months with MMT > 5°C, where MMT is mean monthly temperature. RAIN is the sum of the MMP when MMT > 0°C [15,19]. All climatic variables were grouped into three categories: (i) environmental energy, including MAT, MTCQ, MTWQ, WI, PET and PETmin; (ii) water availability, including MAP, PDQ, RAIN, Im, AET and WD; and (iii) climatic seasonality, including ART, TSN and PSN. The value of a grid for each variable was estimated by averaging all cells in that grid. For comparison with previous studies [12,13], we included both MAP and RAIN in our analysis. Three variables were used to estimate habitat heterogeneity: the ranges of altitude (TOPO), MAT (RMAT) and MAP (RMAP) within grids. Altitudinal range was calculated as the difference between the maximum and minimum elevations of a grid using a GTOPO30 digital elevation model, and was used to represent topographic relief. RMAT (or RMAP) was calculated as the difference between the maximum and minimum MAT (or MAP) in a grid, and was used to represent the heterogeneity of climatic conditions. For comparison with previous studies [15,17,19,37], these variables were log-transformed. Finally, human population density (HPD), gross domestic product (GDP) and area of cropland per grid (CROP) were used to represent human activities.
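The two temperature-thresholded variables above can be computed directly from monthly series. This sketch assumes the conventional Kira form of the warmth index (summing monthly excesses over 5°C) and uses invented monthly values for a single grid cell:

```python
def warmth_index(mmt):
    """Warmth index (WI): sum of (MMT - 5) over months warmer than 5 degrees C."""
    return sum(t - 5 for t in mmt if t > 5)

def annual_rain(mmt, mmp):
    """RAIN: precipitation summed over months with mean temperature above 0 degrees C."""
    return sum(p for t, p in zip(mmt, mmp) if t > 0)

# Hypothetical monthly means for one grid cell (Jan..Dec).
mmt = [-3, -1, 4, 10, 16, 21, 24, 23, 18, 12, 5, -1]   # degrees C
mmp = [20, 25, 40, 60, 90, 150, 180, 170, 110, 60, 30, 20]  # mm

wi = warmth_index(mmt)        # months above 5 C: Apr-Oct -> WI = 89
rain = annual_rain(mmt, mmp)  # months above 0 C: Mar-Nov -> RAIN = 890 mm
```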
The average HPD and GDP of 2003–2005 in 2408 counties of China were from the China Statistical Yearbook for Regional Economy (2003–2005), and were interpolated into raster files of 5 × 5 km, which were used to calculate the mean HPD and GDP in each grid. CROP was extracted from the 1 : 1 000 000 vegetation atlas of China. The statistics of the above variables for China and the Northern Hemisphere, and the correlation coefficients of the variables against each other are presented in the electronic supplementary material, appendix S1. Most climatic variables in China cover ca three-quarters of their ranges in the entire Northern Hemisphere, suggesting that China is a suitable representative of the Northern Hemisphere in terms of climate. (c) Data analysis and model development Correlation analyses were conducted to evaluate the concordance between the species richness patterns of all woody plants and the three lifeforms, and also between the overall species richness and the richness of species with tropical, temperate and cosmopolitan affinities within each species group. Bivariate regressions were used to evaluate the explanatory power of each predictor for the species richness of the four species groups. Additionally, following the water–energy dynamics hypothesis [15–17,19], we also evaluated the explanatory power of the water–energy dynamics functions combining optimal energy with linear water variables (i.e. PETmin − PETmin2 + RAIN and PET − PET2 − WD) using multiple regressions. Because species richness data typically follow a Poisson distribution, generalized linear models (GLMs) with Poisson residuals were used for all regression analyses. The coefficients of determination (r2) of the models were estimated as: r2 = 100 × (1 − residual variation/null variation). For multiple regressions, adjusted r2 was used. We conducted partial regressions to compare the effects of climate with habitat heterogeneity and human activity.
By partial regressions, the total variation of species richness was partitioned into: (i) independent components; (ii) covarying component; and (iii) unexplained variation. Then, we developed combined models for the species richness of all woody plants and the three lifeforms using the following methods: (i) the best individual predictor for the species richness of a species group was kept in its model; (ii) to avoid the multi-collinearity between the predictors of the same environmental category, only one variable from each category was allowed to enter the model. Because the variables of human activities were not significant for most species groups (see §3), they were not included in the models; and (iii) all the possible combinations of predictors following the above criteria were examined, and the model with the lowest Akaike information criterion (AIC) was selected for each species group. Adjusted r2 was given for the selected models. Variance inflation factors (VIFs) were also calculated for all models to evaluate the significance of multi-collinearity. Generally, if VIF is greater than five, the multi-collinearity is considered significant. We tested the combined models using the species richness of trees in North America. First, the tree species richness in grids of 50 × 50 km was estimated using an Atlas of United States Trees, and predicted by our model using the environmental data of North America. It is noteworthy that the prediction method of GLMs with Poisson residuals differs from that of ordinary least-squares linear regressions. Second, the predictions were plotted against the observations. The distances between the predictions and the 1 : 1 line (a line with intercept = 0, slope = 1) were calculated, which represented the errors in the predictions and were used to evaluate the model performance: the distances of a better model should have a symmetric frequency distribution with a mean closer to zero and a smaller standard deviation.
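The VIF screen described above (multi-collinearity flagged when VIF > 5) can be computed by regressing each predictor on the others, since VIF_j = 1/(1 − R²_j). A sketch with synthetic predictors, not the study's actual climate variables:

```python
import numpy as np

def vifs(X):
    """Variance inflation factor for each column of the predictor matrix X."""
    X = np.asarray(X, float)
    n, k = X.shape
    out = []
    for j in range(k):
        # Regress column j on all the other columns (plus an intercept).
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        tss = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        r2 = 1 - np.sum(resid**2) / tss
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = rng.normal(size=500)            # independent of a -> VIF near 1
c = a + 0.1 * rng.normal(size=500)  # nearly collinear with a -> VIF well above 5
v = vifs(np.column_stack([a, b, c]))
```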
Finally, using our model, we predicted the species richness patterns of all woody plants and the three lifeforms in the Northern Hemisphere. For comparison, IGM2 and F&C's model were re-fitted using China's data (referred to below as the water–energy dynamics model and the refitted F&C's model, respectively) and were used to predict the species richness of North American trees.

Preliminary analyses indicated significant spatial autocorrelation in the raw species richness data (electronic supplementary material, figure S1), which can inflate type I errors and hence the significance levels of models and correlations because of the dependency among samples. Therefore, in our analyses, we used a bootstrap method to test the significance of all correlation coefficients and models. To do this, we first randomly re-sampled ca 8 per cent of all grids for each species richness variable to perform the correlation (or GLM) analysis, and repeated this 1000 times. The correlation (or GLM) was considered significant only if more than 95 per cent of the repeats were significant. All statistical analyses were carried out using R (http://www.r-project.org/).

(a) Geographical patterns of species richness

Species richness of all woody plants ranged from 2 to 2837, with an average of 358 species per grid (electronic supplementary material, table S1). The average species richness per grid for trees, shrubs and lianas was 104 (1–1028), 224 (2–1528) and 38 (1–327), respectively. The species richness of all four species groups was highly right-skewed (skewness > 1.5; see the electronic supplementary material, table S1 and figure S2). The patterns in the species richness of all woody plants, the three lifeforms and the three biogeographic affinities all matched the topographic structure of China: richness was high in mountains, but low in deserts, plains and basins (figure 1).
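The bootstrap significance test described in the data-analysis section (re-sample ca 8 per cent of grids, repeat 1000 times, require more than 95 per cent of repeats to be significant) can be sketched as follows. The data are synthetic and the use of a Pearson correlation is an assumption for the demonstration, not the authors' exact procedure.

```python
# Illustrative bootstrap significance test under spatial pseudo-replication:
# re-sample ~8% of grids, repeat 1000 times, and call the correlation
# significant only if >95% of the repeats reach p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_grids = 2408                                      # number of grid cells in China
x = rng.normal(size=n_grids)                        # e.g. a climate variable
y = 0.8 * x + rng.normal(scale=0.5, size=n_grids)   # strongly related richness proxy

n_sub, n_reps = int(0.08 * n_grids), 1000
hits = 0
for _ in range(n_reps):
    idx = rng.choice(n_grids, n_sub, replace=False)
    _, p = stats.pearsonr(x[idx], y[idx])
    hits += p < 0.05

significant = hits / n_reps > 0.95
print(f"{hits}/{n_reps} repeats significant -> correlation significant: {significant}")
```

Sub-sampling to a small fraction of grids reduces the effective sample size per test, so only relationships robust to the spatial dependency survive the 95 per cent criterion.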
The species richness patterns of all woody plants and the three lifeforms were highly concordant with each other (r > 0.93, p < 0.05; table 1), suggesting that they are plausibly determined by the same factors. For the four species groups, the correlation coefficients between the overall and tropical species richness were 0.91–0.94, consistently higher than those between the overall and temperate species richness (0.84–0.88; table 1). Moreover, within each group, the richness patterns of temperate and cosmopolitan species were strongly correlated with each other, but both were only moderately correlated with the richness of tropical species (table 1), which suggests that the determinants of tropical species richness may differ from those of temperate/cosmopolitan species richness.

(b) Relationships between species richness and environmental variables

Among all variables, MTCQ was consistently the strongest single predictor of species richness for all woody plants and the three lifeforms; its r2 was 60–73%, which was 10–15% higher than that of MAT and 28–35% higher than that of PET (table 2). Additionally, the r2 of MTCQ was also 10–30% higher than those of the quadratic functions of PET and PETmin and the combined water–energy dynamics functions, despite their larger numbers of predictors (table 2). Annual rainfall was the best single water-related predictor for the species richness of all woody plants and shrubs, while AET was the best for trees and lianas. The r2 of annual rainfall was consistently higher than that of MAP, but the difference was small (table 2). ART was the strongest predictor among the variables of climatic seasonality, while the range of annual precipitation was the strongest among the variables of habitat heterogeneity. By contrast, the variables of human activities were not significant for the species richness of most species groups (table 2).
Partial regressions indicated that climate independently accounted for 35–48% of richness variation after the effects of habitat heterogeneity and human activities were controlled. By contrast, habitat heterogeneity and human activities independently explained much less (5–7%) when climatic effects were controlled (figure 2). The r2 of the predictors differed substantially between biogeographic affinities (table 2). For all woody plants, trees and lianas, MTCQ was consistently the strongest predictor of the richness of tropical species, whereas RMAP, annual rainfall and precipitation seasonality, respectively, were the strongest predictors of the richness of temperate species. For shrubs, ART and MTCQ were the strongest predictors for tropical species, and the difference in the r2 of the two variables was only 0.5 per cent. By contrast, RMAP was the strongest predictor of the richness of temperate shrub species.

(c) Models for woody plant richness

The combined GLMs for the species richness of all woody plants and the three lifeforms selected consistent climatic variables (table 3): MTCQ, WD and temperature seasonality, representing the effects of environmental energy, water availability and climatic seasonality, respectively. As an indicator of habitat heterogeneity, RMAT was selected in the models for all woody plants, trees and lianas, while elevational range (TOPO) was selected in the model for shrubs. As RMAT and TOPO were strongly correlated (r = 0.98, p < 0.001; see the electronic supplementary material, appendix S1), replacing TOPO with RMAT in the model for shrubs reduced the r2 by only 0.1 per cent (table 3). We therefore chose the model with RMAT for shrubs, giving a consistent model form for all woody plants and the three lifeforms:

ln(S) = a + b1 MTCQ + b2 WD + b3 TS + b4 RMAT,

where S is species richness, TS is temperature seasonality, and a and b1–b4 are regression coefficients.
The VIFs for the four variables in all models were smaller than five (electronic supplementary material, table S2), indicating insignificant multi-collinearity in the models. The models predicted the species richness of China's woody plants reasonably well: r2 was 76–85% for all woody plants and the three lifeforms, which was 14–23% higher than those of the water–energy dynamics model and the refitted F&C's model (table 3; electronic supplementary material, figure S3 and table S3). The Moran's I of the richness patterns of the four species groups was dramatically reduced by our models (electronic supplementary material, figure S1). In particular, the first-order Moran's I (distance = 200 km) declined from 0.71–0.76 for species richness to 0.18–0.21 for the model residuals. At larger distances, the residual Moran's I was negligible and consistently lower than that of the other two models. Our model therefore accounted for the major spatial structures in the richness patterns.

Our model successfully predicted the species richness patterns of trees in North America (figure 3). The predictions averaged 36 species per grid (range: 1–238), close to the observations (average: 32; range: 1–145) [12,28,37]. The frequency distribution of the distances between the predictions and the 1 : 1 line was roughly symmetric (skewness = 0.88) and centred at five (s.d. = 24.2; figure 3), suggesting an average error of five species in the predictions. A special case is the Florida peninsula, where our model greatly overestimated species richness (figure 3c). In contrast, the water–energy dynamics model and the refitted F&C's model strongly overestimated the tree species richness in North America (electronic supplementary material, figure S4): the averages of their predictions were 52 (range: 0–1334) and 43 (range: 2–371) species per grid, respectively.
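Moran's I, used above to gauge how much spatial autocorrelation remains in the model residuals, can be computed with simple distance-band weighting. This sketch (synthetic coordinates, binary weights within a 200 km band) is an illustration of the statistic, not the authors' implementation.

```python
# Illustrative Moran's I with binary distance-band weights: a spatially
# smooth field should score much higher than spatially random noise.
import numpy as np

def morans_i(values, coords, band=200.0):
    """Moran's I using w_ij = 1 for 0 < distance <= band, else 0."""
    x = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= band)).astype(float)
    num = (w * np.outer(x, x)).sum()
    return (len(x) / w.sum()) * num / (x ** 2).sum()

rng = np.random.default_rng(3)
coords = rng.uniform(0, 1000, (200, 2))   # grid centres (km)
smooth = np.sin(coords[:, 0] / 300.0)     # spatially structured signal
noise = rng.normal(size=200)              # spatially random signal
print(morans_i(smooth, coords), morans_i(noise, coords))
```

A drop in Moran's I from richness values to model residuals, as reported in the text, indicates that the model has absorbed most of the spatial structure rather than leaving it in the errors.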
The frequency distributions of the distances between the predictions of these two models and the 1 : 1 line were highly right-skewed (skewness = 9.83 and 2.74, respectively), with averages of 20 (s.d. = 43.3) and 38 (s.d. = 46.9), respectively. Across the continental Northern Hemisphere, the species richness patterns of the four species groups predicted by our model were all consistent with the observed or predicted richness patterns of trees and vascular plants in previous studies [8,37,42]: species richness decreased from the equator to the North Pole, and was highest in Central America, Southeast Asia and tropical Africa, but lowest in the Sahara, central Asia and the boreal regions (figure 4).

(a) Comparison among climate, habitat heterogeneity and human activities

Strong correlations between species richness and climate have been widely observed [7–9,13,28]. In our analyses, partial regressions indicate that the independent explanatory power of habitat heterogeneity and human activities is only ca one-tenth that of climate, suggesting that climate is potentially the primary driver of large-scale patterns of species richness. In particular, our analysis showed that less than 4 per cent of the variation in the species richness of woody plants could be accounted for by individual variables of human activities, and only 5–7% by multiple models with all the human-activity variables as predictors. Similar results have been observed in Canada, where butterfly species richness is weakly correlated with human activities. Such low explanatory power of human activities may be due to the influence of spatial scale: it has been suggested that the effects of human activities on species richness decrease as the study area expands.
Although the multiple regressions involving all the variables of habitat heterogeneity provided considerable explanatory power (46–54%) for the spatial variation in woody species richness, the two aspects of habitat heterogeneity, i.e. topographic relief and climatic heterogeneity, contributed differently. Partial regressions indicate that local climatic heterogeneity (i.e. RMAP and RMAT) independently accounted for 31–35% of richness variation once the effects of elevational range were controlled, whereas elevational range independently explained less than 1 per cent of richness variation once the effects of local climatic heterogeneity were controlled. This suggests that local climatic heterogeneity has stronger explanatory power for species richness than topographic relief. Similarly, previous studies indicate that the bird richness patterns in the mountains of the western Americas primarily reflect the effects of the altitudinal variation of climate.

(b) Freezing-tolerance hypothesis

Previous studies have indicated that energy, rather than water or climatic variability, is the most important climatic factor determining large-scale patterns of species richness [2,11–14]. The hypotheses proposed to explain the mechanisms underlying the energy effects focus on different aspects of environmental energy, including ambient energy, chemical energy and winter coldness [2,11,20,21]. For example, Currie found that ambient energy, rather than chemical energy, was the primary determinant of the diversity patterns of vertebrates in North America, supporting the ambient energy hypothesis; however, he did not incorporate variables of winter coldness. Hawkins et al. compared the effects of ambient energy, chemical energy and winter coldness on the global pattern of bird diversity, and found that chemical energy had stronger effects than the other two, which supported the species richness–productivity hypothesis.
In contrast to previous studies, our analyses indicate that winter coldness (represented by MTCQ) accounted for much more variation in species richness than any other individual energy variable. In addition, the effects of winter coldness are stronger for species with tropical affinities than for those with temperate affinities (table 2). More interestingly, although the two water–energy dynamics functions each have three variables, their effects on species richness are significantly lower than those of MTCQ for most species groups. One of the energy variables used in these combined functions, PETmin, is strongly correlated with MTCQ in the regions where both variables are above zero (the Northern Hemisphere: r = 0.85, p < 0.001; China: r = 0.93, p < 0.001), but is constantly zero in the regions where MTCQ is below zero (electronic supplementary material, figure S5). Specifically, PETmin is zero in 73 per cent of the terrestrial land of China and 82.4 per cent of that of North America. Additionally, the two combined functions both suffer strongly from multi-collinearity (electronic supplementary material, appendix S1 and table S2) caused by the strong correlations between water and energy variables. These issues may have reduced the performance of the water–energy dynamics functions in explaining species richness patterns. Therefore, our results strongly support the recently formalized freezing-tolerance hypothesis (or tropical conservatism hypothesis) [20–22]. The theoretical framework of this hypothesis combines the effects of winter coldness with the niche conservatism of species, and emphasizes the differences in tolerance among species that have evolved in different climates [20–22].
Our results indicate that the overall species richness is more strongly associated with the richness of tropical than of temperate species for all woody plants and the different lifeforms, which suggests that the latitudinal gradient of species richness is mainly the result of the rapid decrease in the richness of species with tropical affinities. This finding was confirmed by further comparisons between the patterns of overall, tropical and temperate species richness (figure 5). For all woody plants, as latitude increases and MTCQ decreases, the richness of species with tropical affinity decreases much faster than that of temperate-affinity species. Specifically, the overall and tropical species richness both decrease dramatically towards the north, while the richness of temperate species is highest at latitudes from 25° N to 30° N and decreases towards both the south and the north. Additionally, the proportion of tropical-affinity species among all species rapidly declines with decreasing MTCQ, while the proportion of temperate-affinity species rapidly increases (figure 5c). Generally, there are more tropical than temperate species in grids with MTCQ > 5°C, but more temperate ones in grids with MTCQ < 5°C. Similar patterns are also observed for trees, shrubs and lianas (figure 5). The opposite trends in the proportions of tropical and temperate species reflect their different tolerances of frost [20–22]. Most clades with tropical affinities lack frost-tolerance traits, and hence can be rapidly filtered out of local floras by the increasing winter coldness towards the poles. By contrast, temperate clades are much less sensitive to winter coldness. For example, previous studies have indicated that evergreen broad-leaved trees in tropical and subtropical regions are very sensitive to winter temperature.
Most evergreen broad-leaved trees in tropical rain forests cannot survive temperatures below 0°C, and those in subtropical forests die below −10°C [45,46]. By contrast, most tree species in temperate and boreal forests can tolerate temperatures as low as −45°C to −60°C [45,46]. In summary, our results suggest that the changes in species richness mainly result from winter-coldness filtering of the species with tropical affinities, which evolved in an ancient tropical-like climate and are sensitive to frost. The control of winter coldness over species and vegetation distributions has long been observed in eastern Asia [36,45].

(c) Models for predicting woody plant richness

Our model for the species richness of woody plants has the same number of variables as the previously developed global model, IGM2, and one more variable than F&C's model. However, it has much higher explanatory power than the other two models when they are refitted to China's data. In our model, MTCQ is the most important variable: it represents winter coldness and individually accounts for 60–73% of richness variation (table 2). The other three variables, i.e. WD, temperature seasonality and RMAT, improve the model's r2 by 10–20%. The effects of winter coldness are rooted in the evolutionary history of species [20–22]; our model therefore combines the effects of contemporary climate with long-term evolutionary history. This may explain why, using coefficients developed for China, it can successfully predict the species richness of trees in continental North America and of woody plants across the Northern Hemisphere, despite the large differences in evolutionary history among continents. WD was selected to represent the effects of water availability on species richness, which is consistent with F&C's model for the global pattern of angiosperm family richness.
However, the spatial correlograms showed that some spatial structure at small scales (distance < 200 km) can still be observed in the residuals of the model (electronic supplementary material, figure S1). This structure may reflect the effects of other factors (e.g. soil) not included in this analysis. The overestimation of species richness in the Florida peninsula (figure 3c) may also reflect the effects of other factors: the low species richness of different taxa in the Florida peninsula has been observed for several decades, and may be caused by the peninsula effect. Moreover, testing the model's performance in the Southern Hemisphere in the future would be valuable.

We thank X. Qiao, Y. Liu, X. Zhang, L. Li, Z. Guo, L. Tang, K. Tan, W. Zuo, X. Li and H. Hu for helping with database construction; B. Hawkins, B. Schmid, J.-S. He, P. Lundberg, E. O'Brien and two reviewers for helpful comments, discussion and suggestions; and J. Zhu for assistance in developing the website of the database. Twenty-one botanists checked the plant distribution maps. This study was supported by the National Natural Science Foundation of China (nos 40638039, 90711002, 30721140306 and 40871030).

- Received September 7, 2010.
- Accepted November 17, 2010.
- This Journal is © 2010 The Royal Society
Inside one of the site's temporary sheds, a splint helps a waterlogged plank keep its shape as archeologists and students dismantle one of Yenikapi's 32 shipwrecks found to date.

Outside, the brilliant June sunshine beats down mercilessly on Turkey’s largest city. But the shed is kept cool by a fine mist sprayed from suspended hoses; the mist keeps the exposed wood moist and prevents it from shrinking. Ever so gently, the five women and four men slide a three-meter (10'), L-shaped frame beneath a waterlogged plank too fragile to be lifted directly. One of them gives the go-ahead and they raise the plank in unison, then place it into a wooden case, where it rests on a pine support specially designed to ensure that the plank keeps its shape. Later, the case containing the plank and the support will be lowered into a concrete-lined pool of slowly circulating fresh water. Eventually, after conservation and reassembly, the ancient ship, one of 32 uncovered so far in the run-down Istanbul neighborhood of Yenikapi, will likely go on display in a new museum dedicated to what many experts are calling the greatest nautical archeological site ever discovered: a vast excavation covering more than 58,000 square meters (nearly 625,000 sq ft), the equivalent of 10 city blocks, on what was once the edge of medieval Constantinople.

Standing among piers of what was for nearly 900 years a hub of international shipping, Ufuk Kocabaş is field director of the nautical archeology team from Istanbul University.

“It’s the most phenomenal ancient harbor in the world, and it’s absolutely revolutionizing our knowledge of ship construction during Byzantine times,” declares Sheila Matthews, who is unearthing and researching eight boats for the Institute of Nautical Archaeology at Texas A&M University.
“There is no other place that has so many shipwrecks in context with one another.” From brick-transport vessels to round-hulled cargo boats 19 meters (60') long and small lighters used to off-load larger ships, Yenikapi is yielding up the full gamut of ships that once busied one of the most active harbors of the middle ages. Among the site’s astonishing prizes are the first Byzantine naval craft ever brought to light. Lost for more than 800 years, Yenikapi’s fourth-century port dates back to Theodosius I, the last emperor to rule over both the eastern and western portions of a unified Roman Empire, and it was active until around 1200. Trading ships converged here from the Mediterranean, the Danube River and the Black Sea. Spices, ivory and jewels came from India; silks from China; carpets, pearls, silk and woolen weavings from Persia; grains and cotton from Egypt; and gold, silver, fur, honey, beeswax and caviar from Russia. Marble, timber and brick were imported to build and furnish the booming Byzantine capital, while textiles, pottery, wine, fish, oil lamps and metal items were exported to finance the growth. Pilgrims passed through on their way to Makkah and Jerusalem. Transported in cargo ships, pirate captives and enemy prisoners from Africa, Central Europe and Russia arrived for sale in the lucrative local slave market. Among Yenikapi’s artifacts discovered to date are plates from the Aegean, oil lamps from the Balkans and amphorae from North Africa, along with a profusion of glass, metal, ivory and leather—all evocative remnants of a far-flung mercantile empire that had Constantinople at its center. The port was uncovered in November 2004 during excavations for a 78-kilometer (48-mi) rail and metro network that will ultimately link Europe and Asia via a tunnel under the Bosporus. Constructed of submersible sections, the tunnel will run beneath 56 meters (180') of water and 4.5 meters (15') of seabed, making it the deepest tunnel in the world.
The need is acute: the two bridges currently crossing the Bosporus are jammed, and the existing subway, consisting of one line and six stations, is inadequate for a city of more than 12 million inhabitants.

The remains of the port were discovered in 2004 when excavation began for the $4-billion Marmaray urban transit system’s hub, which was then redesigned to accommodate the 10-square-block dig site. The ancient port today lies inland by about a kilometer—one of the reasons it lay undiscovered for so long.

Amid this burgeoning megalopolis, Yenikapi is slated to become the biggest transportation hub in the entire country. Metro, light rail and passenger trains will converge here in a sprawling development of shopping malls, office towers and residential complexes rising next to an archeological park that contains the remains of a fifth- or sixth-century lighthouse and a 12th- or 13th-century Byzantine church. But until then, the site is a hive of activity as more than 800 archeologists, engineers and laborers in bright orange vests race to finish excavations. Despite pressure from the transit authorities to wrap up the dig, however, archeologists refuse to set a deadline for completion of their work. “Every construction site, be it for a small building or a multi-billion-dollar megaproject like this one, is a window on the past that is opened only briefly,” explained Ismail Karamut, the head of the Istanbul Archaeological Museum, to Archaeology magazine. “A window of this size may not be open in Istanbul for many decades to come.” According to Ufuk Kocabaş, the archeologist directing the Istanbul University team, the excavations should be finished by early 2010. Documentation, conservation and reconstruction of the ships will then continue for many years more, he predicts.
A developing country like Turkey deserves a great deal of credit for putting archeology ahead of the urgently needed transit project and sacrificing millions of dollars in delays, argues Cemal Pulak, a Turkish–American professor heading up the Texas A&M team. “Colleagues visiting from Europe and the US are amazed,” he remarks. “They tell me that in their countries they are handed a deadline and told to simply do the best they can.”

Above: Near the excavated quays is a stone wall that Kocabaş and others believe was part of the earliest city wall laid out by Constantinople’s founder, the Roman emperor Constantine I, in the fourth century, more than 50 years before Theodosius I constructed the harbor. Below: Of the 32 shipwrecks found so far, 17 have been excavated, and some are as much as 40-percent intact.

Despite his gratitude that Turkish cultural authorities are fighting hard to preserve the site, Pulak wishes that the operation—large as it is—had been expanded beyond the central part of the harbor to encompass more of the quays, granaries and storage buildings that he suspects lined its perimeter. Such an extension of the dig “would have required a lot of convincing and maneuvering,” he admits, “but it would have helped enormously in understanding the huge harbor and its impact on the economy and life of Constantinople.”

The Yenikapi dig has drawn academics from around the world. In addition to the team from Texas A&M, scholars from Cornell University, Istanbul University, Hacettepe University in Ankara and Tel Aviv University in Israel are contributing to the research and analysis of the finds. Turkish archeologists are consulting with ship museums in Denmark, Sweden, Germany, Holland, Spain and the UK about the creation of a new local museum. Next October, Istanbul will host an international symposium on nautical archeology and ancient ships.
By that time, the Yenikapi excavation should be nearing completion, and the city’s new transit scheme will be moving into high gear. All told, the $4-billion Marmaray (a name that joins the Marmara Sea to ray, the Turkish word for “rail”) train and tunnel project and the coordinated metro lines will also rebuild 37 stations above ground and three new ones below ground. The network will be capable of transporting 75,000 passengers per hour. Engineers predict that when the system is completed in 2012—two years behind schedule—the percentage of trips by public transport will jump from an abysmal 3.6 percent at present to 27.7 percent, a figure that would put Istanbul at number three in the world in public transport, behind Tokyo (60 percent) and New York City (31 percent). As if juggling the port excavations and the tunnel-transit project were not enough, engineers also have to contend with the near-certainty of a major earthquake from the 1200-kilometer-long (745-mi) North Anatolian Fault, which runs in an east-west direction only a few kilometers south of the city. Since the year 342, a dozen massive tremors have each left more than 10,000 dead. In 1999, two together killed 18,000 people. Seismologists calculate that there’s a 77 percent probability of a quake of 7.0 magnitude or higher occurring in the next 30 years. Engineers insist that the tunnels will be able to withstand a 7.5 quake, bigger than the one that destroyed much of Kobe, Japan in 1995. 
Nonetheless, Geoffrey King, director of the tectonics lab at the Institut de Physique du Globe in Paris, told Wired magazine, “I wouldn’t like to be in such a tunnel during an earthquake.” About 2400 meters (1.5 mi) northeast of Yenikapi, the new metro tunnel runs beneath the city’s principal historic district, the Sultanahmet area, home to Topkapi Palace, where sultans ruled the Ottoman Empire for four centuries, the sixth-century Hagia Sofia museum (formerly a church, then a mosque), the Blue Mosque and other landmarks. Karamut insists that the tunnel will lie deep enough to avoid risk to the ancient sites.

Archeology students mark planks before they are removed for conservation and eventual reassembly and museum display.

Like Rome and Athens, ancient cities whose modern subway construction has been repeatedly delayed to explore buried antiquities, 2800-year-old Istanbul has seen tunneling for the metro (and a parallel dig beneath the Four Seasons Hotel in Sultanahmet) unearth numerous other treasures, including what is believed to have been the fifth-century main doorway of the Imperial Palace. This monumental bronze gate, some six meters (20') tall, was uncovered near the Blue Mosque, along with Byzantine mosaics, frescoes, and portions of a 16-meter (52') street, a sewer system and a hammam, or Turkish bath. So far, these later discoveries have not caused engineers to alter the subway tunnel route, however, and it is uncertain what will happen to the ruins that have recently emerged in Sultanahmet. Meanwhile, the gargantuan dig at Yenikapi continues to disgorge an eclectic mix of the marvelous and the mundane. Apart from the 32 watercraft dating from the seventh to the 11th centuries—including four naval galleys—archeologists have dug up more than 170 gold coins, hundreds of clay amphorae for wine and oil, ivory cosmetics cases, bronze weights and balance scales, finely wrought wooden combs and exquisite porcelain bowls.
They’ve recovered bones of camels, bears, ostriches, elephants and lions—probably imported from Africa for entertainments at the Hippodrome, suggests Kocabaş. Some 15 human skulls retrieved from a dry well may have belonged to executed criminals. Iron anchors have been recovered, objects so highly prized in medieval Byzantium that they are noted in the dowry records of wealthy merchants’ daughters. The oldest find is an 8000-year-old Late Neolithic hut containing stone tools and ceramics—the earliest settlement ever located on the city’s historic peninsula. One particularly mind-boggling find, discovered aboard a ninth-century cargo ship, was a basket of 1200-year-old cherries nestled next to the ship captain’s ceramic kitchen utensils—a cooking grill, hot pot, pitcher and drinking cup—as if waiting for the ancient mariner’s return.

“No, I didn’t taste them,” laughs Kocabaş. “But I did think about planting a few pits to see if they would sprout.” (Kocabaş rejected the notion when he realized both the fruit and the pits had turned to carbon.)

The site’s ships, bones and artifacts (and cherries) were so unusually well preserved, he maintains, because silt from the Lykos River and sand from the Marmara Sea quickly covered over the wrecks. After a fortifying lunch of stuffed grape leaves and meat-filled eggplant at a busy local eatery, Kocabaş and Metin Gökçay, site chief from the Istanbul Archaeological Museum, take me along to explore the first portion of the harbor brought to light. It is also the oldest part of the port, a flashpoint alerting local archeologists to the unique historical significance of a site that had nearly been bulldozed. En route, we pass dozens of laborers pushing wheelbarrows of powdery, pale-brown dirt up wooden or earthen ramps crisscrossing the immense six-meter-deep (20') pit.
Next to a cluster of modern-day shipping containers converted to field offices and conservation labs are hundreds of blue plastic milk crates stacked and loaded with amphorae, pottery fragments and animal bones. In the distance, several long white sheds shelter ships. Beyond tall metal fences enclosing the site stand rows of two-story shops backed by high-rise apartment blocks. Arriving at a quiet, overgrown area on the western fringes of the site, we push aside branches of fig and bamboo to inspect massive limestone blocks. “These were the original quays,” says Kocabaş. “You can see the notched holes hewn out of the rock for tying up the boats.” Next to the quays is a stone wall that Kocabaş, Gökçay and others believe was part of the earliest city wall laid out by Constantinople’s founder, the Roman Emperor Constantine I, in the fourth century, more than 50 years before Theodosius I constructed the harbor. Researchers at the dendrochronological laboratory at Cornell University have confirmed that wooden supports from the 53-meter (170') portion of the wall that has been dug out date from the fourth century, he explains. Even though the wall and quays lay only a meter (39") underground, they remained hidden and forgotten for centuries. Initially, the area was to be part of the train and metro station, but when the ancient remains were found four years ago, they were declared off-limits and plans for the station were changed so as to leave the historic monuments intact.

According to Metin Gökçay of the Istanbul Archaeological Museum, the site has to date yielded more than 16,000 “quality objects.” Each is first cleaned and catalogued on site.

In the broiling heat, a merciful breeze flutters laundry hanging from the tenement windows overlooking the site as we clamber over the wall to survey the remains of tannery pits and a late Byzantine charnel house. “Look there,” directs Gökçay, as he points to a vaulted stone tunnel leading straight to the sea.
“That’s a secret passageway so you could slip out of the harbor undetected.” The archeologist speculates that the tunnel led to a former palace on a hill behind the harbor and was also used in the other direction, to smuggle goods into the city to avoid customs duties. Later, Texas A&M researcher Matthews suggests, more prosaically, that the tunnel was used for sewage or drainage. From here, you can picture how the harbor took shape. A stone breakwater, now gone, led from the quays out into the sea, then curved east to form a barrier protecting the harbor, Gökçay explains. Sediment from the Lykos, which emptied into the port, was also caught by the breakwater. But instead of flowing out to sea, the alluvial soil gradually backed up, silting up the harbor. By the 12th century, the port was so shallow it was only used by small fishing boats. Four centuries later, the once-bustling harbor was a memory. A 16th-century account by Pierre Gilles, a natural historian dispatched by the French king François I to acquire manuscripts in what had become the Ottoman capital of Istanbul, describes the former Byzantine port as a garden spot covered with vegetable plots watered by waterwheels known as norias. Leaving the western wall, we trudge across the kilometer-wide (1100-yd) site to the eastern edge of the port, to the lighthouse that dates to the fifth or sixth century—or rather to the five-meter (16') marble and limestone base of the lighthouse. On the way, we pass the vestiges of stone walls outlining a 12th- or 13th-century church, one of two churches found close to the edge of the filled-in harbor. All around the former lighthouse, earth has been scooped out to reveal its base and the ground beneath it, opening a cross-section of geological strata. Embedded in a lower zone is a thin black band running horizontally a foot or so above what had been the bottom of the ancient harbor. “That’s a tsunami line,” Kocabaş explains.
“It shows that a major earthquake occurred here, probably—based on the objects we dated in the strata—around the middle of the sixth century.” A jumble of potsherds, wood pieces and other artifacts was churned up by the cataclysm, he says, adding that entire camel and horse skeletons lay crushed in the debris. According to geological evidence detected elsewhere, at least one more tsunami, or perhaps only a ragingly destructive tempest, occurred around the year 1000. Judging from the violent way some of the boats appear to have been hurled into one another, Kocabaş concludes that several ships were sunk in that storm. What was no doubt a tragedy at the time, however, has proven a boon to archeologists. Because the waves hit the port so quickly, anchors and cotton ropes sank in place and were quickly preserved beneath silt and sand. “It was an exceptional stroke of good fortune because it showed us for the first time exactly how Byzantine mariners rigged their anchors,” he observes. After Gökçay leaves to return to his site office, Kocabaş leads me to a nearby excavation shed. “You’re in luck,” he announces, opening the flap to reveal a magnificent wreck, a Byzantine galley with most of its original 30-meter (95') length and half its nine-meter (30') width remaining. “Finding longboats like this is extremely rare, and in fact, we just finished opening the surface today. Yesterday, half the ship was covered with sand.” He bends down to point out where the oars had been placed. “This ship had 50 oarsmen,” explains Kocabaş, “so it was incredibly fast and light.” Despite its length, the narrow craft was nonetheless too small to engage in battle, so the archeologist speculates it was probably used to reconnoiter enemy ships. No dromons—Byzantine warships generally twice as long and with as many as 100 oarsmen—have so far been located at Yenikapi, according to Kocabaş. “Just feel how hard and well-preserved the wood is,” he continues, allowing me a brief touch.
That nemesis of nautical archeologists, the rapacious Teredo navalis mollusk, bores holes into wrecks in the open sea, ultimately turning their planks and beams into crumbly sponge. Yet Teredo did little damage at Yenikapi because the fresh-water inflow from the Lykos river kept them away. Apart from the four galleys, archeologists have so far excavated only about 17 of the 32 ships that have been found. Some six ships, each shorter than 11 meters (35'), were used for fishing and moving goods locally. Around 10 boats between 11 and 19 meters long ranged greater distances, trading around the Sea of Marmara and the Black Sea. The larger of these boats also sailed the Mediterranean, bringing grain back from Egypt. The biggest ship that has appeared so far is 40 meters (130') long and dates from the sixth or seventh century. “We nicknamed it Titanic,” quips Kocabaş. Most of the vessels were hewn of oak, chestnut and pine from the Marmara region, and constructed with iron nails and wooden dowels, he says. Galleys were rigged with triangular lateen sails made of cotton, linen and hemp; cargo ships had square sails of similar material. To make the crafts seaworthy and stop leaks, their planks were caulked with a glue-like substance made of pine resin and oakum. None of the longboats and only a few of the longer cargo ships had decks, according to Kocabaş. Back in the shipping container that serves as Gökçay’s office, the pair run me through a computer presentation of some of Yenikapi’s greatest archeological hits. Apart from literally millions of ceramic shards, there are, notes Gökçay, some 16,000 “quality objects,” artifacts that illuminate Byzantine life and the expansive trade that made the harbor a thriving entrepôt for a good part of eight centuries. 
There’s a fourth-century marble statue of Apollo; a Roman copy of an original work by the Greek sculptor Praxiteles; a gold coin bearing the image of Aelia Pulcheria, sister and regent of fifth-century emperor Theodosius II; a seventh-century ceramic oil lamp with a cross; an undated ivory carving of the Virgin Mary; an undated marble statue similar to figures on the Pergamon altar, a Hellenistic masterpiece removed from that ancient Greek city in northwest Anatolia to Berlin in the late 19th century. There are board games, dice, ceramic toy ships, 11th-century ceramic cups decorated with bas-relief images of faces with Mongolian features, perhaps from Central Asia, and an enigmatic lead tablet with Hebrew writing that Kocabaş theorizes was used to cast out evil spirits.

|Top: Recording the team’s finds, Texas A&M graduate student Rebecca Ingram draws a life-size sketch of an oil lamp on a plastic sheet while a colleague photographs another artifact. Above: Ingram and nautical archeologist Sheila Matthews work on planks that are covered in plastic to prevent evaporation, which can crack the wood. Yenikapi, says Matthews, is “revolutionizing our knowledge of ship construction during Byzantine times.”

The delicately fashioned sole of a wooden shoe bears a Greek inscription on the instep that, according to Gökçay, roughly translates: “Wear this shoe in health, lady, and step into your happiness.” Intrigued by the handiwork, the museum archeologist scoured some 20 villages in Central Anatolia to seek out cobblers and carpenters using similar traditional woodworking methods. Among the craftsmen he encountered was an 81-year-old carpenter still turning out wooden forks, spoons, plates—and shoe-soles—employing techniques that have changed little since Byzantine times. None of the modern shoes, however, bore inscriptions. Among the tools unearthed are peculiar drills with iron bits set into wooden cylinders.
Kocabaş explains that a horsehair bow-string was looped around the cylinder and the bow was moved rapidly side to side to turn the iron bit— another woodworking technique that can be seen in Turkey today. Once the documentation, conservation and reconstruction process is well under way, the archeologist plans on fabricating a replica of one of the Byzantine ships. “Building a replica, using saws, axes and other tools similar to the ones the Byzantines used, is the best way to get an authentic, hands-on notion of boat construction,” he says. How to make the ship symmetrical and correct mistakes; how to fit the frame and planks together; how to shape a keel that steadies the craft but doesn’t slow it down; how to seal the hull against leaks—all of these technologies will be revealed, he hopes. Then, the ultimate pay-off will be actually taking the replica out on the water. Visiting the waterfront Viking Ship Museum in Roskilde, Denmark in June 2007, Kocabaş accompanied a curator on a late-afternoon spin aboard a replica of a Viking ship, taking an exhilarating turn rowing beneath the billowing canvas sail. “It was fantastic,” he recalls. “A total dream.” Texas A&M’s Sheila Matthews similarly dreams of piecing together a functioning replica, a spanking new double of one of the waterlogged hulks she confronts daily at the dig site. But first, she says, comes the less glamorous reality. When I meet her under one of the site’s preservation sheds, the red-haired archeologist is ankle-deep in mud, carefully lifting a 120-centimeter (4') plank from a seventh-century cargo ship with the help of a pair of student-assistants. The boat lies alongside a small pond of opaque water that has formed from the mist sprayed by the overhead hoses. “Gently, gently,” Matthews coaxes, as the trio presses a board-and-foam support to the plank to ease it from the muck. 
“If this wood slips into the water, I won’t be the one to fish it out, I can promise you that!” Fortunately, they’ve all had ample practice in this sort of maneuver and shift the plank without incident to a nearby table for cataloguing. Later, seated on wooden steps descending from the shed entrance down to the boat, Matthews, who has been toiling over ships at Yenikapi for the past three years, reflects on why the finds here are so revealing.

|A ceramic shard’s two-tone glaze remains almost entirely intact. Aegean plates, Balkan oil lamps, North African amphorae; glass, metal, ivory and leather—all evoke a widespread, long-lived mercantile empire centered on Constantinople.

“It’s the amazing details,” she observes. “Just from examining the tool marks, we can tell if the planks were fitted first—the style of boatbuilding used in the seventh century—or if the frame came first, then the planks, a technique that didn’t become widespread until around the ninth century.” If the planks are joined by wooden tabs called tenons slotted into mortise notches, the ship was constructed around the seventh century, Matthews explains. If they’re joined by wooden dowels, it was built after the ninth century. “Exotic stuff, no?” she says with a smile and a shrug. “It’s what we nautical archeologists live for.” Even seemingly insignificant minutiae give clues to the extraordinary sophistication of Byzantine shipwrights. Analyzing the dowels used on different categories of vessels, archeobotanist Nili Liphschitz from Tel Aviv University determined that the pegs connecting planks on the cargo ships were hewn from the trunks of trees, whose rigidity kept the hulls from bending. She ascertained that similar dowels on lighter galleys were made from more supple tree branches to impart the flexibility needed to prevent the longer boats from snapping in two. According to Matthews, such principles of ship design were handed down from father to son or from master to apprentice.
“You didn’t find the design written down anywhere,” she explains. “You just built with what you recalled.” As ships are dug up, the painstaking process of documentation and conservation begins. First, each one is meticulously photographed in close-ups which are then arranged in a computer photomontage of 100 to 150 images to depict the boat in its entirety. By zooming in on the photomontage, researchers can even detect cuts left in the wood from the various tools used to build the ship. Next, a three-dimensional computer model of each ship is created using a laser-like instrument called a “total station” to map its contours. Essentially, the device records as many as 10,000 separate points on the boat’s surface and connects the dots to replicate its shape. This technological marvel is so accurate “it can copy the head of an ant,” quips Matthews.

|Top: Among the finds have been baskets of 1200-year-old fruit seeds, olives and even cherries nestled next to the ship’s captain’s ceramic kitchen utensils. Above: This delicately fashioned sole of a wooden shoe bears a Greek inscription on the instep that roughly translates: “Wear this shoe in health, lady, and step into your happiness.”

Once the computerized representation is complete, archeologists trace the vessel in detail on large sheets of clear plastic acetate, dismantle it piece by piece, make further acetate drawings and write exhaustive descriptions of the separate elements, then transport the planks to holding pools. Because the fragile cell walls of the wood are supported by water, the ship timbers cannot be allowed to dry out. Instead, they are immersed in stainless steel tanks of polyethylene glycol (PEG), a wax-like ingredient used in such products as skin creams, lubricants, toothpaste and eye drops.
Over a period of 18 months to two years (for soft tree species like pine) or up to three years for harder varieties such as oak and chestnut, the water inside the cell walls is replaced by PEG, which solidifies and stabilizes the wood. Once the pieces are preserved with PEG, archeologists reassemble them to study how the boat is put together, then disassemble everything for storage. Eventually, some of the planks, frames and entire reassembled ships will be displayed in a museum while preservation continues on other pieces. It’s an ongoing process that is likely to take decades, says Matthews. “There’s a rule of thumb for underwater archeology,” she opines drily. “For every day of excavation, count on months in the lab.”

But instead of waiting years to put the ships on display, she suggests, why not turn the laboratory into a living museum? “You could have big rooms with glass windows and people could watch the researchers at work, examine the design plans on the walls and witness the boats taking shape,” Matthews enthuses. “It would be fabulous.” Wouldn’t the archeologists get distracted, I ask. “You get used to it,” she replies. “Our lab at Bodrum [on Turkey’s Aegean coast] was outside and people would talk to us all the time. Here, visitors wouldn’t get in the way if they were behind glass windows.” Such open labs exist at the Portsmouth (UK) museum dedicated to the 16th-century Tudor warship Mary Rose, she adds, so why not here in Istanbul? So far, local authorities have not decided what ships and artifacts will be in the museum or even where the museum will be located. One proposal is to incorporate some of the nautical relics into exhibition spaces inside the train and metro station complex.

|Among the “millions” of ceramic sherds recovered, says Gökçay, not all are worth cataloging and conserving. Although digging will end in 2010, conservation and study will continue for years afterward.
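The PEG impregnation schedule described above (roughly 18 to 24 months for soft species such as pine, up to three years for harder woods like oak and chestnut) can be captured in a toy lookup. This is only a sketch of the quoted timeline; the species grouping and the lower bound assumed for hard woods are illustrative assumptions, and all names are invented:

```python
# Toy lookup for PEG impregnation times, based on the durations quoted
# in the article: soft woods ~18-24 months, harder woods up to ~36 months.
# The 24-month lower bound for hard woods is an assumption, not a source fact.
SOFT_SPECIES = {"pine"}
HARD_SPECIES = {"oak", "chestnut"}

def peg_months(species):
    """Return a (min, max) estimate of PEG treatment time in months."""
    if species in SOFT_SPECIES:
        return (18, 24)
    if species in HARD_SPECIES:
        return (24, 36)
    raise ValueError(f"no timing data for species: {species}")
```

A conservator could use such a table only for rough scheduling; actual treatment time depends on timber dimensions and condition.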
Kocabaş would prefer the main museum to be situated directly on the water, like the Vasa Museum in Stockholm, Roskilde’s Viking ships and others. “People could see the wrecks in a proper nautical context, rather than a kilometer away from the sea, as at Yenikapi,” he points out. The ideal location, he proposes, would be on the site of the former shipyards along the Golden Horn, closer to the historic district and thus likely to attract more visitors. But first, Kocabaş, Gökçay, Matthews, other archeologists, researchers, engineers and work crews have at least two more winters to contend with before the monster dig winds down. “Most of the time I’m glad not to have a desk job,” Matthews muses as we emerge from the cool shed into the late afternoon sunlight. “But in the winter here, standing in the mud, as the ice-cold water starts rising and your feet and fingers start freezing, snow flies through a hole in the plastic sheeting and you struggle to hold onto your pencil to record readings from the ‘total station’ mapping, the one thing that pops into my masochist’s mind is that I actually chose this job.” And is it all worth it, I ask. “Oh, yes,” she replies, without a moment’s hesitation. ||Richard Covington ([email protected]) writes about culture, history and science for Smithsonian, the International Herald Tribune, the Sunday Times and other publications from Paris. He is also a contributor to What Matters, a book of 18 essays and photojournalism on environmental, health and social issues (Sterling/Barnes & Noble, 2008). ||Lynsey Addario (www.lynseyaddario.com) is a freelance photojournalist based in Istanbul. This year she was awarded the Getty Images Grant for Editorial Photography for her continuing work in Darfur, Sudan.
Acute myocardial infarction (AMI or MI), commonly known as a heart attack, is a disease that occurs when the blood supply to a part of the heart is interrupted, causing death of heart tissue. It is the leading cause of death for both men and women all over the world. The term myocardial infarction is derived from myocardium (the heart muscle) and infarction (tissue death due to oxygen starvation, or ischemia). The phrase "heart attack" sometimes refers to heart problems other than MI, such as unstable angina pectoris and sudden cardiac death. Acute myocardial infarction is usually characterized by varying degrees of chest pain, discomfort, sweating, weakness, nausea, vomiting, and arrhythmia, sometimes causing loss of consciousness and even sudden death. Chest pain is the most common symptom of acute myocardial infarction (MI) and is often described as a sensation of tightness, pressure, or squeezing. Pain radiates most often to the left arm, but may also radiate to the jaw, neck, right arm, back, and epigastrium. The patient may complain of shortness of breath (dyspnea), especially if the decrease in myocardial contractility due to the infarct is sufficient to cause left ventricular failure with pulmonary congestion or even pulmonary edema. Approximately half of all MI patients have experienced warning symptoms like angina pectoris prior to the infarction. Women often experience different symptoms than men. The most common symptoms of MI in women include dyspnea, weakness, and fatigue. Fatigue, sleep disturbances, and dyspnea have been reported as frequently occurring prodromal symptoms which may manifest as long as one month before the actual clinically manifested ischemic event. In women, chest pain may be less predictive of coronary ischemia than in men. Myocardial infarctions vary greatly in severity.
Many cases of myocardial infarction are quickly identified by ambulance staff, emergency room doctors and cardiac specialist nurse practitioners. Other, often smaller myocardial infarctions sometimes are not recognized by victims, never receive medical attention, and can result in heart weakness and other complications. Adequate diagnosis requires a medical history, an electrocardiogram, and blood tests for heart muscle cell damage. Other information, including results of myocardial perfusion tests (see stress tests) and echocardiograms, can also help establish the diagnosis of MI. Electrocardiogram (ECG/EKG) findings suggestive of MI are elevations of the ST segment and changes in the T wave. After a myocardial infarction, changes can often be seen on the ECG called Q waves, representing scarred heart tissue. However, a normal ECG/EKG does not rule out a myocardial infarction. The ST segment elevation distinguishes between:

- STEMI ("ST-Elevation Myocardial Infarction")
- NSTEMI ("Non-ST-Elevation Myocardial Infarction"), diagnosed when cardiac enzymes are elevated.

The leads with abnormalities on the ECG may help identify the location:

Wall affected | Leads | Artery involved | Reciprocal changes
Anterior | V2-V4 | Left coronary artery, left anterior descending (LAD) | II, III, aVF
Anterolateral | I, aVL, V3-V6 | LAD and diagonal branches, circumflex and marginal branches | II, III, aVF
Inferior | II, III, aVF | Right coronary artery (RCA) | I, aVL
Lateral | I, aVL, V5, V6 | Circumflex branch of left coronary artery | II, III, aVF
Posterior | V8, V9 | RCA or circumflex artery | V1-V4 (R greater than S in V1 & V2, ST-segment depression, elevated T wave)

Cardiac markers or cardiac enzymes are proteins from cardiac tissue found in the blood. Until the 1980s, the enzymes SGOT and LDH were used to assess cardiac injury. Then it was found that disproportional elevation of the MB subtype of the enzyme creatine phosphokinase (CPK) was very specific for myocardial injury.
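The lead-localization table above is essentially a lookup from characteristic ECG leads to the affected wall and likely culprit artery. A minimal sketch of that mapping follows; it is purely illustrative of the table's structure, not diagnostic software, and the function and variable names are invented:

```python
# Sketch of the ECG lead-to-infarct-location table above.
# Illustrative only -- not a clinical tool.
ECG_LOCALIZATION = {
    "anterior":      {"leads": ["V2", "V3", "V4"], "artery": "LAD"},
    "anterolateral": {"leads": ["I", "aVL", "V3", "V4", "V5", "V6"],
                      "artery": "LAD/diagonal, circumflex/marginal"},
    "inferior":      {"leads": ["II", "III", "aVF"], "artery": "RCA"},
    "lateral":       {"leads": ["I", "aVL", "V5", "V6"],
                      "artery": "circumflex branch of left coronary"},
    "posterior":     {"leads": ["V8", "V9"], "artery": "RCA or circumflex"},
}

def localize(st_elevation_leads):
    """Return wall regions whose characteristic leads all show ST elevation."""
    leads = set(st_elevation_leads)
    return [wall for wall, info in ECG_LOCALIZATION.items()
            if set(info["leads"]) <= leads]
```

For example, elevation confined to leads II, III and aVF maps to an inferior infarct, which the table associates with the right coronary artery.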
Current guidelines are generally in favor of troponin sub-units I or T, which are very specific for the myocardium and are thought to rise before permanent injury develops. A positive troponin in the setting of chest pain may accurately predict a high likelihood of a myocardial infarction in the near future. The diagnosis of myocardial infarction requires two out of three components (history, ECG, and enzymes) to be positive for MI. Currently, the cardiac markers, namely the troponins, have become so reliable that enzyme elevations alone are considered reliable measures of cardiac injury, with ECG serving to determine where in the heart the damage has occurred, and history serving to screen patients for further enzyme and ECG testing. In difficult cases or in situations where intervention to restore blood flow is appropriate, an angiogram can be done (see below for an image). Using a catheter inserted into an artery (usually the femoral artery), obstructed or narrowed vessels can be identified, and angioplasty applied as a therapeutic measure (see below). Angiography requires extensive skill, especially in emergency settings, and may not always be available out of hours. It is commonly performed by cardiologists. There is a very small risk of plaque and vessel rupture on balloon inflation; should this occur, then emergency open-chest cardiac surgery may be required. Patients commonly experience bruising at the catheter insertion point in the groin and occasionally a hematoma. Dissection (tearing) of the blood vessel is rare but usually managed with a local thrombotic injection.
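The classical "two out of three" rule mentioned above (history, ECG, and enzymes) is simple enough to state as a small function. This is a hedged illustration of the rule exactly as worded in the text, not clinical software; all names are hypothetical:

```python
# Sketch of the classical WHO-style "two of three" MI diagnostic rule:
# two positive criteria -> probable MI, three -> definite MI.
# Illustrative only -- not a substitute for clinical judgment.
def who_mi_classification(ischaemic_chest_pain, serial_ecg_changes,
                          cardiac_enzyme_rise_and_fall):
    met = sum([bool(ischaemic_chest_pain),
               bool(serial_ecg_changes),
               bool(cardiac_enzyme_rise_and_fall)])
    if met == 3:
        return "definite MI"
    if met == 2:
        return "probable MI"
    return "criteria not met"
```

As the text notes, the 2000 refinement shifts weight toward biomarkers, so in practice a troponin rise plus any one supporting feature is treated as diagnostic.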
WHO criteria have classically been used to diagnose MI; a patient is diagnosed with myocardial infarction if two (probable) or three (definite) of the following criteria are satisfied:

- Clinical history of ischaemic-type chest pain lasting for more than 20 minutes
- Changes in serial ECG tracings
- Rise and fall of serum cardiac enzymes (biomarkers) such as creatine kinase, troponin I, and lactate dehydrogenase isozymes specific for the heart

The WHO criteria were refined in 2000 to give more prominence to cardiac biomarkers. According to the new guidelines, a cardiac troponin rise accompanied by either typical symptoms, pathological Q waves, ST elevation or depression, or coronary intervention is diagnostic of MI.

Ischemia and infarction

The underlying mechanism of a heart attack is the destruction of heart muscle cells due to a lack of oxygen. If these cells are not supplied with sufficient oxygen by the coronary arteries to meet their metabolic demands, they die by a process called infarction. The decrease in blood supply has the following consequences:

- Heart muscle which has lost blood flow long enough, e.g. 10-15 minutes, undergoes the ischemic cascade, dies (necrosis) and does not grow back. A collagen scar, which does not have the ability to contract, forms in its place. Thus the heart ends up permanently weaker as a pump for the remainder of the individual's life.
- Injured, but still living, heart muscle conducts the electrical impulses which initiate each heart beat much more slowly. The speed can become so slow that the spreading impulse is preserved long enough for the uninjured muscle to complete contraction; now the slowed electrical signal, still traveling within the injured area, can re-enter and trigger the healthy muscle (termed re-entry) to beat again too soon for the heart to relax long enough and receive any blood return from the veins.
If this re-entry process results in sustained heart rates in the 200 to over 400 beats per minute range called ventricular tachycardia (V-Tach) or ventricular fibrillation (V-Fib), then the rapid heart rate effectively stops heart pumping. Heart output and blood pressure falls to near zero and the individual quickly dies. This is the most common mechanism of the sudden death that can result from a myocardial infarction. The cardiac defibrillator device was specifically designed for stopping these too rapid heart rates. If used properly, it stops and resets the electrical impulses in all heart cells--in effect "rebooting" the heart--thereby stimulating the entire heart muscle to contract together in synchrony, hopefully stopping continuation of the re-entry process. If used within one minute of onset of V-Tach or V-Fib, the defibrillator has a high success rate in stopping these often fatal arrhythmias allowing a functional heart rhythm to return. - Myocardial rupture is most common three to five days after myocardial infarction, commonly of small degree, but may occur one day to three weeks later, in as many as 10% of all MIs. This may occur in the free walls of the ventricles, the septum between them, the papillary muscles, or less commonly the atria because of increased pressure against the weakened walls of the heart chambers due to heart muscle that cannot pump blood out as effectively. Rupture is usually a catastrophic event that results in pericardial tamponade (compression of the heart by blood pooling in the pericardium, the heart sac) and/or sudden death unless (or despite being) immediately treated. Histopathological examination of the heart shows that there is a circumscribed area of ischemic necrosis (coagulative necrosis). In the first 12-48 hours, myocardial fibers are still well delineated, with intense eosinophilic (pink) cytoplasm, but they lose their transversal striations and the nucleus. 
The interstitial space (the space between cells outside of blood vessels) may be infiltrated with red blood cells. When the healing has commenced (after 5-10 days), the area of coagulative ischemic necrosis shows myocardial fibers with preservation of their contour, but the cytoplasm is intensely eosinophilic and transversal striations and nuclei are completely lost. The interstitium of the infarcted area is initially infiltrated with neutrophils, then with lymphocytes and macrophages, in order to phagocytose ("eat") the myocyte debris. The necrotic area is surrounded and progressively invaded by granulation tissue, which will replace the infarct with a fibrous (collagenous) scar.

Atherosclerosis / other predisposing factors

The most common cause of heart attack by far is atherosclerosis, a gradual buildup of cholesterol and fibrous tissue in plaques in the arterial wall, typically over decades. However, plaques can become unstable, rupture, and additionally promote a thrombus (blood clot) that occludes the artery; this can occur in minutes. When a severe enough plaque rupture occurs in the coronary vasculature, it leads to myocardial infarction (necrosis of downstream myocardium). Risk factors for atherosclerosis may also be risk factors for ischemic heart disease: older age, smoking, hypercholesterolemia (more accurately hyperlipoproteinemia, especially high low-density lipoprotein (LDL) and low high-density lipoprotein (HDL)), diabetes (with or without insulin resistance), high blood pressure, and obesity. Many of these factors are modifiable. Other cardiac risk factors include elevated C-reactive protein, or a waist measurement of more than 35 inches for women or more than 40 for men. Age, smoking, and family history of early heart disease can increase the risks. Having increased blood pressure above 120 systolic may increase the risk of cardiovascular disease.
Elevated triglyceride levels and small LDL particle size may also increase the risk of cardiovascular disease. The blood flow problem is nearly always a result of exposure of atheroma tissue within the wall of the artery to the blood flow inside the artery, atheroma being the primary lesion of the atherosclerotic process. The many blood stream column irregularities, visible in the single-frame angiogram image to the right, reflect artery lumen changes as a result of decades of advancing atherosclerosis. Heart attack rates are higher in association with intense exertion, be it stress or physical exertion, especially if the exertion is unusually more intense than the individual usually performs. Quantitatively, the period of intense exercise and subsequent recovery is associated with about a 6-fold higher myocardial infarction rate (compared with other, more relaxed time frames) for people who are physically very fit. For those in poor physical condition, the rate differential is over 35-fold higher. One observed mechanism for this phenomenon is the increased arterial pulse pressure stretching and relaxation of arteries with each heart beat which, as has been observed with IVUS, increases mechanical "shear stress" on atheromas and the likelihood of plaque rupture. Increased spasm/contraction of coronary arteries and left ventricular hypertrophy in association with cocaine abuse can also precipitate myocardial infarction. Acute severe infection, such as pneumonia, can trigger myocardial infarction. A more controversial link is that between Chlamydophila pneumoniae infection and atherosclerosis. While this intracellular organism has been demonstrated in atherosclerotic plaques, evidence is inconclusive as to whether it can be considered a causative factor. Treatment with antibiotics in patients with proven atherosclerosis has not demonstrated a decreased risk of heart attacks or other coronary vascular diseases.
First aid

As myocardial infarction is a common medical emergency, the signs are often part of first aid courses. General management in the acute setting is:

- Seek emergency medical assistance immediately.
- Help the patient to rest in a position which minimises breathing difficulties. A half-sitting position with knees bent is often recommended.
- Give access to more oxygen, e.g. by opening the window and widening the collar for easier breathing; but keep the patient warm, e.g. with a blanket or a jacket.
- Give aspirin, if the patient is not allergic to aspirin. Aspirin has an antiplatelet effect which inhibits formation of further thrombi (blood clots).
- Non-enteric-coated or soluble preparations are preferred. These should be chewed or dissolved, respectively, to facilitate quicker absorption. If the patient cannot swallow, the aspirin can be used sublingually.
- U.S. guidelines recommend a dose of 160-325 mg.
- Australian guidelines recommend a dose of 150-300 mg.
- Give glyceryl trinitrate (nitroglycerin) sublingually (under the tongue) if it has been prescribed for the patient.
- Monitor pulse, breathing, level of consciousness and, if possible, the blood pressure of the patient continually.
- Administer cardiopulmonary resuscitation (CPR) if cardiac arrest occurs due to ventricular arrhythmia.

Automatic external defibrillation (AED)

Since the publication of data showing that the availability of automated external defibrillators (AEDs) in public places may significantly increase chances of survival, many of these have been installed in public buildings, public transport facilities, and in non-ambulance emergency vehicles (e.g. police cars and fire engines). AEDs are also becoming popular for use in the home, where most attacks occur. AEDs analyze the heart's rhythm and determine whether the rhythm is amenable to defibrillation ("shockable"), as in ventricular tachycardia and ventricular fibrillation.
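The first-aid aspirin dose ranges quoted above (U.S. 160-325 mg, Australian 150-300 mg) can be expressed as a simple range check. This is purely illustrative of the quoted guideline numbers, not medical guidance, and all names are invented:

```python
# Range check for the first-aid aspirin doses quoted in the text.
# Illustrative only -- not medical advice.
GUIDELINE_RANGES_MG = {
    "US": (160, 325),  # U.S. guideline range
    "AU": (150, 300),  # Australian guideline range
}

def dose_within_guideline(dose_mg, guideline="US"):
    """Return True if dose_mg falls inside the named guideline's range."""
    low, high = GUIDELINE_RANGES_MG[guideline]
    return low <= dose_mg <= high
```

A 300 mg dose, for instance, satisfies both quoted ranges, while 325 mg sits at the top of the U.S. range but above the Australian one.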
Emergency services may recommend that the patient take nitroglycerin tablets or patches, if available, particularly if they have had prior heart attacks or angina. In an ambulance, an intravenous line is established, and the patient is transported immediately if breathing and pulse are present. Oxygen first aid is provided and the patient is calmed. Close cardiac monitoring (with an electrocardiogram) is initiated if available. Recent attempts to reduce the damage to the heart from an acute myocardial infarction have resulted in studies of prehospital use of thrombolytics, or "clot busters". In rural areas or congested urban areas, trained paramedics give thrombolytics to patients who meet specific, rigid criteria. The effectiveness of this treatment is assessed through various studies. Studies such as TIMI-19 evaluate the time of onset of symptoms, the time of administration of thrombolytics, and the patient's outcome. Studies have also compared prehospital thrombolytics with in-hospital administration of thrombolytics and interventional angioplasty. The specific medication utilized and the criteria the patient must meet differ among the studies. If the patient has lost breathing or circulation, advanced cardiac life support (including defibrillation) may be necessary and (at the paramedic level) injection of medications may be given per protocol. CPR is performed if there is no satisfactory cardiac output. About 20% of patients die before they reach the hospital; the cause of death is often ventricular fibrillation.

Wilderness first aid

In wilderness first aid, a possible heart attack justifies evacuation by the fastest available means, including MEDEVAC, even in the earliest or precursor stages. The patient will rapidly become incapable of further exertion and have to be carried out. A sublingual aspirin tablet may help.
Doctors traveling by commercial aircraft may be able to assist an MI patient by using the on-board first aid kit, which contains some cardiac drugs used in advanced cardiac life support, and oxygen. Flight attendants are generally aware of the location of these materials. Pilots may divert the flight to land at a nearby airport. A heart attack, especially because of cardiac arrhythmias, is often a life-threatening medical emergency which demands both immediate attention and activation of the emergency medical services. Immediate termination of arrhythmias and transport by ambulance to a hospital where advanced cardiac life support (ACLS) is available can greatly improve both chances for survival and recovery. The more time that passes, even 1 – 2 minutes, before medical attention is available/sought, the more likely the occurrence of both (a) life threatening arrhythmias/death and (b) more severe and permanent heart damage. In the hospital, oxygen, aspirin, glyceryl trinitrate (nitroglycerin) and analgesia (usually morphine, hence the popular mnemonic MONA, morphine, oxygen, nitro, aspirin) are administered as soon as possible. In many areas, first responders can be trained to administer these prior to arrival at the hospital. The ultimate goal of the management in the acute phase of the disease is to salvage as much myocardium as possible and restore contractile function of heart chambers. This is achieved primarily with thrombolytic drugs, such as streptokinase, urokinase, alteplase (recombinant tissue plasminogen activator, rtPA) or reteplase. Heparin alone as an anticoagulant is ineffective. Aspirin is a standard therapy that is part of all reperfusion regimens. Because irreversible ischemic injury occurs within 2-4 hours of the infarction, there is a limited window of time available for reperfusion to work. Although clinical trials suggest better outcomes, angioplasty via cardiac catheterization as a first-line measure is probably still underused. 
This is largely dependent on the availability of an experienced interventional cardiologist on-site, or of rapid transport to a referral centre. The goal of primary angioplasty is to open the artery within 90 minutes of the patient presenting to the emergency room; this interval is referred to as the door-to-balloon time. If the door-to-balloon time would exceed the time required to administer a thrombolytic agent by more than 60 minutes, then administration of a thrombolytic agent is preferred. Emergency coronary surgery, in the form of coronary artery bypass surgery, is another option, although its use has been in decline since the development of primary angioplasty. The same limitation applies here: cardiothoracic surgery services are not available in many hospitals.

Monitoring and follow-up

Additional objectives are to prevent life-threatening arrhythmias or conduction disturbances. This requires monitoring in a coronary care unit and protocolised administration of antiarrhythmic agents. Patients are discouraged from working and from sexual activity for about two months, while they undergo cardiac rehabilitation training. Local authorities may place limitations on driving motorised vehicles. During a follow-up outpatient visit, or increasingly before discharge from hospital, further investigations are performed to document coronary artery disease objectively. If rescue angioplasty has not already been performed, a coronary angiogram (or alternatively a thallium scintigram or treadmill test) may be done to identify treatable causes, as this will decrease the risk of future myocardial infarction. Patients are usually commenced on several long-term medications post-MI, with the aim of preventing secondary cardiovascular events such as further myocardial infarctions or cerebrovascular accident (CVA).
Unless contraindicated, such medications may include:
- Antiplatelet drug therapy such as aspirin and/or clopidogrel should be continued to reduce the risk of thrombus formation. Aspirin is first-line, owing to its low cost and comparable efficacy, with clopidogrel reserved for patients intolerant of aspirin. The combination of clopidogrel and aspirin may further reduce the risk of cardiovascular events; however, the risk of hemorrhage is increased.
- Beta blocker therapy such as bisoprolol or metoprolol should be commenced. These have been particularly beneficial in high-risk patients, such as those with left ventricular dysfunction (LVD) and/or continuing cardiac ischaemia. β-Blockers decrease mortality and morbidity. They also improve symptoms of cardiac ischemia in NSTEMI.
- ACE inhibitor therapy should be commenced 24–48 hours post-MI in hemodynamically stable patients, particularly in patients with a history of MI, diabetes mellitus, hypertension, anterior location of infarct (as assessed by ECG), tachycardia, and/or evidence of left ventricular dysfunction. ACE inhibitors reduce mortality and the development of heart failure, and decrease ventricular remodelling post-MI.
- Statin therapy has been shown to reduce mortality and morbidity post-MI, irrespective of the patient's cholesterol level.
- The aldosterone antagonist agent eplerenone has been shown to further reduce the risk of cardiovascular death post-MI in patients with heart failure and left ventricular dysfunction, when used in conjunction with the standard therapies above.

Patients' blood pressure is also treated to target, and lifestyle changes are suggested, chiefly smoking cessation, regular aerobic exercise, a sensible diet, and limitation of alcohol intake.
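The reperfusion timing rule quoted earlier (a thrombolytic is preferred if the door-to-balloon time would exceed the time to administer a thrombolytic by more than 60 minutes) can be sketched as a simple decision function. This is an illustrative sketch only, not clinical guidance; the function name and parameter names are invented for this example, and only the two thresholds stated in the text are encoded.

```python
def choose_reperfusion(door_to_balloon_min: float, time_to_lytic_min: float) -> str:
    """Illustrative sketch of the timing rule quoted in the text:
    if the door-to-balloon time would exceed the time required to
    administer a thrombolytic agent by more than 60 minutes,
    thrombolysis is preferred; otherwise primary angioplasty
    (whose stated target is balloon within 90 minutes of presentation)."""
    if door_to_balloon_min - time_to_lytic_min > 60:
        return "thrombolysis"
    return "primary angioplasty"

# Example: a 75-minute door-to-balloon estimate vs. 30 minutes to give a
# lytic (75 - 30 = 45, not more than 60), so angioplasty remains preferred.
print(choose_reperfusion(75, 30))   # primary angioplasty
print(choose_reperfusion(150, 30))  # thrombolysis
```

The point of the sketch is that the comparison is relative: a long door-to-balloon time alone does not rule out angioplasty unless it also exceeds the thrombolysis delay by the stated margin.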
Complications

- Congestive heart failure +
- Recurrent infarction
- Mitral regurgitation (especially right-sided MI)
- Papillary muscle rupture +
- Arrhythmias (ventricular tachycardia, ventricular fibrillation, complete heart block)
- Cardiac tamponade
- Cardiogenic shock

(Points labelled '+' can all lead to cardiogenic shock.)

A person who has suffered a myocardial infarction may be prevented from participating in activities that put other people's lives at risk, e.g. driving a car or piloting an airplane.

See also

- Angina pectoris
- Dressler's syndrome
- Coronary heart disease
- Coronary thromboses
- Hibernating myocardium
- Stunned myocardium
- Ventricular remodeling
- Cardiac arrest
- Diet and heart disease

This page uses Creative Commons Licensed content from Wikipedia (view authors).
See “US Perspective on Off-label Use in Pediatrics” by Karesh and Mulberg on page 113. The improvement of children's access to necessary medicines, as well as improved legal and scientific conditions for the use of medicines in pediatrics, has been a subject of interest for many years. There have been several recent and valuable regulatory initiatives in the European Union to promote the availability of medicines specifically developed for children and to increase the number of medicines that are correctly evaluated in children and include recommendations for use in children in their officially approved conditions of use (1–4). Nevertheless, off-label use of medicines in children is still a relevant issue. New legislation on compassionate and off-label use of medicines has recently been adopted in Spain (5). The purpose of this national law is to facilitate access to necessary medicines by decreasing administrative burdens and delays, as well as to clarify the legality of off-label use, establishing principles such as physician and institutional responsibilities and the need for documented oral consent from the patient (or their parents). The aim of the present study is to describe the medicines prescribed to children attending our pediatric gastroenterology outpatient clinic and to outline the magnitude and characteristics of off-label drug use, identifying the medicines most commonly prescribed off-label. Additionally, possible actions to improve the quality of medical prescribing were considered. An observational, cross-sectional, descriptive drug-use study was carried out in the pediatric gastroenterology outpatient clinic of a tertiary care university hospital, which incorporated pediatric care only 3 years ago and still has no pediatric surgery.
All of the patients seen in the pediatric gastroenterology outpatient clinic from January 1, 2010 to October 31, 2010 were retrospectively reviewed using a structured questionnaire, and the following information was collected for each patient from his or her medical records: date of birth; weight; diagnoses; and prescription details such as indication, dose, dose frequency, and route of administration. Medicines prescribed to children up to 16 years of age were registered and assessed, and their conditions of use were analyzed by comparing them with the authorized conditions set out in the official information of the Spanish Agency for Medicines and Health Products (AEMPS). The official summary of product characteristics (SPC) was obtained from the AEMPS online Medicines Information Center (CIMA), accessible at the AEMPS Web page (6). For some extremely old medicines with no available official SPC, such as bismuth citrate, published recommendations of use by European or national scientific societies and expert groups were taken as the authorized conditions of use. We considered off-label use to be the use of a drug at an indication, dosage, frequency, or route of administration different from those specified in the SPC, or in children outside the authorized age group. We also considered as off-label the use of a medicine with no specific information on pediatric use in its SPC. The study was conducted in line with national regulations and international ethics recommendations on biomedical investigation and was approved by the Research Ethics Committee of Puerta de Hierro Majadahonda University Hospital. Data were entered into a relational database (Microsoft Excel, Microsoft, Redmond, WA). A descriptive analysis of continuous variables was performed using mean, standard deviation, median, and range. The statistical analysis was performed with the Microsoft Office Excel Professional Edition 2003 package.
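The study's off-label criteria above (wrong indication, dose, frequency or route; child outside the authorized age group; or no pediatric information in the SPC at all) can be sketched as a predicate. This is a simplified, hypothetical sketch: the real SPC is a free-text document, all field and function names here are invented for illustration, and the frequency and route criteria are omitted for brevity.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class SpcEntry:
    """Hypothetical, simplified record of a drug's approved conditions."""
    indications: Set[str]
    min_age_years: Optional[float]       # None = no pediatric information at all
    max_daily_dose_mg_kg: Optional[float]

def is_off_label(spc: SpcEntry, indication: str, age_years: float,
                 dose_mg_kg: float) -> bool:
    # No pediatric information in the SPC counts as off-label in this study.
    if spc.min_age_years is None:
        return True
    # Child outside the authorized age group.
    if age_years < spc.min_age_years:
        return True
    # Indication not among those approved.
    if indication not in spc.indications:
        return True
    # Dose outside the recommendation (only an upper bound is modelled here).
    if spc.max_daily_dose_mg_kg is not None and dose_mg_kg > spc.max_daily_dose_mg_kg:
        return True
    return False
```

A prescription is classified on-label only when every modelled criterion is satisfied, mirroring how the study compared each prescription against the AEMPS conditions of use.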
A total of 695 children (367 boys and 328 girls; average age 4.3 ± 4.4 years [range, 22 days–15.6 years]) were included in the study. Patients were placed into 3 age groups, namely, infants (younger than 2 years) 48.2%, children (between 2 and 10 years) 39.7%, and adolescents (11 years or older) 12.1%. Of these, 207 children (29.8%) had received prescriptions (Table 1). The probability of receiving a medical prescription increases with age: the percentage of patients who received medicine ranges from 18.5% in infants and 36.6% in children to 52.4% in adolescents (Fig. 1A). The most common diagnoses in the total infant population were cow's-milk protein allergy (161 cases), gastroesophageal reflux disease (63 cases), failure to thrive (58 cases), and celiac disease (19 cases). In the children group, common diagnoses were constipation (62 cases), nonspecific abdominal pain (56 cases), celiac disease (52 cases), and nonspecific diarrhea (25 cases). For the adolescent group, diagnoses were nonspecific abdominal pain (30 cases), Helicobacter pylori infection (22 cases), celiac disease (10 cases), failure to thrive (8 cases), and constipation (6 cases). A total of 331 drug prescriptions involving 39 different active substances were analyzed (Table 2). The most frequent active substances were polyethylene glycol (76; 23.9%), ranitidine (47; 12.2%), esomeprazole (30; 9.1%), amoxicillin (24; 7.3%), clarithromycin (23; 6.9%), metronidazole (23; 6.9%), domperidone (20; 6.0%), omeprazole (16; 5.4%), and lansoprazole (15; 4.5%). The majority of medicines were administered orally (97.8%), with 1.2% by parenteral route, 0.6% by rectal route, and 0.4% by topical application. Of the 331 prescriptions recorded, 110 (33.2%) were off-label. The pharmacotherapeutic groups most involved in off-label use were antacids (100% off-label use), H2 antagonists (78.2%), proton pump inhibitors (58.0%), antibiotics (16.4%), and laxatives (14.3%).
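As a cross-check of the arithmetic, the headline proportions above can be recomputed from the raw counts reported in the text (variable names are ours; percentages rounded to one decimal place, as in the article):

```python
# Counts taken directly from the text of the study.
total_children = 695       # children attended in the clinic
children_with_rx = 207     # children who received prescriptions
total_prescriptions = 331  # prescriptions analyzed
off_label_rx = 110         # prescriptions classified as off-label

pct_children_with_rx = round(100 * children_with_rx / total_children, 1)
pct_off_label = round(100 * off_label_rx / total_prescriptions, 1)

print(pct_children_with_rx)  # 29.8, matching the reported figure
print(pct_off_label)         # 33.2, matching the reported figure
```

Both recomputed values agree with the percentages reported in the article (29.8% and 33.2%).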
It is worth noting that some therapeutic groups (immunosuppressive agents, antiemetics) had extremely infrequent use (<5 prescriptions) but were used off-label in all cases. Table 3 identifies all of the active substances with any off-label prescription (by age group). The main reason for considering use off-label was that the age range was not covered by the SPC, either directly in the indication wording or indirectly through the inclusion of specific age-adapted posology recommendations. The remaining off-label uses (17.3%) were related to the use of higher or lower than recommended doses (Table 4). It is noteworthy that most of these situations were detected in the group of adolescent patients, in whom clinical practice should sometimes overrule what is established in the SPC, according to the calculation of the dose depending on weight or age. Of a total of 207 children who received medicines, 47.3% received off-label drugs, with 89.8% of these receiving 1 off-label drug, 8.2% receiving 2 off-label drugs, and 2.0% receiving 3 off-label drugs. With regard to the analysis of off-label use by age group, the highest percentage was that of the infant group (in which 85.5% received at least 1 off-label medicine) versus 28.7% and 36.4% for the children and adolescent groups, respectively (Fig. 1B). With regard to obtaining parental informed consent in cases of off-label use, no documentation was found in the medical records. Therefore, it was not possible to verify that oral parental informed consent had been obtained, a practice that is now mandatory according to the recent Spanish law on off-label use of medicines. The use of off-label drugs in children has been extensively documented in different settings and countries (7–16).
With regard to use in pediatric gastroenterology, Dick et al (16) analyzed the conditions of use of medicines in 2002 in the United Kingdom and found that 37.4% of 777 prescriptions were off-label, 26.7% of the cases due to a different indication and 10.7% due to unauthorized age. The 5 most common clinical diagnoses were gastroesophageal reflux, constipation, inflammatory bowel disease, H pylori infection, and intestinal malabsorption. The off-label drugs most frequently prescribed were domperidone (19.6%), ranitidine (17.2%), omeprazole (12.8%), azathioprine (10.3%), tacrolimus (8.3%), metronidazole (7.2%), mesalazine (5.2%), and polyethylene glycol (3.8%). In our study, we analyzed off-label use in relation to age and, as expected, found that off-label use accounted for 33.2% of all prescriptions, with a higher percentage at early ages. In children younger than 2 years, up to 85.5% of the prescriptions were off-label. In comparison with the study carried out by Dick et al (16), our study shows an increase in the off-label use of polyethylene glycol and proton pump inhibitors. These differences are probably not related to differences between the 2 units, but rather to the increased use of these medicines as a consequence of the pediatric therapeutic protocols and scientific evidence developed during the time elapsed between the 2 studies. It has been widely reported that the information available for children in the SPC is inadequate, incomplete, and in many cases inconsistent (17,18). There is usually no dosage recommendation according to weight, body surface, or age range, although it is widely recognized that extrapolation of pediatric dosages from the adult population is not always appropriate, owing to the specific pharmacokinetics and pharmacodynamics of children (13,17).
It is not infrequent to find that widely accepted therapeutic recommendations or practice guidelines in children are not in accordance with the formally approved conditions of use in the SPC, even in cases in which good supporting scientific evidence is available. Another aspect that has been studied with respect to off-label medicines is safety, because it is possible that adverse reactions are more frequent or more severe than those related to medicines prescribed in line with the SPC. Off-label use implies that there are no adequately stated recommendations on dosage and other conditions of use. This could lead to medication errors, including dosage errors, which are a cause of adverse reactions in children. Errors in drug administration were studied by Conroy (19) using a series of 158 errors occurring between 2004 and 2006; it was concluded that mistakes were more common when prescribing off-label or unlicensed drugs (drugs not commercially available, such as the magistral formulas that are sometimes required by pediatricians). A recent study carried out in Denmark (20) analyzed spontaneous reporting of adverse reactions in children from birth to 17 years during a 10-year period (1998–2007). Seventeen percent of adverse reactions were associated with off-label use and, of these, 60% were severe. The populations most affected by these reactions were teenagers, particularly with the use of oral contraceptives and drugs for the treatment of severe acne. In contrast, Phan et al (15) found a rate of reported adverse reactions related to off-label use that was lower than the rate reported with approved uses. They reviewed the medical records of all patients 18 years or younger admitted to a pediatric emergency department during a 5-month period. A total of 2191 patients with 6675 medications were reviewed, and only 40 adverse reactions (0.6%) were reported.
Only 12.5% were due to off-label drug use, which is half of what would be expected given the frequency of off-label use. Nevertheless, underreporting appears to be an issue in the present study, and we agree with other authors (19,20) in concluding that it is necessary to improve pharmacovigilance, monitoring, and documentation of adverse effects associated with off-label drug use. To change this situation and meet the therapeutic needs of a population that is still at a disadvantage with respect to drug use, the performance of clinical trials and safety studies in children should be encouraged by all of the agents involved (eg, pediatricians, parents, research ethics committees, health authorities, financing bodies). In addition, health authorities should ensure that the SPCs of all authorized medicines are amended in line with the scientific evidence available at any given time. The situation of the regulatory approval of pediatric use of medicines began to change in Europe a few years ago. Following earlier experience in the United States, the European Union issued a pediatric regulation in 2007 with the objective of ensuring both high-quality research in the development of medicines for children and the availability of high-quality information about medicines used in children (1). At the same time, international networks of excellence, such as the Task-force in Europe for Drug Development for the Young (TEDDY), comprising >300 pediatric experts from 11 countries, including Spain, have informed and advised the European Medicines Agencies Network on pediatric drug use, and important cooperative work has been carried out to identify unmet therapeutic needs in children (21). Therapeutic research in children has been specifically supported at both the European and national level, and new research projects have been generated in Europe through TEDDY and other interested parties.
At the national level, several different initiatives have been started. One example is the Committee for Medicinal Products of the Spanish Pediatrics Association, a working group whose aim is to optimize the use of drugs in children in our country (21). As a result of the new pediatric legislation, an exhaustive task of reviewing pediatric evidence on the use of medicines is presently being carried out by pharmaceutical companies and the European Medicines Agencies, and recently several new revised pan-European approved conditions of use in children have been issued by the official European Medicines Agencies Network (4). Although this is an extremely positive achievement, the agreed wordings appear to be too factual and lack a true endorsement of new indications. The example of omeprazole is shown in Table 5. In addition, there is some delay in the implementation of agreed changes into the official SPCs available in the different European Union countries and, more important, there are many pediatric conditions which still lack scientific evidence on medicine use or need the development of new pharmaceutical formulations suitable for children. One such example is the nonavailability of a nonalcoholic commercial formulation of ranitidine. At the same time, a new regulation has been issued in Spain (5) with the aim of facilitating off-label use of medicines when it is well supported by pediatric evidence or established in practice guidelines, as well as clarifying the responsibilities in off-label use. The regulation establishes that in these situations the physician should register off-label use in the medical record of the patient, together with justification for its use, as well as inform parents about the benefits and potential risks arising from off-label use and obtain their oral consent.
It would be good practice for the physician to register in the patient's medical record that oral informed consent has been obtained, but this procedure has not yet been implemented in current practice. In our study, no documentation of informed consent could be found in the medical records, nor was any reference made to the justification of off-label use. It is our belief that oral informed consent was not obtained in most cases, and it is likely that the pediatrician was not even aware that he or she was prescribing a drug not approved for children. Presently, there are 2 realities concerning the off-label use of drugs in pediatrics. On the one hand, there is increasing awareness of how undesirable off-label use of medicines in children is, and there are many initiatives to foster pediatric clinical trials as well as to encourage pharmaceutical companies and medicines agencies to work together to face this challenge in children's health care. On the other hand, a great percentage of pediatricians continue to prescribe medicines without knowing whether the dose is adjusted to the approved SPC, or even whether the drug is formally indicated in children or in certain age groups. It is obvious that if the doctor does not know it is off-label use, then the family is also not informed. Sometimes off-label use is supported by well-accepted recommendations or therapeutic protocols. Meanwhile, a question remains unanswered: does off-label use really pose a risk for children? Or perhaps the question should be whether official SPCs are really up to date and whether they fulfill their basic purpose of informing physicians about the use of a medicine with a positive benefit-risk ratio.
In any case, the present responsibility for prescribing an off-label drug belongs exclusively to the pediatrician, and therefore clinical practice guidelines are of the utmost importance, as they could establish the use of medicines based on both the approved conditions and the best available evidence, even in off-label use. Given this situation, several new initiatives have been implemented in our hospital. First, pediatricians are informed about whether a use is approved or not, so as to ensure that, whenever possible, the approved medicine is used. Second, off-label use of medicines is always to be reflected in the clinical history, and parents informed of such use. Third, the decision was made to identify common off-label uses still not supported by guidelines and scientific evidence and to prospectively collect follow-up efficacy and safety data so as to provide evidence and support for future formal endorsement of pediatric use. We think that this description of off-label use and the identification and implementation of measures similar to those taken at our hospital could also be of value for other pediatric centers.

2. European Medicines Agency. Revised priority list for studies into off-patent paediatric medicinal products for the 5th Call 2011 of the 7th Framework Programme of the European Commission. http://www.ema.europa.eu. December 6, 2011.
4. Heads of Medicines Agencies. List of active substances and agreed SPC wordings—EU work sharing procedure in the assessment of paediatric data. http://www.hma.eu/99.html. Accessed December 6, 2011.
7. Baiardi P, Ceci A, Felisi M, et al. In-label and off-label use of respiratory drugs in the Italian paediatric population. Acta Paediatr.
8. Pasquali SK, Hall M, Slonim AD, et al. Off-label use of cardiovascular medications in children hospitalized with congenital and acquired heart disease. Circ Cardiovasc Qual Outcomes.
9. Bavdekar SB, Sadawarte PA, Gogtay NJ, et al. Off-label drug use in a pediatric intensive care unit. Indian J Pediatr.
10. Dessi A, Salemi C, Fanos V, et al. Drug treatments in a neonatal setting: focus on the off-label use in the first month of life. Pharm World Sci.
11. Doherty DR, Pascuet E, Ni A, et al. Off-label drug use in pediatric anesthesia and intensive care according to official and pediatric reference formularies. Can J Anaesth.
12. López Martínez R, Cabañas Poy MJ, Oliveras Arenas M, et al. Drug use in a neonatal ICU: a prospective study. Farm Hosp.
13. Medina Claros AF, Mellado Peña MJ, Baquero Artigao F. Basis for the clinical use of drugs in children. Current status of paediatric use of drugs in Spain. An Pediatr Contin.
14. Morales-Carpi C, Estañ L, Rubio E, et al. Drug utilization and off-label drug use among Spanish emergency room paediatric patients. Eur J Clin Pharmacol.
15. Phan H, Leder M, Fishley M, et al. Off-label and unlicensed medication use and associated adverse drug events in a pediatric emergency department. Pediatr Emerg Care.
16. Dick A, Keady S, Mohamed F, et al. Use of unlicensed and off-label medications in paediatric gastroenterology with a review of the commonly used formularies in the UK. Aliment Pharmacol Ther.
17. Campino Villegas A, López Herrera MC, Caballero MI, et al. Do neonates have the same pharmacotherapeutic opportunities as adults? An Pediatr (Barc).
18. Morales-Carpi C, Julve Chover N, Carpi Lobatón R, et al. Drugs used in paediatric outpatients: do we have enough information available? An Pediatr (Barc).
19. Conroy S. Association between licence status and medication errors. Arch Dis Child.
20. Aagard L, Hansen EH. Prescribing of medicines in the Danish paediatric population outside the licensed age group: characteristics of adverse drug reactions. Br J Clin Pharmacol.
21. Mellado Peña MJ, Piñeiro Pérez R, Medina Claros AF, et al. Use, implementation and impact of the TEDDY Network in Europe. Farm Hosp.
San Antonio Living History Association
Texian Coordinator - 2003 Texian Guide Book
Cover illustration by, and used with permission of, Gary S. Zaboly
The groups of men who fought and died at the San Antonio de Valero Mission (what we today call the Alamo) were not the movie versions of "Mountain Men" or "Buckskinners", nor clones of Davy Crockett as portrayed by John Wayne and Fess Parker. The movie stereotypes of the men who fought and died at the Alamo are far removed from reality. The real David Crockett dressed in a buckskin outfit now and then, but only for certain occasions. The rest of the time, he preferred the dress of men accustomed to comfortable living (he was a member of the US Congress, and dressed appropriately). Some of the men who fought and died at the Alamo were farmers, many were townsfolk, and some had only recently arrived in the United States from countries such as Germany, England, and Scotland. They were doctors, lawyers, clergymen, shopkeepers, and tradesmen of all kinds. With the exception of the few men under Travis' command, these men were all volunteers; they answered to their own elected leader and seldom paid any attention to anyone else. They were fiercely independent and did not take well to criticism or to following orders from anyone. They were also dedicated and loyal enough to stand with a friend, even to the point of giving up their lives if so called upon. They were a special breed of men who called themselves TEXIANS. Today these men would probably be Special Forces Rangers, Navy SEALs, or United States Marines. Only a few of the nearly 200 men who died at the Alamo were native Texians. A few more were men who had moved to Texas prior to the start of hostilities and settled into one of the towns or communities, such as Gonzales or Nacogdoches. The vast majority of the men who were there had heeded the call to come to Texas to serve in the militia in exchange for grants of land and a chance at a new life.
There was no standardized "outfit" among the Texians, so it is very difficult to set any hard and fast rules as to how any individual Texian should dress. The following are some examples of what would be acceptable for anyone attempting to prepare a "costume". Please do not be put off by the use of the word costume as it simply means an assemblage of clothing worn to project a certain image – in this case, the image of a man living in 1836 in San Antonio de Bejar. Starting with footwear, remember that only leather was used to make footwear during this time period. The most common type of shoe worn during the 1830s was a brogan or a boot. Brogans generally had a squared toe, but round-toed boots were becoming popular. There was no boot polish available at the Alamo, so highly polished footwear would not be appropriate. A common method of blackening boots at that time was to make a mixture of lampblack and lard and rub it into the leather. Black was probably the most standard color, but brown would be acceptable. If you wear boots (NO COWBOY BOOTS), I would strongly recommend you try to find some made so that the rubber sole is as inconspicuous as possible. Boots and Brogans actually had leather soles, but leather tends to get very slippery when wet, so a rubber sole is safer. Unless it is really extreme, such as a vibram lug sole, most folks won’t notice. If you choose to wear boots, be aware that very few men wore their trousers tucked inside their boots. Normally your trouser legs would cover your boot-tops or brogan tops. If your persona is to be a Texas farmer, you may want to wear a set of leather or wool leggings over your trousers, from the knee down, to protect your lower legs from stickers, thorns, and snakes (all very common throughout Texas). 
Moccasins were quite common footwear on the frontier (Texas was definitely considered a frontier area), but until you have become accustomed to wearing them, you may only be able to tolerate them for short periods at first. The men at the Alamo who would have worn moccasins were most likely those whose regular footwear had worn out, and since supplies of all kinds were desperately scarce, they would have resorted to making their own. Reliable reports exist of some of the men simply wrapping their feet in a piece of cowhide with the hair still on it. Whether this was a very crude form of moccasin or was used over their regular footwear because of the very cold, wet conditions at the time I do not know, but it did in fact occur. The material most commonly used for trousers was probably wool, although some leather, cotton canvas, and linen were used as well. Remember that all the events portrayed by the San Antonio Living History Association (SALHA) occurred during the winter of 1835-36. This was one of the coldest winters on record. Levis had not been invented yet (that occurred during the gold rush days in California, and they were made of canvas, not denim), nor had any form of zipper! The most popular trousers were the "fall front", or buttoned flap (similar to Navy trousers), with two or three buttons made of horn, wood, or metal (no plastic, please!). Trousers were held up with suspenders, normally non-elastic. Even though the English invented elastic in 1820, it was not yet in widespread use on the frontier. Suspenders were attached to the trousers with buttons, rather than clamps. If the trousers had pockets (many did not), they would only be in the front (sides). During warmer weather, men often wore trousers made of linen, cotton canvas, or muslin. Colors usually ranged from off-white to brown to dark gray. Trousers did not have belt loops. If a man wore a belt, it was so he could carry a knife, an axe, a pistol, or maybe all three.
Some men wore a sash rather than a belt, for the same purpose. Remember when putting together your outfit that everything men carried on their person had a use. Most men didn't wear anything purely for ornamental reasons; so before you buy or make anything, ask yourself if the person you are portraying would have needed it. These men had traveled a long way to get to the Alamo, and most arrived with only what they had on their backs, which was very little, and by the time of the siege, their clothing was very badly worn and stained (tattered and torn, patched, and patched again would probably be a very accurate description). A shirt was more than just a shirt. The "tail" of a man's shirt would extend downward to his knees and serve as a shirt during the day and a nightshirt to sleep in. How often it got washed depended upon how many shirts you owned (normally no more than two or three – often only one). This note on the washing of clothing applied to all clothing items. One rarely had the time to devote to washing and drying clothing, nor were there usually any facilities for bathing. During this time period, you were more concerned about mere survival than about personal hygiene. The most common shirt materials were linen, osnaburg, and muslin. Although white was the most common color, it is easy to see that shirts didn't stay white for very long. Some men dyed their shirts, not only to please their individual taste in clothing style, but also to hide the sweat and dirt stains. Remember, there was no Rit Dye in 1836. If you want to dye a shirt, use pecan hulls or some other locally grown plant product as the basis for your dye. Boil the shells, etc., in an old pot until you have enough dye, then add the shirt and continue boiling. I would suggest at least a couple of washings before wearing a shirt that has been dyed in this manner, so that you don't wind up dyeing yourself as well. Shirts did not have pockets, and they didn't button up the front.
You slipped the shirt over your head, and either left the neck open, closed the "collar" with the one or two buttons provided, or used a drawstring and tied it in a bow. Over this you might or might not wear a "stock", the 1800s version of a tie, worn when one wanted to "dress up". During warm weather, most men wore a waistcoat, or vest, as an outer garment. Many wore one at all times, except while sleeping. It has been said that many men would sooner venture out in public without a hat than without their vest, as they considered having nothing over their shirt/nightshirt akin to being out and about in their underwear. Waistcoats were generally made of wool, cotton canvas, linen, or leather. Buttons would be pewter, wood, or horn. No pockets. Dark brown, burgundy, red, black, and navy were the most commonly used colors, but those who could afford it occasionally used patterned materials. Coats came in a variety of styles and lengths, depending upon where the individual came from. But by far the most practical and commonly worn coat at the Alamo would have been a wool coat (again, our events took place during winter). The most popular colors were black, dark brown, and navy, although men who came to the Alamo from the cities in the northeastern part of the United States might wear more stylish, patterned materials. One or two men may have had a cloak to wear, or a capote, but these were not very commonly used during this period. Military officers and gentlemen who could afford them wore cloaks during the 1700s, 1800s, and even into the early 1900s. A capote is a hooded long coat, held together in the front with a sash. A capote is normally made of wool blanket material, and as such is very warm. The major drawback to these two styles is their expense and the fact that when wet they are very heavy. Wool short jackets were in use by sailors and laborers through the early 19th century.
Frontiersmen in the south commonly wore hunting frocks of cotton canvas or linen more often than leather. If you decide to wear leather for any of your clothing items, be aware that leather is hot in the summer and cold in the winter, and when wet it takes a lot longer to dry than cloth. But most important to you is that leather is very expensive now. In the 1830s this was not the case. Fabrics of all kinds were hard to get in Texas, and very expensive when they were available. Cow hides and deer hides were easy to come by, and when properly tanned, wore well under the harsh frontier conditions. Mixed use of leather and fabric in a costume would be quite appropriate. If you're a Scot, you will probably choose to wear a Highland Bonnet. If you're French, you may want to wear a Voyageur's Cap. Otherwise you'll probably choose to top off your ensemble with a forage cap or a brimmed hat of some kind. The forage cap was very popular, especially for winter wear. It could be made of wool or leather and could be had with fur on the outside of the cap, or on the inside for additional warmth. Top hats were a very popular form of headwear among townsfolk. Farmers and some preachers seemed to prefer the broader brimmed hats with a lower crown, either round or flat topped. Winter hats were primarily wool felt, while warm-weather hats were of straw. Nearly all men wore a hat of some kind, primarily for protection from the elements. Any of the styles mentioned above are appropriate for an 1836 reenactor. You could choose an animal skin for a hat, but they are hot, and often smelly, and even David Crockett preferred other styles of headgear. All cannons, muskets, rifles, shotguns, and pistols in use during the 1830s used black powder, and this is the powder used by all reenactors of the SALHA. The ignition systems for all these weapons (except cannons) are either flintlock or percussion cap. The flintlock musket was possibly one of the most common weapons used during this time period.
The most common musket in use during the 1830s was the "Brown Bess", a British military surplus weapon, which was quite easily and inexpensively obtained. This is a 75-caliber weapon, 54" long, weighing 9 lbs. The French Charleville (also military surplus) is a 69-caliber weapon, 59 ½" long, and weighs 10 ¼ lbs. Both are smooth bore (like a shotgun), with no rear sight and a very poor front sight that doubled as a locking lug for attaching a bayonet to the barrel. The effect of the large lead ball they fire was devastating to anyone within their effective range, which for both is approximately 70-80 yards. These long, cumbersome, heavy weapons are best suited to the close-quarters, open-field formation style of combat developed in England and Europe, and adopted by Santa Anna's army, whose primary weapon was the Brown Bess. Another weapon appropriate for an 1836 reenactor is the U.S. Model 1803 or 1816 Flintlock Musket, made by the Springfield and Harpers Ferry Arsenals. This is a 69-caliber weapon, 57" long, and weighs 9 ¾ lbs. Loading a musket is simple and quick because of its large, smooth bore. Cleaning is easy for the same reason. All black powder weapons must be kept as clean and dry as possible to ensure reliable functioning. A dirty weapon will be hard to load, and misfires will be a common occurrence. A reenactor must treat his weapon as if he has to bet his life on it firing every time he pulls the trigger. This is the way the people we are emulating took care of their weapons. We must do the same for every weapon we own. The rifle was usually the first choice of men who depended upon their weapon to protect themselves and their families, and to put meat on the table. The rifle is much more accurate than the musket because it has a rifled barrel and much better sights. Some, depending upon the ability of the shooter and the quality of the weapon, are accurate to 200 yards.
There are some 54-caliber rifles available, but 50 or 45 caliber rifles are the most common now. Even 40, 36, and 32-caliber rifles were made, but these smaller bore rifles were considered more for women and young boys, although some men used them for hunting small game, such as squirrels and rabbits. This assortment of rifles could be any length, from 43" to 57" and weigh anywhere from 6 ½ to 9 ¼ lbs. Although the most common ignition system on the frontier was the flintlock, the percussion cap was becoming quite popular in much of the U.S. by the 1830s, so it is appropriate for our 1836 reenacting. Not all rifles were single shot weapons – double-barreled rifles were in use at that time, and highly favored by cavalry units. If you can find one of these rifles, expect to pay dearly for it. When you get ready to purchase a rifle, pay close attention to the details. More important than the caliber, length, ignition system, or even the style (Kentucky, Pennsylvania, Harpers Ferry, etc.) of the rifle is the type of sights and butt plate it has. Modern, adjustable sights did not exist in 1836, and neither did rubber or plastic butt plates. These two requirements eliminate many of the "reproduction" style weapons available today. The front sight should normally be a simple blade of brass, and often the butt plate, trigger guard, and other parts of the rifle will be brass also. Brass was a very popular metal because it could be easily shaped, but was tough enough to withstand a reasonable amount of abuse. It also does not rust. The primary finish for rifle barrels at that time was "browning", rather than the "bluing" that we use today. And wood was oiled, rather than lacquered, or coated with polyurethane, so try to find a rifle that has a good dull finish. Before you rush into buying, shop around at gun shows, gun dealers, and pawnshops. I own three guns. I bought all of them used, and the most expensive one was just $200.00. 
A new, shiny rifle will usually cost $500 or more. A shotgun was a very popular weapon with men who hunted for meat to supplement their diet, as it could be used with either shot or a single ball. It was, and still is, an outstanding close-range defense weapon, and was the last weapon Col. Travis fired just before his death at the Alamo north wall. The favored configuration was a double-barreled gun. The shotgun has the advantage over a musket of providing two shots before you have to reload, with nearly the same effective range – if a lead ball is used. The use of lead shot shortens the effective range by approximately half, but in a combat situation it is especially lethal. At close range the blast from a shotgun can literally tear a man in half. If you decide to buy a shotgun, the same cautions about paying attention to details on rifles apply. Pedersoli, of Italy, makes an excellent period-correct shotgun (one that can be used without modifications). This can be a 10, 12, or 20-gauge, browned, flintlock or percussion firearm, and can be had in either long or short-barreled configuration. It is not cheap (expect to pay $500+ for one), though cheaper models can be had; the price range of shotguns currently starts at $325 and goes as high as $3,250. The pistols used during the 1830s were primarily single-shot guns. Although a few multiple-barreled weapons were made, they were not very commonly used. Unlike rifles and shotguns, pistols were purely tools of self-defense. Once again, not everyone owned a pistol. If you owned a firearm (and most men did), it was either for protection or for hunting, and in most cases a musket, rifle, or shotgun is far superior to a handgun because you can stop "the enemy" before he is close enough to hurt you – IF you fire first, or he misses, and you do not. The effective range of a handgun is very limited – no more than 30 yards in the hands of an experienced shooter (less for the average man).
For those who want to own a pistol, pistols have the same ignition systems as rifles – either flintlock or percussion cap. Revolvers were being developed at the time, but were not yet available. If you decide you need a handgun, consider how you will carry it before you buy. Holsters were not readily available. As always, pay close attention to the details. Stay away from adjustable sights, blued metal, and lacquered wood. A tomahawk is an extremely useful tool, and was carried by many men who lived on the frontier. In addition to its ability to cut wood for fires and building temporary shelters, quartering large game animals when hunting, and a variety of other useful tasks, it was a very effective weapon. Sometimes, since a tomahawk and a large belt knife were fairly interchangeable in the jobs they could perform, a man might carry one or the other. However, men were more prone to carry both, and more than one knife. Choosing a tomahawk is relatively simple, as most produced today are suitable for reenacting. Once again, pay attention to details, and buy one that "looks old". Hand-forged tomahawk heads are inexpensive and plentiful. As simple as buying a tomahawk can be, the purchase of a knife can become very complicated. Although the "Bowie Knife" (several styles are available) was very popular by 1835, and was the belt knife issued to the New Orleans Grays, not everyone owned one. And even those who did often owned other knives as well. The "Arkansas Toothpick" was also a very popular style of knife at the time, as was the "butcher knife". Jim Bowie used a butcher knife before owning a "Bowie Knife". Back to details: stainless steel did not exist in 1835. Shun it like the plague. Knives were hand forged of either Damascus or carbon steel. Davy Crockett owned a Damascus steel knife, as did others. They are beautiful knives and available today, but are rather expensive. As mentioned above, men often carried several knives.
A real necessity was the small "patch knife", often carried in a sheath on the powder horn carry strap, or hung around the neck. Folding knives were popular, as a small knife like this can do a variety of tasks, including patch cutting. Many men carried more than one large knife for defensive reasons, just as they might have more than one pistol tucked into their belt or sash. The variety of knives available then and now is virtually unlimited, and you will only be limited in your selection of these weapons by the size of your bank account. One final note on knives – a sword or saber is considered a long knife. Some military men wore them. Travis had one. You can strap one on and be correct in your presentation, but they are costly, cumbersome, and not something that many of the civilian defenders would have had, unless it was a battle trophy. This section is dedicated to all the little "extras" you will need to be successful in your role as a reenactor. First off, in the 1830s you would have needed a powder horn or flask to carry your personal supply of powder. As a SALHA reenactor you must be aware that powder horns and flasks will NOT be allowed to contain any powder during reenactments because of safety concerns. We use paper cartridges during all reenactments, so there is no need to have powder in your horn or flask anyway. One safety note to remember is that you must NEVER pour powder from a horn or flask directly into a weapon! The results could be disastrous! In the 1830s, you might have carried a very small, additional container (horn or flask) filled with fine-grain powder, purely for priming the pan on your flintlock. I have never employed this technique, since I have found that all I need for priming my pan is a small amount of powder saved from my "cartridge" after I charge my weapon. In fact, I don't even own a priming horn or flask.
While we are discussing extra items for your weapons, you need: As you progress in your career as a reenactor, you will discover a lot more "extras" that you need, but you will need the items listed above immediately. One other item I consider essential is a good cleaning kit. You will have to clean your weapon thoroughly each time it is fired – unless you are really fond of rust. The cleaning kit normally stays back in camp or at home. EXCEPTION: When SALHA does more than one reenactment in a day, bring your cleaning supplies and clean your weapon between the "acts". This will help you avoid misfires. Personal items you will need include, but are not limited to: This is a listing of some of the companies that carry goods you will need to assemble your Texian outfit. This is certainly not an all-inclusive list, nor is the list in any order of preference, but it should provide a good starting point. Guns and general merchandise. Huge catalog available for $5. Tents and general merchandise. Primarily for their tinware. Later on, you may want to add a sewing kit (for emergency clothing repairs, or to make yourself a new pair of moccasins); a fishing kit (just hooks and line) to have some variety in your menu when possible, and to make your jerky last longer; and a net hammock to get you off the ground, and away from things that crawl, sting, or bite, while sleeping (if there are any trees available). Just remember one thing – you must be able to carry every part of your reenacting kit wherever you go. That's the way it was done in the 1830s. IF you were fortunate enough to travel on horseback rather than by "shank's mare" (on foot), you could carry more "stuff", but not much. Did you ever wonder how the cowboys in the movies managed to carry a coffee pot, a large iron skillet, plate, knife, fork, spoon, cup, beans, bacon, flour, a metal grate for the fire, and don't forget the coffee – all in one side of their saddlebag?
It all had to fit in one side because the other side was filled with their extra clothes and at least 1000 rounds of ammunition. If you believe any of this actually happened…. have I got a deal for you!
Texian Legacy Association
Texas Revolution Basic Reading List
By Charles M. Yates
I've been asked several times to recommend books on the subject of the Texas Revolution which would be helpful for reënactors and living historians. In order to present the best historical interpretations possible, reading about and studying the period are mandatory. The problem is that in today's hectic world, it's hard to wade through the abundance of books and articles available on the subject without some sort of starting place. I should point out that this is not a list of the only books necessary to read to understand the period. It is not an ending point; it is a beginning point. A great deal happened in Texas during the fourth decade of the 19th century, and it is well to remember that there is no one book or set of books that can give the reader a complete and total understanding of the subject. The learning process is neverending. I also realize that many fine books have been left off of this list and, no doubt, one of your favorites is among them. It is not an insult to you or the author that your book isn't on the list, so don't send me nasty emails before you read the criteria listed below! The books on the following list have been selected with a specific set of criteria in mind. The first criterion is that the list is limited to non-fiction books concerning Texas from roughly 1830 to 1840. Some of the finest books on this era in Texas history have been written fairly recently, so it is easy to establish the latter half of the 20th century as a second criterion. In addition to these two criteria, the books need to be of general interest, readable, and accurate, and to provide a variety of perspectives.
After all, I think we've all had enough of the tortuous, dry, boring history taught in public schools to last us a lifetime. It's time to have our interests piqued, to question our beliefs, and to exercise the ol' gray matter a little. Number 11: The Alamo Remembered: Tejano Accounts and Perspectives, by Timothy M. Matovina, 1994, University of Texas Press. Many times we forget that during the Battle of the Alamo there were people living in San Antonio. In fact, the population of Béxar was about 2300 prior to the onset of hostilities in late 1835. It was a predominantly Hispanic population, and most of the population had wisely fled to the countryside prior to Santa Anna's arrival in 1836. This book is a compilation of accounts left by some of the people who stayed in San Antonio during the siege of the Alamo or returned shortly thereafter. It is a fascinating book and well worth reading. Number 10: The Magnificent Barbarians: Little Told Tales of the Texas Revolution, by Bill and Marjorie Walraven, 1993, Eakin Press. I want to say "Buy this book for your kids," and it would, indeed, be a great book for them to read. The only problem is that it's also a great book for adults. The "Little Told Tales of the Texas Revolution" are presented as stand-alone essays, so you can literally pick up the book and start reading anywhere. It is well written and entertaining, but what sets the book apart is that it's very well researched. The Walravens did their homework and it shows. I read this book years ago and I still refer to its bibliography, every now and then, when doing research. Number 9: A Revolution Remembered: The Memoirs and Selected Correspondence of Juan N. Seguín, edited by Jesús F. de la Teja. This is a wonderful book to help understand what the long-established Tejano families went through during the turbulent years of 1835 to 1846. The whole story of Juan Seguín is seldom told, and this book goes a long way toward correcting that.
Number 8: The Texas Revolutionary Experience: A Political and Social History, 1835-1836, by Paul D. Lack, 1992, Texas A&M University Press. OK, this is probably the most "academic" book in the list. It is a bit on the dry side and at times it will be a little slow for some readers, but it was written to provide a different perspective of the Texas Revolution. It is also controversial in some places. Lack examines issues that were being debated at the time and have long since been forgotten. He also provides statistics which alter the traditional view of the revolution as a whole. If you want a book to challenge your beliefs and really exercise the mind, this is it. Number 7: Texans in Revolt: The Battle for San Antonio, 1835, by Alwyn Barr, 1990, University of Texas Press. Sometimes we forget that the Texians had to defeat the Mexican military to get the Alamo in the first place, so that they could defend it against Santa Anna three months later. This is a wonderful book about the first major battle of the Texas Revolution and the events that led up to it. Number 6: With Santa Anna in Texas: A Personal Narrative of the Revolution, by Jose Enrique de la Peña, 1997, Texas A&M University Press. De la Peña provides us with a unique view inside Santa Anna's army. He is not hesitant in his praise or condemnation of his fellow officers, nor in his analysis of the Texas Campaign. He also describes in detail the beauty of the land and farms, as well as the sufferings of the average Mexican soldado. There were many facets to the Texas Revolution, and this account helps clarify a few of the lesser-known or lesser-visited facets. Number 5: The Day of San Jacinto, by Frank X. Tolbert, 1959, McGraw-Hill Book Company, Inc. Frank Tolbert was a newspaperman, and this book is written as a newspaperman would write it: as a story. It's well researched and accurate for its time. It's a first-rate, fun read for young people or adults.
This book is out of print, but should be available through any major library. Number 4: A Time to Stand, by Walter Lord, 1961 (1978 reprint, Univ of Nebraska Press). This is one of the first of a genre of books written by eminent historians for popular consumption. It broke ground in researching the Siege and Battle of the Alamo and was written in a style that made it immensely popular with the general public. Even today, 38 years after it was published, it is still used as a benchmark of Texas history. Number 3: Blood of Noble Men: The Alamo Siege and Battle, by Alan C. Huffines and Gary S. Zaboly, 1999, Eakin Press. This book is a "must have" for any study of the Battle of the Alamo. It is a day-by-day description of the siege and battle of the Alamo as written by people who were there. Included are the wonderful drawings of Gary Zaboly and a wealth of information on dress, equipment, and the village of San Antonio at the time. Alan and Gary did a bang-up job on this book. Number 2: Three Roads to the Alamo, by William C. Davis, 1998, Harper Collins Publishers. William Davis is primarily a writer of Civil War books, but he brought his skills as a researcher and writer to Texas history with stunning effect. Three Roads to the Alamo is a biography of William B. Travis, James Bowie, and David Crockett, and a must-read for anyone who is interested in Texas history. Serious students of Texas history will find the notes and bibliography invaluable. Number 1: The Texian Iliad, by Stephen L. Hardin, 1994, University of Texas Press. While Davis' book is a close second, Hardin's Texian Iliad is the best overall book on the military aspect of the Texas Revolution ever written. It is not only a wonderfully written book, wonderfully illustrated by Gary Zaboly, but is also Dr. Hardin's dissertation, which attests to its accuracy. If you could only read one book on Texas history, this is, quite simply, the one.
TLA Review
Many new discoveries concerning Texas history have been made in recent years, and many more will be made as researchers continue digging through long-forgotten records and documents. Again, these books are not meant to be the sole or terminating sources on the subject, but a starting place for the continuing study of our Texian past. Be forewarned, though: history, particularly Texas history, can become a wonderfully satisfying addiction. Sic Semper Texanus
Reading List Copyright © 1998 Texian Legacy Association
Edward Ainslie Braithwaite, M.D., L.M., C.C. PIONEER DOCTOR AND FREEMASON OF THE WEST O. P. Thomas P.D.D.G.M. When contemplating the history of Western Canada, one of the features that stands out so definitely is what has been accomplished as the result of the efforts and initiative of certain individuals. This brief history is based on the life of one of the outstanding pioneers of the west, particularly Alberta, Dr. Edward Ainslie Braithwaite. A little over three hundred years ago, King Charles II granted a charter to a group of men interested in the fur trade. The articles of incorporation were drawn up on April 18, 1670, and the charter was granted on May 2, 1670. It was entitled "An Incorporation of Prince Rupert, Duke of Albemarle, Earl of Craven ...... into one body politique by the name of Governors and Adventurers trading into Hudson Baye." When this company was given its charter, in addition to getting the right to trade into this country, it agreed to endeavour to find the North-West Passage and to discover, as much as possible, the nature of the country included in the original Charter. It is worth noting that in the first eighty-four years of the Hudson Bay Company there was only one man who struck out into the interior. Each year the Governor of the Company received an annual instruction from London: "choose out from among our Servants such as are best qualified with Strength of Body and the Country Language, to travel and to penetrate the country... For their encouragement we shall plentifully reward them." It is doubtful whether any servant of the Company would have been encouraged to venture into this unknown land had a threat to their trading volume not entered into the picture. The French had been sending traders and Coureurs de Bois out from New France for many years. These hardy men had gone to the Indians and done their trading with them directly.
The Hudson Bay Company, on the other hand, had established forts or "Factories," usually at the mouths of the rivers which emptied into Hudson Bay. They encouraged the Indians to bring their furs to them. Now, however, the opposition were going to the Indians and encouraging them to trade nearer their homes. This made great inroads into the volume of trade of the Hudson Bay Company. So, in an attempt to remedy the situation and, at the same time, follow the instructions from the Head Office in London, Henry Kelsey set forth in 1690 to see what lay beyond the margin of Hudson Bay, and to encourage the Indians there to come down to the Bay to trade. After travelling through the timber country, in which there were many rivers and lakes, he came, at last, to look upon the seemingly limitless plain land dotted with innumerable shaggy animals, the Bison, or "Buffalo," of the vast prairie land which forms a large part of the present Canada. Thus, as the result of this man's work, the Great Prairie Land of Western Canada became known. After the second decade of the 18th century the Hudson Bay Company found more of its trade threatened by the opposition of the French traders, who had followed in the footsteps of La Verendrye, in 1731. Eventually they had built forts inland to get the trade from the Indians. During the period from 1754 to 1774 they sent sixty expeditions inland. In 1754, Anthony Henday went across the prairies by way of the Carrot River as far as they could paddle, then across to near where Saskatoon now stands, on across the South Saskatchewan River towards the North Saskatchewan, then beside it to the Battle River, and thence along the Battle River, striking mostly west, until some miles west of what is now Innisfail. Here he beheld the Shining Mountain, which we now call the Rocky Mountains.
Before this he had been received by the Blood Indians, a branch of the Blackfoot Confederacy, with whom the Hudson Bay Company traders wanted to do business. Here, he found that these Indians used horses for transportation and scorned the use of the canoe. When Henday tried to interest them in coming down to Hudson Bay, by canoe, with their furs, they refused the suggestion. So, again, one man contributed much to the opening up of this country. As a result of the work of Kelsey, the intrusion of the "carpet-bagger" traders from the St. Lawrence River area, and now the information Henday was able to give to the Company, trading posts were established in more and more areas of the West. Cumberland House was built by the Hudson Bay Company on the Saskatchewan River in 1774. There had been other smaller posts built by the opposition traders. Individuals like Samuel Hearne spent a great deal of time and suffered many privations so that more could be known of this country, particularly the Arctic area. Alexander Mackenzie, after many trials and disappointments, showed how it was possible to get across the Rocky Mountains and to the Pacific Coast by an inland route. Simon Fraser, in following the river named after him, showed another route to the Pacific, and was the fore-runner of the great railway routes we have over this rugged terrain. Of course, even these results could not have been obtained had it not been for the painstaking work of another individual, David Thompson, who showed a short route to the Columbia River and the Pacific, but who above all made accurate and detailed maps of this vast country. From these works the fur trading companies established centres of trade and, later, of population throughout this seemingly boundless country. Another individual who had a tremendous influence on the economic condition of this unknown country was Lord Selkirk.
When he saw the condition in which crofters had been placed in his home land of Scotland, as a result of the Enclosures and the Industrial Revolution, he could visualize these industrious farmers on the plains of the West, seeding and reaping great harvests and being able to live their lives in the independent way they had always desired. Against a great deal of criticism from his own Directors and from the fur traders of both the Hudson Bay Company and the North West Trading Company, he persevered, and the Red River Colony came about. This, of course, opened new economic opportunities in the West, as well as causing a change in the conditions of life among these people. While these individuals had led to the country becoming known and, later, being settled, a great change took place in the way the population who had been here before lived. To add to the troubles, across the boundary to the South a great expansion was taking place as the theory of Manifest Destiny was applied. The attitude of traders and settlers below the 49th parallel and to the north of it was quite different. Whiskey traders made their way into the prairies of what is now Alberta. They caused considerable trouble to the traders who had been here for such a long time. They attempted to denude the prairies of the buffalo for their own benefit. At the same time, they supplied a great deal of liquor to the Indians and, when they had degraded them in this way, used this as an excuse to attempt to exterminate them. The attitude of many of these nefarious traders was that "the only good Indian is a dead one." From this attitude came the massacre near where Fort Walsh was afterward located. This was probably the main cause of the coming into being of the North West Mounted Police. When this Force was organized, it was largely because of the excellent choice of leaders that they were not only able to establish law and order, but also to help the Indians in their troubles.
Troubles they had, of course, because of the influx of white people into the prairie country, with the resultant decimation of their main source of food, the buffalo, and the fur-bearing animals being pushed farther back when these new people started to farm the land. It is rather difficult to single out all the leaders who helped so much in this work, but men like Commissioner French, Assistant Commissioner J.F. Macleod and Inspector W.D. Jarvis are a few. It was into this country that Edward Ainslie Braithwaite came, from England, when a young man, and it was in this country that he remained and dedicated his life. Edward Ainslie Braithwaite was born in Alne, Yorkshire, into a somewhat typical clergyman's family of those Victorian days. One member of the family won fame as a military leader, another became a canon in the Anglican Church, another became a professional man, a well-known doctor in Western Canada, and, yes, there was a "black sheep" in the family who went to the United States when he grew up. Edward Ainslie Braithwaite was born on February 16, 1862. His father, Reverend William Braithwaite, was an Anglican clergyman. His mother, Laura Elizabeth, nee Pioou, had been born in St. Helier, Island of Jersey, Channel Islands. When he was eleven years of age, his father died, in Yorkshire. His mother lived until 1916, when she died in Winchester, Hants., England. His brother, Sir Walter Braithwaite, predeceased him, after becoming a high-ranking officer in the British Army. Edward was educated at King's College, Bruton, Somerset, at Victoria College of Jersey, and at the United Services College at Westward Ho School in Bideford, Devonshire, where he shared a study with Rudyard Kipling. After this he went on to the study of medicine at King's College Hospital, London, England. For reasons of health he was not able to complete his work there. It was thought that he would be in better health in a drier climate.
So we find him in the year 1884 coming to Canada and enlisting in the North-West Mounted Police, in Winnipeg, with the regimental number 1025. He was sent to Headquarters at Regina. Here he was drilled as any other recruit, and, when his time came, he did the dishes the same as the rest. Breakages were not too frequent, though, as the dishes were made of tin, he used to remark. He was on fatigue duty, helping to rivet the bridge that connected Government House with the Barracks. In September, 1884, he was made an Acting Hospital Sergeant, and in December of that year he was confirmed in this rank. In March, 1885, the Senior Sergeant told him he was sending him in Medical Charge of Commissioner A.G. Irvine's Column in the historic trek from Regina to Prince Albert, during the Riel Rebellion. Dr. Braithwaite recalled the event: "I was neither competent nor qualified. Col. Irvine replied, 'Then I must send another doctor.' There were only about twelve doctors in the N.W. Territories, and I knew the only man he could get was a man who never drew a sober breath if he could help it. I thought, 'What a man to leave my comrades to,' so I said, 'If you will trust me, I will go out and do my best.' So I went. "On the journey up from Regina to Prince Albert I had twenty-two men snow-blind and one frozen from the knees down. I placed his feet in a hose bucket full of water and covered him with a horse blanket in the sleigh. His legs were saved, though he lost all his toes on both feet. The snow-blinded men were treated with tea leaves. At Humboldt, there was only one house. I took my cripples to it. Just as I got there I heard a voice say, 'You can't go in there, that is for the Commissioner.' "I replied, 'This is for the Hospital.' "A voice called out, 'You are quite right, Braithwaite. Carruthers (his man), pitch every tent.'" The next morning they were told that they had to cross at Clark's Crossing, where half-breeds had dug a lot of concealed rifle pits, and it would be very dangerous.
They started out, and as they went along courier after courier came to them telling them to go to Prince Albert, where there were about 3,000 people. He goes on, in his reminiscences: "After we had gone eight or ten miles we turned off and went to Prince Albert, where we were received by bonfires and cheers. We rested there one day. On the way up, we camped after dark, had breakfast, and waited for daylight to see we had not left anything. We lost one rifle on our way up." They left Prince Albert for Fort Carleton the next day, with about two hundred volunteers. Arriving at Carleton, his sleigh nearly upset at the gates. Whilst standing there, a man came up and asked him if he was the hospital sergeant. When he replied that he was, he was directed to the guardroom, where his improvised hospital was over the main gate. It is interesting to note that in J.P. Turner's "The North-West Mounted Police" he has this to say: "The wounded men, two of whom were beyond aid other than to make them as comfortable as possible, required immediate attention, and S/Sgt. E.A. Braithwaite improvised a hospital in an upper room above the main gate. Orders were given to pack as many stores as possible in the sleighs, the balance to be destroyed. Beds of hay were made in other sleighs for the wounded." A number of years ago, Dr. Braithwaite recalled that he had had to pull his instruments in a sleigh on the trip from Regina to Fort Carleton. He also had the following recollections of those days: "The men from Duck Lake (fight) had just arrived when we got there, eight wounded men. I never had my clothes off for three days and nights. On the third day it was decided to evacuate Carleton. Whilst getting ready, in taking the hay out of the mattresses, some got too near the stove and set the place on fire. In carrying Corporal Gilchrist out, I had the feet, Sgt. Major Dan the head and shoulders. Dan gave a warning shout and, in pulling me out, jerked, and the leg came out of its setting.
It was set again when we got to Prince Albert. "One man had been shot in the ribs and could not get out of bed. I told him, 'Get out or get burnt.' "When we got to Prince Albert it was found to be a round 'trade' bullet that, luckily, had run round the rib." On the trip to Prince Albert they had quite a difficult journey, because of the transportation of the wounded and the hill leading to Prince Albert. They remained in this centre for about three weeks, when they were sent to Hudson's Bay Crossing to bring in some wounded. From here he went to Batoche at the time the last battle was being fought. He arrived for about the last half hour of the fighting. After placing the wounded on a steamer to be taken to the Base Hospital, which was located where Saskatoon is now, he saw Riel, accompanied by an interpreter, and was told it was Riel's cook. After going back to Hudson's Bay Crossing and Fort Carleton, he was ordered back to Regina. On the way, near Touchwood, one of the horses went lame and they had to substitute an ox. At Qu'Appelle they got a replacement for the horse and, once again, started for Regina. They noticed a large number of Indians and, thinking at first that they were going for horses, were not too happy when they found that they were not going for horses but were on the warpath. They had to go very cautiously. Upon his arrival at Regina he found there were about 500 there, instead of the 19 he had left. Among his anecdotes of that time he told of a time when the men got "rambunctious" and he was continually having to repair their injuries after these fights in the barracks. He put one on charge and the man got three months in jail. When S/Sgt. Braithwaite was ordered to go to Wood Mountain and Lethbridge, he found that his Head Teamster was the same person he had caused to be incarcerated.
While he wondered at first what might happen, he found this man to be the most loyal assistant he could have had, and their friendship continued as long as they both lived. His remark following this is worth repeating: "This was the spirit of the N.W.M. Police. No matter how tough a man was, he was decent at heart." On his return to Regina he was sent to Maple Creek, as Doctor Haultain was off on his honeymoon. After three months he was returned to Regina, where he was put in Medical Charge of the Flying Patrol (K Division). After a time at Lethbridge, in 1886, Dr. Mewburn arrived as the Coal Company Doctor. S/Sgt. Braithwaite had been serving in Lethbridge at this time. K Division was transferred from Battleford to Fort Macleod and he was stationed there, during which time he was a victim of typhoid fever, when an epidemic struck the station. In 1887, he was transferred to Fort Saskatchewan, northeast of Edmonton. As far as Edmonton, they were a full Division: "....to take part in the Queen's Jubilee. We camped below the Big House, which was the Hudson Bay Factor's dwelling. On Sunday, we were marched to the English Church for Service .... "The next day was the Jubilee. I was appointed Officer Commanding Orderly. My own trooper was taken from me and I got a horse that would not go in the ranks. When the firing started my 'beautiful' steed bolted. After almost half a mile I got him back. Major Griesbach called out to me, "'Look out, you will kill someone (not me) with that horse.' "When they gave three cheers for the Queen he tried it again, but I had him in hand. "He (Griesbach) started off (to fire a 21-gun salute) and suddenly stopped to speak to some ladies. I shot past him like I was racing. Finally we arrived at the camp, the Old Hudson's Bay Fort. The Veterinary Sergeant came up to ask me how I liked my mount. I answered him in the language of the day and said I would never ride him again. "'No!'
he said, 'I would not if I was you. He killed a man in Calgary.'" While stationed here, the duties they were called upon to perform extended over a large territory. On one occasion they had to go to Grouard, on Lesser Slave Lake, about three hundred miles northwest of Fort Saskatchewan, to bring in two prisoners. An Indian woman had become insane and, according to Indian rules, she had to be killed by her husband and son. They went by team to Athabasca, about a hundred miles. From here they were pulled in boats up the rivers to Lesser Slave Lake. This lake is about 90 miles long and is subject to very violent storms. One of these almost cost them their lives. In addition to this hazard, they were stranded one night on a sandbar on their return. Thus the difficulties of duty in this area can be seen. While he was stationed at Fort Saskatchewan he used to ride into Edmonton every other day, attending patients in an office that he had in the Queen's Hotel. While serving in the N.W.M.P. he continued his medical studies at the Manitoba Medical College, which was affiliated with the University of Manitoba. He was admitted to the degree of Doctor of Medicine by the University of Manitoba in 1890. He took his discharge from the N.W.M.P. on May 6, 1892, with the rank of Staff Sergeant, and came to live in Edmonton, where he went into practice as a Physician and Surgeon. He was appointed acting surgeon to attend to the personnel of the North-West Mounted Police detachment at Edmonton. He was made the Health Officer of the Town of Edmonton and, later, the City of Edmonton, in 1892. He was also a Coroner for the North West Territories at Edmonton and, upon the formation of the Province of Alberta in 1905, he continued in this capacity, becoming the Chief Coroner and Medical Inspector for the Province of Alberta in 1932. He retired from this office a year before his death, in 1948. His record of nearly fifty-two years as a coroner is unequalled in Canada.
He presided at more than eight thousand inquests. The office of coroner and medical inspector has always been a highly responsible one and, in the early days, with long trips in the most inclement of weather, as well as the dangers of poor roads and the possibility of becoming lost, a highly hazardous one. This can be realized all the more if you take into consideration the poor conditions for travel in the large area to the north of Edmonton. It is due in large measure to the indefatigable work of Dr. Braithwaite that this important branch of medical supervision was established so soundly in the Province of Alberta. While he was a contract doctor with the N.W.M.P. from his retirement from active service, he was appointed full Honourary Surgeon in the Royal North West Mounted Police, with all the rights of that Office, in September, 1911. He served with the N.W.M.P., the R.N.W.M.P. and the R.C.M.P. for almost forty-eight years, having been awarded the Long Service Medal in 1927. His association with the R.C.M.P. extended over a period of 65 years. In 1892 he entered into private practice in Edmonton. It is interesting to note that among the many patients that he had in this city, the first native-born (that is, born in Alberta) Grand Master of the Grand Lodge of Alberta, A.F. & A.M., first saw the light of day with the assistance of Dr. Braithwaite. When this boy grew up he was Master of Edmonton Lodge #7, G.R.A., and had the pleasure and honour of presenting Dr. Braithwaite with his 50-year Jewel. In the early days, with Dr. Whitelaw, who later became the Health Officer for the City of Edmonton when he took over from Dr. Braithwaite, and Dr. Blais, who later became a Senator from Alberta, he used to go to St. Albert, where the first hospital was opened. There was no hospital in Edmonton itself for some time. When the General Hospital was opened in Edmonton, he had the first patient who was admitted to it.
When the rush to the Klondike took place, many started out from Edmonton to go there. As a result of this a railway was started to go from Edmonton to the Pacific by way of the Yukon. It was called the Edmonton, Yukon and Pacific. When they started to build it from Strathcona to Edmonton, he was appointed Medical Officer. At the time that the Canadian Northern Railway built into Edmonton, in 1905, they decided to buy the E.Y. & P. so as to make a quicker route to Calgary for their passenger service. At the same time, they appointed Dr. Braithwaite as their Medical Officer in Edmonton, and he continued in this work until about the time of the First Great War. He was made the first Commissioner of the St. John's Ambulance for the Province. While he had been a coroner for the Province of Alberta, in 1932 he was made Chief Coroner for the Province, as well as Medical Inspector of Hospitals. Because of his work in the medical field, and his interest in the Dominion Medical Council, he was chosen to represent Alberta on this Council. He was active in the Canadian Medical Association, being its President for a term. He enlisted in the Canadian Army Medical Corps at the beginning of the First Great War but was injured shortly afterwards and resumed his practice in Edmonton. During this war period he made it his policy not to accept any fees from the family of any enlisted man who came to him for medical services, if this man was overseas. In 1892 Dr. Braithwaite married Jennie E. Anderson, daughter of an Edmonton old-timer, T.A. Anderson, on November 30th. Unfortunately she died in 1914. When the Royal Alexandra Hospital was opened in Edmonton as the City Hospital, many of the furnishings for one of the wards were made by Dr. Braithwaite. He re-married on June 2, 1915, Ruth Somersall of Viking, Alberta. She survived him, and retired after his death to British Columbia. While his chief interest was Medicine, with the R.C.M.P.
running a close second, he took a little interest in politics, being a Conservative, and he was very interested in the Anglican Church, particularly All Saints Cathedral. His work in this regard was seen in the active part he took in this Cathedral. In 1895 he helped lay the foundation of a cathedral on the very site of the church in which his funeral was held. In Masonry he became one of the chief craftsmen in several branches of the work. In tribute to his services in the R.N.W.M.P. and in Medicine he was awarded the King's Jubilee Medal in 1935. He had a long and distinguished career in Freemasonry. When he arrived in Edmonton the only Lodge was Edmonton #53, G.R.M. Freemasonry in Edmonton had had a rather hesitant beginning. Saskatchewan Lodge #17, under the Grand Lodge of Manitoba, which took in all the area that is now Manitoba, Saskatchewan and Alberta, had been started before the Riel Rebellion. As a result of this Rebellion and the unsettled conditions around Edmonton, they had had to surrender their Charter. When things became more settled, and a steady growth started to take place in Edmonton, another Lodge was formed and is in existence to the present time. This was Edmonton Lodge #53, G.R.M. In January, 1897, another Lodge was formed on the south bank of the North Saskatchewan River, in Strathcona, a town that had sprung up as the result of the Canadian Pacific Railway running trains into it. This Lodge was also under the Grand Lodge of Manitoba and, with the assistance of the members of Edmonton Lodge #53, became Acacia Lodge #66 under the Grand Lodge of Manitoba. It was into Edmonton Lodge #53, G.R.M. that Edward Ainslie Braithwaite was initiated on May 19th, 1893, passed on July 7, 1893, and received his Third Degree on September 1, 1893. The interest that he showed in Freemasonry in those days abided with him as long as he lived. He was made Master of Edmonton Lodge #53 G.R.M. for the year 1898.
In 1899 he was the Grand Steward of the Grand Lodge of Manitoba and was elected the Grand Registrar in 1900. In 1901 he was elected Grand Senior Warden, Deputy Grand Master in 1902, and Grand Master in 1903. He affiliated with Northern Light Lodge #10 in Winnipeg, on November 15, 1906, from Edmonton Lodge #7, G.R.A. When the Grand Lodge of Alberta was formed in 1905, the year Alberta became a Province, he was the Senior Grand Master of the Grand Lodge of Alberta. He also took an active interest in Scottish Rite Freemasonry. He had become a member of the Scottish Rite in the Valley of Winnipeg previous to 1904. In 1904 he was a charter member, and the first Thrice Puissant Master, of the Lodge of Perfection of the Valley of Edmonton. He was also a charter member of the Mizpah Chapter of the Rose Croix in 1907. In addition to this, he was instrumental in the formation of the Alberta Consistory and was its first Commander-in-Chief, in 1910. For his outstanding service to the Scottish Rite he was coroneted a 33rd Degree Honourary Inspector-General at Winnipeg in 1911. He was elected to Active Membership in the Supreme Council at Hamilton in 1918, and on October 25, 1917 was appointed Illustrious Deputy for the Province of Alberta. He held this office until 1945, when he retired because of ill health. At this time he was retired to Past Active Rank. When he passed away, in 1949, he was the oldest member of the Supreme Council for the Dominion of Canada. He was also a member of Al Azhar Temple of the A.A.O.N.M.S. The message M. Worshipful Brother Edward Ainslie Braithwaite gave to the Grand Lodge of Manitoba at the Grand Session in 1904 is just as timely to-day as it was then: "...We find with every rising sun fresh evidence of settlement and of growth; mercantile and financial interests are striving to keep pace with the heavy demand, and the material as well as the spiritual forces in our beloved West are taxed to the utmost of their endeavour.
What shall Masonry do for the betterment of the West in this, its magnificent opportunity? Shall not the influence of the members of our Order be for the everlasting good till thousands rise with one sound to sing its praise?" On December 7, 1949, M. Worshipful Brother Dr. Edward Ainslie Braithwaite passed to the Grand Lodge Above, after a long illness, and in spite of the kind ministrations of his beloved wife. The funeral service was held on Saturday, December 10, 1949, at All Saints Cathedral. The Very Reverend A.M. Trendell, Dean of Edmonton, officiated, and interment followed in the family plot in the Edmonton Cemetery. There was a large attendance of his Masonic Brethren, and a guard of honour was formed by members of the R.C.M.P. as well as by members of the Masonic Order. Dean Trendell paid a special tribute to his memory, stating that "Doctor Braithwaite made a great and outstanding contribution to the history of Western Canada." His widow survived him and, after living for some time in Vancouver, is now in Winnipeg. When we look back over the life of this gentleman and Mason we are struck by the fact that he was truly the personification of brotherly love, relief and truth. In his duty he was meticulous and sympathetic, and he had a warm sense of humour. An incident comes to the writer's mind, as told by the late Medical Officer for the C.N.R. in Edmonton, Dr. Alexander. One Sunday afternoon a passenger train arrived in Edmonton during the day. On this train was a person who had been taken ill. One of the employees of the railway went to the Medical Officer's office to get some help. In this office was a list of the different Medical Officers who had held that position in Edmonton. The employee thought it was a list for emergency calls. At the top of the list was Dr. E.A. Braithwaite. He got the telephone number and called. He did not know that the doctor was over 85 years of age and had long since retired from that work. However, when he called, Dr.
Braithwaite called a taxi and went to the station, where he ministered to the sick person. In the present way of carrying on the practice of medicine, when everyone is sent to the Emergency Ward, this example of attachment to duty is almost astonishing. Such was the way Dr. Braithwaite carried on his duties. In the field of law and order in the new West his life was exemplary. Yet, there was always the feeling that the "velvet scabbard held a sword of steel." To-day, when we look at the vast organization of the Hospitals in Alberta, and at the wonderful progress that has been made and is being made in Medicine, we can get a little glimpse of the problems he had to meet in helping to get these fields organized in such a vast country, with so much change coming about in its settlement. It is in the whole-hearted effort that he put into improving these things that his real worth is seen. There were times when he was quite well-off with worldly goods, but his habit of helping anyone who could bring a plausible story cost him much of this. The encouragement he brought to the ill, and the sympathy to the sorrowing, will never be forgotten by those who knew him well. Any movement that was for the good of his neighbours or the country as a whole would always command his attention and assistance. Such you will find in the Order of St. John's Ambulance, the Canadian Medical Association and, above all, in Freemasonry, particularly in Western Canada. As Kelsey, the individualist, brought a knowledge of the Prairies, Henday a knowledge of the Mountains in the West, Hearne a knowledge of the Arctic Regions, Mackenzie, Fraser and Thompson a knowledge of the routes by which the West was opened, and Lord Selkirk a knowledge of the value of this land to our economy, so it is true that Dr. Braithwaite brought a knowledge of materialistic Medicine and spiritualistic Masonry to this West. He was an individual to whom the West, and particularly Alberta, is indebted.
We, of the present generation, and those who come after are the richer for Dr. Braithwaite's unselfish service. It can be truly said, with the Supreme Council: "He was a friend whose heart was good, Who walked with men and understood; His was the voice that spoke to cheer, And fell like music on the ear. His was a hand that asked no fee For friendliness or kindness done. And now that he has journeyed on, His is a fame that never ends; He leaves behind uncounted friends."
THE MINISTRY OF HIGHER AND SECONDARY SPECIAL EDUCATION OF THE REPUBLIC OF UZBEKISTAN THE UZBEK STATE WORLD LANGUAGES UNIVERSITY The first English philology faculty Master’s degree department Title: Cognitive aspects of lexicon in the light of the language picture of the world Done by: Tursunova Aziza Checked by: Tukhtakhojayeva Z.T Tashkent - 2011 Information access and exchange play a major role in our globalized world. Hence, building resources (lexica, thesauri, ontologies or annotated corpora) and providing access to words become an important goal. The lexicon is a vital resource for building applications. It is also a crucial element in the study of human language processing. The spirit of this workshop is multidisciplinary, the goal being to gather experts with various backgrounds and to allow them to exchange ideas, to compare their methodologies and theoretical perspectives, to create synergy, and to encourage future collaborations. In sum, the participants will be discussing questions concerning the cognitive aspects of the lexicon, and their answers should guide the design of on-line dictionaries. While completeness is a virtue, the quality of a dictionary depends not only on coverage (number of entries) and granularity, but also on accessibility of information. Access strategies vary with the task (text understanding vs. text production) and the knowledge available at the moment of consultation (word, concept, sound). Unlike readers, who look for meanings, writers start from them, searching for the ’right’ words. While paper dictionaries are static, permitting only limited strategies for accessing information, their electronic counterparts promise dynamic, proactive search via multiple criteria (meaning, sound, related word) and via diverse access routes. Navigation takes place in a huge conceptual-lexical space, and the results are displayable in a multitude of forms (as trees, as lists, as graphs, or sorted alphabetically, by topic, by frequency).
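The multi-criteria, degraded-input access just described can be illustrated with a minimal Python sketch. The toy lexicon, its glosses, and the function names here are invented for illustration only; the standard-library `difflib` module supplies the fuzzy matching:

```python
import difflib

# Toy lexicon, invented for illustration: each form carries a gloss and topic tags.
LEXICON = {
    "cat": {"gloss": "small domesticated feline", "topics": ["animal"]},
    "dog": {"gloss": "domesticated canine", "topics": ["animal"]},
    "oak": {"gloss": "large deciduous tree", "topics": ["plant"]},
}

def by_form(word):
    """Access by word form, falling back to fuzzy matching so that
    imprecise or degraded input still yields candidate entries."""
    if word in LEXICON:
        return [word]
    return difflib.get_close_matches(word, LEXICON, n=3, cutoff=0.6)

def by_meaning(keyword):
    """Onomasiological access: from a concept word to the matching forms,
    the direction a writer searching for the 'right' word needs."""
    return [form for form, entry in LEXICON.items() if keyword in entry["gloss"]]

print(by_form("catt"))     # degraded input still finds ['cat']
print(by_meaning("tree"))  # concept-driven lookup finds ['oak']
```

The point of the sketch is that one store supports several access routes, something a static paper dictionary cannot offer.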
Many lexicographers nowadays work with huge digital corpora, using language technology to build and to maintain the resource. But access to the potential wealth in dictionaries remains limited for the common user. Yet the new possibilities of electronic media in terms of comfort, speed and flexibility (multiple inputs, polymorphic outputs) are enormous and probably beyond our imagination. More than just allowing electronic versions of paper-bound dictionaries, computers provide the freedom to rethink dictionaries, thesauri, encyclopedias, etc., a distinction necessary in the past for economic reasons, but no longer justified. The goal of this workshop is to lay the groundwork for the next generation of electronic dictionaries, that is, to study the possibility of integrating the different resources, as well as to explore the feasibility of taking the users' needs, knowledge and access strategies into account. To reach this goal we have asked authors to address one or more of the following:

1. Conceptual input of a dictionary user: what is present in speakers'/writers' minds when they are generating a message and looking for a (target) word? Does the user have in mind conceptual primitives, semantically related words, some type of partial definition, something like synsets, or something completely different?

2. Access, navigation and search strategies: how can search be supported by taking into account prior, i.e. available, knowledge? Entries should be accessible in many ways: by word forms, by meaning, by sounds (syllables), or in a combined form, and this even if the input is incomplete, imprecise or degraded. The more precise the conceptual input, the less navigation should be needed, and vice versa. How can we create manageable search spaces, and provide a user with the tools for navigating within them?

3. Indexing words and organizing the lexicon: words and concepts can be organized in many ways, varying according to typology and conceptual systems. For example, words are traditionally organized alphabetically in Western languages, but by semantic radicals and stroke counts in Chinese. The way words and concepts are organized affects indexing and access. Indexing must robustly allow for multiple ways of navigation and access. What efficient organizational principles allow the greatest flexibility for access? What about lexical entry standardization? Are universal definitions possible? What about efforts such as the Lexical Markup Framework (LMF) and other global structures for the lexicon? Can ontologies be combined with standards for the lexicon?

4. NLP applications: contributors can also address the issue of how such enhanced dictionaries, once embedded in existing NLP applications, can boost performance and help solve lexical and textual-entailment problems such as those evaluated in SEMEVAL 2007, or, more generally, generation problems encountered in the context of summarization, question answering, interactive paraphrasing or translation.

We received 18 papers, of which 6 were accepted as full papers and 8 were chosen as poster presentations. While we did not get papers on all the issues mentioned in our call, we did get a rich panel of ideas as diverse as the use of ontologies; sense extraction; computation of associative responses to multi-word stimuli; saliency relations; lexical relationships within collocations and word association norms; cognitive organization of dictionaries; user-adapted views on a lexicographic database; access based on conceptual input; search in onomasiological dictionaries; access based on underspecified input; dictionary use for authoring aids or MT; and the use of feature vectors, corpora and machine learning. It was also interesting to see the variety of languages in which these issues are addressed.
The proposals range from Japanese, English, German, Russian, Dutch, Bulgarian, Romanian and Spanish to French and Chinese. In sum, the community working on dictionaries is dynamic, and there seems to be a growing awareness of the importance of some of the problems presented in our call for papers. We would like to express our sincerest thanks to all the specialists who assisted us in assuring a good selection of papers, despite the very tight schedule. Their reviews were helpful not only for us as decision makers, but also for the authors, helping them to improve their work. We hope that the results will inspire you, provoke fruitful discussions and lead to future collaborations.

Cognitively Salient Relations for Multilingual Lexicography

Providing sets of semantically related words in the lexical entries of an electronic dictionary should help language learners quickly understand the meaning of the target words. Relational information might also improve memorization, by allowing the generation of structured vocabulary study lists. However, an open issue is which semantic relations are cognitively most salient, and should therefore be used for dictionary construction. In this paper, we present a concept description elicitation experiment conducted with German and Italian speakers. The analysis of the experimental data suggests that there is a small set of concept-class-dependent relation types that are stable across languages and robust enough to allow discrimination across broad concept domains. Our further research will focus on harvesting instantiations of these classes from corpora. In electronic dictionaries, lexical entries can be enriched with hyperlinks to semantically related words. In particular, we focus here on those related words that can be seen as systematic properties of the target entry, i.e., the basic concepts that would be used to define the entry in relation to its superordinate category and coordinate concepts.
So, for example, for animals the most salient relations would be notions such as "parts" and "typical behavior". For a horse, salient properties will include the mane and hooves as parts, and neighing as behaviour. Sets of relevant and salient properties allow the user to collocate a word within its so-called "word field" and to distinguish it more clearly from neighbour concepts, since the meaning of a word is not defined in isolation, but in contrast to related words in its word field (Geckeler, 2002). Moreover, knowing the typical relations of concepts in different domains might help pedagogical lexicography to produce structured networks where, from each word, the learner can naturally access entries for other words that represent properties which are salient and distinctive for the target concept class (parts of animals, functions of tools, etc.). We envisage a natural application of this in the automated creation of structured vocabulary study lists. Finally, this knowledge might be used as a basis to populate lexical networks by building models of concepts in terms of "relation sketches" based on salient typed properties (when an animal is added to our lexicon, we know that we will have to search a corpus to extract its parts, behaviour, etc., whereas for a tool the function would be the most important property to mine). This paper provides a first step in the direction of dictionaries enriched with cognitively salient property descriptions by eliciting concept descriptions from subjects speaking different languages, and analysing the general patterns emerging from these data. It is worth distinguishing our approach to enriching connections in a lexical resource from the one based on free association, such as the one recently pursued, e.g., within the WordNet project (Boyd-Graber et al., 2006).
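The "relation sketch" idea in this passage can be made concrete as a small lookup table. The class-to-type mapping below is an invented sketch built from the paper's own examples (parts and behaviour for animals, function for tools), not the authors' actual resource:

```python
# Invented illustration of per-class "relation sketches": when a new entry of
# a known class is added to the lexicon, the sketch says which property types
# should be mined from a corpus to populate the entry's word-field links.
RELATION_SKETCHES = {
    "animal": ("part", "behaviour"),
    "flower": ("quality", "location"),
    "tool":   ("function", "part"),
}

def types_to_mine(concept_class):
    """Property types to harvest from a corpus for a new entry of this class."""
    return RELATION_SKETCHES.get(concept_class, ())

print(types_to_mine("animal"))  # ('part', 'behaviour')
```

A class with no sketch yet (say, emotions, which the conclusion flags as future work) simply yields nothing to mine.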
While we do not dispute the usefulness of free associates, they are irrelevant to our purposes, since we want to generate systematic, structured descriptions of concepts, in terms of the relation types that are most salient for their semantic fields. Knowing that the word Holland is "evoked" by the word tulip might be useful for other reasons, but it does not allow us to harvest systematic properties of flowers in order to populate their relation sketch: we rather want to find out that tulips, being flowers, will have color as a salient property type. As a location property of tulips, we would prefer something like garden instead of the name of a country or individual associations. To minimize free association, we asked participants in our experiments to produce concept descriptions in terms of characteristic properties of the target concepts (although we are not aware of systematic studies comparing free associates to concept description tasks, the latter methodology is fairly standard in cognitive science: see below). To our knowledge, this sort of approach has not yet been proposed in lexicography. Cognitive scientists focus on "concepts", glossing over the fact that what subjects will produce are (strings of) words, and as such they will be, at least to a certain extent, language-dependent. For lexicographic applications, this aspect cannot, of course, be ignored, in particular if the goal is to produce lexical entries for language learners (so that both their first and their second languages should be taken into account). We face this issue directly in the elicitation experiment we present here, in which salient relations for a set of 50 concepts from 10 different categories are collected from comparable groups of German and Italian speakers. In particular, we collected data from high school students in South Tyrol, a region situated in Northern Italy, inhabited by both German and Italian speakers.
Both German and Italian schools exist, where the respective non-native language is taught. It is important to stress that the two communities are relatively separate, and most speakers are not from bilingual families or bilingual social environments: they study the other language as an intensively taught L2 in school. Thus, we move in an ideal scenario to test possible language-driven differences in property descriptions, among speakers that have a very similar cultural background. South Tyrol also provides the concrete applicative goal of our project. In public administration and service, employees need to master both languages up to a certain standardized level (they have to pass a "bilingual" proficiency exam). Therefore, there is a great need for language learning materials. The practical outcome of our research will be an extension of ELDIT, an electronic learner's dictionary for German and Italian (Abel and Weber, 2000). Lexicographic projects providing semantic relations and experimental research on property generation are the basis for our research.

Information Access in Lexicography

In most paper-based general and learners' dictionaries, only some information about synonyms and sometimes antonyms is presented. Newer dictionaries, such as the "Longman Language Activator" (Summers, 1999), provide lists of related words. While these will be useful to learners, information about the kind of semantic relation is usually missing. Semantic relations are often available in electronic resources, most famously in WordNet (Fellbaum, 1998) and related projects like Kirrkirr (Jansz et al., 1999), ALEXIA (Chanier and Selva, 1998), or as described in Fontenelle (1997). However, these resources tend to include few relation types (hypernymy, meronymy, antonymy, etc.). The salience of the relations chosen is not verified experimentally, and the same set of relation types is used for all words that share the same part-of-speech.
Our results below, as well as work by Vinson et al. (2008), indicate that different concept classes should instead be characterized by different relation types (e.g., function is very salient for tools, but not at all for animals).

Work in Cognitive Sciences

Several projects addressed the collection of property generation data to provide the community with feature norms to be used in different psycholinguistic experiments and other analyses. Garrard et al. (2001) instructed subjects to complete phrases ("concept is/has/can..."), thus restricting the set of producible feature types. McRae et al. (2005) instructed their subjects to list concept properties without such restrictions, but provided them with some examples. Vinson et al. (2008) gave similar instructions, but explicitly asked subjects not to freely associate. However, these norms have been collected for the English language. It remains to be explored whether concept representations in general, and the semantic relations of our specific investigation, have the same properties across languages. After choosing the concept classes and appropriate concepts for the production experiment, concept descriptions were collected from participants. These were transcribed, normalized, and annotated with semantic relation types. The stimuli for the experiment consisted of 50 concrete concepts from 10 different classes (i.e., 5 concepts for each of the classes): mammal (dog, horse, rabbit, bear, monkey), bird (seagull, sparrow, woodpecker, owl, goose), fruit (apple, orange, pear, pineapple, cherry), vegetable (corn, onion, spinach, peas, potato), body part (eye, finger, head, leg, hand), clothing (chemise, jacket, sweater, shoes, socks), manipulable tool (comb, broom, sword, paintbrush, tongs), vehicle (bus, ship, airplane, train, truck), furniture (table, bed, chair, closet, armchair), and building (garage, bridge, skyscraper, church, tower). They were mainly taken from Garrard et al. (2001) and McRae et al. (2005).
The concepts were chosen so that they had unambiguous, reasonably monosemic lexical realizations in both target languages. The words representing these concepts were translated into the two target languages, German and Italian. A statistical analysis (using Tukey’s honestly significant difference test as implemented in the R toolkit 2) of word length distributions (within and across categories) showed no significant differences in either language. There were instead significant differences in the frequency of target words, as collected from the German, Italian and English WaCky corpora3. In particular, words of the class body part had significantly larger frequencies across languages than the words of the other classes (not surprisingly, the words eye, head and hand appear much more often in corpora than the other words in the stimuli list). The participants in the concept description experiment were students attending the last 3 years of a German or Italian high school and reported to be native speakers of the respective languages. 73 German and 69 Italian students participated in the experiment, with ages ranging between 15 and 19. The average age was 16.7 (standard deviation 0.92) for Germans and 16.8 (s.d. 0.70) for Italians. The experiment was conducted group-wise in schools. Each participant was provided with a random set of 25 concepts, each presented on a separate sheet of paper. To have an equal number of participants describing each concept, for each randomly matched subject pair the whole set of concepts was randomised and divided into 2 subsets. Each subject saw the target stimuli in his/her subset in a different random order (due to technical problems, the split was not always different across subject pairs). Short instructions were provided orally before the experiment, and repeated in written format on the front cover of the questionnaire booklet distributed to each subject. 
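The word-length check above was run with Tukey's HSD test in R. As a rough stand-in, the sketch below computes the one-way ANOVA F statistic that such a test builds on, over invented per-class letter counts; a small F (well below the critical value, roughly 3.9 at alpha = 0.05 for these degrees of freedom) is consistent with the reported finding of no significant length differences across classes:

```python
# Pure-Python one-way ANOVA F statistic, a rough stand-in for the Tukey HSD
# analysis mentioned above. The per-class letter counts below are invented.
def anova_f(groups):
    k = len(groups)                               # number of classes
    n = sum(len(g) for g in groups)               # total observations
    grand = sum(sum(g) for g in groups) / n       # grand mean
    means = [sum(g) / len(g) for g in groups]     # per-class means
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

lengths = [
    [3, 5, 6, 4, 5],    # invented letter counts for one class of words
    [5, 6, 4, 9, 6],    # another class
    [4, 5, 5, 10, 5],   # another class
]
print(round(anova_f(lengths), 3))  # 0.819
```

A real re-analysis would of course use the actual stimuli lists and a proper post-hoc test (e.g. statsmodels' `pairwise_tukeyhsd`) rather than this toy F statistic.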
To make the concept description task more natural, we suggested that participants should imagine a group of alien visitors, to each of which a particular word for a concrete object was unknown and thus had to be described. Participants should assume that each alien visitor knew all other words of the language apart from the unknown (target) word. Participants were asked to enter one descriptive phrase per line (not necessarily a whole sentence) and to write at least 4 phrases per word. They were given a maximum of one minute per concept, and they were not allowed to go back to the previous pages. Before the real experiment, subjects were presented with an example concept (not in the target list) and were encouraged to describe it and to ask for clarifications about the task. All subjects returned the questionnaire, so that, on average, each concept was described by a comparable number of German and Italian subjects.

Transcription and Normalization

The collected data were digitally transcribed, and responses were manually checked to make sure that phrases denoting different properties had been properly split. We tried to systematically apply the criterion that, if at least one participant produced 2 properties on separate lines, then the properties would always be split in the rest of the data set. However, this approach was not always equally applicable in both languages. For example, Transportmittel (German) and mezzo di trasporto (Italian) are both compounds used as hyponyms for what English speakers would probably rather classify as vehicles. In contrast to Transportmittel, mezzo di trasporto is splittable into mezzo, which can also be used on its own to refer to a kind of vehicle (and is defined more specifically by adding the fact that it is used for transportation). The German compound word also refers to the function of transportation, but -mittel has a rather general meaning, and would not be used alone to refer to a vehicle.
Hence, Transportmittel was kept as a whole and the Italian quasi-equivalent was split, possibly creating a bias between the two data sets (if the Italian string is split into mezzo and trasporto, these will later be classified as hypernym and functional features, respectively; if the German word is not split, it will only receive one of these type labels). More generally, note that in German compounds are written as single orthographic words, whereas in Italian the equivalent concepts are often expressed by several words. This could also create further bias in the data annotation and hence in the analysis. Data were then normalized and transcribed into English, before annotating the type of semantic relation. Normalization was done in accordance with McRae et al. (2005), using their feature norms as guidelines, and it included leaving out habituality words like "normally", "often", "most", etc., as they just express the typicality of the concept description, which is the implicit task.

Mapping to Relation Types

Normalized and translated phrases were subsequently labeled for relation types following McRae et al.'s criteria and using a subset of the semantic relation types described in Wu and Barsalou (2004): see section 4.1 below for the list of relations used in the current analysis. Trying to adapt the annotation style to that of McRae et al., we encountered some dubious cases. For example, in McRae et al.'s norms, carnivore is classified as a hypernym, but eats meat as a behavior, whereas they seem to us to convey essentially the same information. In this case, we decided to map both to eats meat (behavior). Among other surprising choices, the normalized phrase used for cargo is seen by McRae et al. as a function, but used by passengers is classified as denoting the participants in a situation. In this case, we followed their policy.
While we tried to be consistent in relation labelling within and across languages, it is likely that our own normalization and type mapping also include a number of inconsistencies, and our results must be interpreted by keeping this important caveat in mind. The average number of normalized phrases obtained for a concept is 5.24 (s.d. 1.82) for the German participants and 4.96 (s.d. 1.86) for the Italian participants; in total, for a concept in our set, the following number of phrases was obtained on average: 191.28 (German, s.d. 25.96) and 170.42 (Italian, s.d. 25.49). The distribution of property types is analyzed both class-independently and within each class (separately for German and Italian), and an unsupervised clustering analysis based on property types is conducted. We first look at the issue of how comparable the German and Italian data are, starting with a check of the overlap at the level of specific properties. There are 226 concept-property pairs that were produced by at least 10 German subjects; 260 pairs were produced by at least 10 Italians. Of these, 156 pairs (i.e., 69% of the German pairs and 60% of the Italian pairs) are shared across the two languages. This suggests that the two sets are quite similar, since the overlap of specific pairs is strongly affected by small differences in normalization (e.g., has a fur, has fur and is hairy count as completely different properties). Of greater interest to us is to check to what extent property types vary across languages and across concept classes. In order to focus on the main patterns emerging from the data, we limit our analysis to the 6 most common property types in the whole data set (which are also the top 6 types in each language separately), accounting for 69% of the overall responses.
These types are:

• (external) part (WB code: ece; "dog has 4 legs")
• (external) quality (WB code: ese; "apple is green")
• behaviour (WB code: eb; "dog barks")
• function (WB code: sf; "broom is for sweeping")
• location (WB code: sl; "skyscraper is found in cities")
• (superordinate) category ("dog is an animal")

Figure 1 compares the distribution of property types in the two languages via a mosaic plot (Meyer et al., 2006), where rectangles have areas proportional to observed frequencies in the corresponding cells. The overall distribution is very similar. The only significant differences pertain to the category and location types: both differences are significant at the level p < 0.0001, according to a Pearson residual test (Zeileis et al., 2005). For the difference in location, no clear pattern emerges from a qualitative analysis of German and Italian location properties. Regarding the difference in (superordinate) categories, we find, interestingly, a small set of more or less abstract hypernyms that are frequently produced by Italians, but never by Germans: construction (72), object (36), structure (16). In these cases, the Italian translations have subtle shades of meaning that make them more likely to be used than their German counterparts. For example, the Italian word oggetto ("object") is used somewhat more concretely than the extremely abstract German word Objekt (or English "object", for that matter) – in Italian, the word might carry more of an "artifact, man-made item" meaning. At the same time, oggetto is less colloquial than German Sache, and thus more amenable to being entered in a written definition. In addition, the category vehicle, among others, was more frequent in the Italian than in the German data set (one reason could be the difference between the German and Italian equivalents discussed in section 3.3). Differences of this sort remind us that property elicitation is first and foremost a verbal task, and as such it is constrained by language-specific usages.
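The cross-language comparison above starts from the overlap of frequent concept-property pairs, which computationally is a plain set intersection over normalized pairs. The pairs below are invented; the paper's real counts are 226 (German), 260 (Italian) and 156 shared:

```python
# Sketch of the cross-language pair-overlap computation: pairs produced by
# at least 10 subjects in each language, intersected. Pairs are invented.
german = {("dog", "barks"), ("dog", "has fur"), ("apple", "is green")}
italian = {("dog", "barks"), ("apple", "is green"),
           ("apple", "is sweet"), ("pear", "is sweet")}

shared = german & italian
print(len(shared),
      round(len(shared) / len(german), 2),   # share of German pairs
      round(len(shared) / len(italian), 2))  # share of Italian pairs
```

As the paper notes, this overlap understates the true similarity, since near-identical phrasings (has a fur vs. has fur) count as distinct pairs unless normalization merges them.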
It is left to future research to test to what extent linguistic constraints also affect deeper conceptual representations (would Italians be faster than Germans at recognizing superordinate properties of concepts when they are expressed non-verbally?). Despite the differences we just discussed, the main trend emerging is one of essential agreement between the two languages, and indicates that, with some caveats, salient property types may be cross-linguistically robust. We thus turn to the issue of how such types are distributed across concepts of different classes. This question is visually answered by the association plots on the following page. Each plot illustrates, through rectangle heights, how much each cell deviates from the value expected given the overall contingency tables (in our case, the reference contingency tables are the language-specific distributions). The sign of the deviation is coded by direction with respect to the baseline. For example, the first row of the left plot tells us, among other things, that in German behavior properties are strongly over-represented in mammals, whereas function properties are under-represented within this class. The first observation we can make about figure 2 is how, for both languages, a large proportion of cells show a significant departure from the overall distribution. This confirms what has already been observed and reported in the literature on English norms – see, in particular, Vinson et al. (2008): property types are highly distinctive characteristics of concept classes. The class-specific distributions are extremely similar in German and Italian. There is no single case in which the same cell deviates significantly but in opposite directions in the two languages; and the most common pattern by far is the one in which the two languages show the same deviation profile across cells, often with very similar effect sizes (compare, e.g., the behaviour and function columns).
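The association plots just described are built on Pearson residuals, (observed - expected) / sqrt(expected), of the class-by-type contingency table; cells with large absolute residuals are the ones flagged as over- or under-represented. A minimal computation with invented counts:

```python
# Pearson residuals of a class-by-type contingency table, the quantity
# visualized by the association plots described above. Counts are invented,
# but chosen to echo the reported pattern: behaviour over-represented for
# mammals, function over-represented for tools.
import math

table = {  # rows: concept class, cols: property type, values: counts
    "mammal": {"behaviour": 60, "function": 5},
    "tool":   {"behaviour": 10, "function": 70},
}
row_tot = {r: sum(c.values()) for r, c in table.items()}
col_tot = {}
for c in table.values():
    for t, v in c.items():
        col_tot[t] = col_tot.get(t, 0) + v
n = sum(row_tot.values())

residuals = {
    (r, t): (v - row_tot[r] * col_tot[t] / n) / math.sqrt(row_tot[r] * col_tot[t] / n)
    for r, c in table.items() for t, v in c.items()
}
print({k: round(v, 2) for k, v in residuals.items()})
# e.g. ('mammal', 'behaviour') comes out strongly positive (over-represented)
```

A residual beyond roughly +/-2 marks a cell that departs significantly from the expected frequency, which is the criterion behind the shaded cells in such plots.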
These results suggest that property types are not much affected by linguistic factors, an intrinsically interesting finding that also supports our idea of structuring relation-based navigation in a multi-lingual dictionary using concept-class-specific property types. The type patterns associated with specific concept classes are not particularly surprising, and they have already been observed in previous studies (Vinson and Vigliocco, 2008; Baroni and Lenci, 2008). In particular, living things (animals and plants) are characterised by a paucity of functional features, which instead characterise all man-made concepts. Within the living things, animals are characterised by typical behaviours (they bark, fly, etc.) and, to a lesser extent, parts (they have legs, wings, etc.), whereas plants are characterised by a wealth of qualities (they are sweet, yellow, etc.). Differences are less pronounced within man-made objects, but we can observe parts as typical of tool and furniture descriptions. Finally, location is a more typical definitional characteristic of buildings (for clothing, nothing stands out, if not, perhaps, the pronounced lack of association with typical locations). Body parts, interestingly, have a type profile that is very similar to the one of (manipulable) tools – manipulable objects are, after all, extensions of our bodies.

Clustering by Property Types

The distributional analysis presented in the previous section confirmed our main hypotheses – that property types are salient properties of concepts that differ from one concept class to another, but are robust across languages. However, we did not take skewing effects associated with specific concepts into account (e.g., it could be that the property profile we observe for body parts in figure 2 is really a deceiving average of completely opposite patterns associated with, say, heads and hands). Moreover, our analysis already assumed a division into classes – but the type patterns, e.g., of mammals and birds are very similar, suggesting that a higher-level "animal" class would be more appropriate when structuring concepts in terms of type profiles. We tackled both issues in an unsupervised clustering analysis of our 50 target concepts based on their property types. If the postulated classes are not internally coherent, they will not form coherent clusters. If some classes should be merged, they will cluster together. Concepts were represented as 6-dimensional vectors, with each dimension corresponding to one of the 6 common types discussed above, and the value on a dimension given by the number of times that concept triggered a response of the relevant type. We used the CLUTO toolkit, selecting the rbr method and setting all other clustering parameters to their default values. We explored partitions into 2 to 10 clusters, manually evaluating the output of each solution. Both in Italian and in German, the best results were obtained with a 3-way partition, neatly corresponding to the division into animals (mammals and birds), plants (vegetables and fruits) and objects plus body parts (which, as we observed above, have a distribution of types very similar to the one of tools). The 2-way solution resulted in merging the animal and plant classes, both in German and in Italian. The 4-way solution led to an arbitrary partition among objects and body parts (and not, as one could have expected, to separating objects from body parts). Similarly, the 5- to 10-way solutions involve increasingly granular but still arbitrary partitions within the objects/body parts class. However, one notable aspect is that in most cases almost all concepts of mammals and birds, and of vegetables and fruits, are clustered together (both in German and Italian), expressing their strong similarity in terms of property types as compared to the other classes as defined here.
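The 6-dimensional type-count representation used for clustering can be sketched as follows. The counts are invented, and cosine similarity stands in for CLUTO's rbr criterion, just to show why animal and tool profiles end up in different clusters:

```python
# Concepts as 6-dimensional vectors of property-type counts (invented data).
# High cosine similarity between two concepts means similar type profiles,
# the signal the clustering described above exploits.
import math

TYPES = ["part", "quality", "behaviour", "function", "location", "category"]
vectors = {
    "dog":   [30, 10, 50, 2, 8, 20],    # behaviour-heavy, function-poor
    "horse": [28, 12, 45, 6, 9, 18],    # similar animal profile
    "broom": [25, 8, 2, 60, 5, 15],     # function-heavy tool profile
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))  # hypot(*v): Python 3.8+

print(round(cosine(vectors["dog"], vectors["horse"]), 3))  # near 1
print(round(cosine(vectors["dog"], vectors["broom"]), 3))  # much lower
```

With profiles this different, any reasonable partitioning criterion will separate the animals from the tool, which mirrors the 3-way animals/plants/objects solution the paper reports.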
Looking at the 3-way solution in more detail, in Italian, the concept horse is in the same cluster with objects and body parts (as opposed to German, where the solution is perfect). The misclassification results mainly from the fact that for horse a lot of functional properties were obtained (which is a feature of objects), but none of them for the other animals in the Italian data. In German, some functional properties were assigned to both horse and dog, which might explain why it was not misclassified there. To conclude, the type profiles associated with animals, vegetables and objects/body parts have enough internal coherence that they robustly identify these macro-classes in both languages. Interestingly, a 3-way distinction of this sort – excluding body parts – is seen as fundamental on the basis of neuro-cognitive data by Caramazza and Shelton (1998). On the other hand, we did not find evidence that more granular distinctions could be made based on the few (6) and very general types we used. We plan to explore the distribution across the remaining types in the future (preliminary clustering experiments show that much more nuanced discriminations, even among all 10 categories, can be made if we use all types). However, for our applied purposes, it is sensible to focus on relatively coarse but well-defined classes, and on just a few common relation types (alternatively, we plan to combine types into superordinate ones, e. g. external and internal quality). This should simplify both the automatic harvesting of corpus-based properties of the target types and the structuring of the dictionary relational interface. Finally, the peculiar object-like behaviour of body parts on the one hand, and the special nature of horse, on the other, should remind us of how concept classification is not a trivial task, once we try to go beyond the most obvious categories typically studied by cognitive scientists – animals, plants, manipulable tools. 
From a lexicographic perspective, this problem cannot be avoided, and, indeed, the proposed approach should scale in difficulty to even trickier domains, such as those of actions or emotions. This research is part of a project that aims to investigate the cognitive salience of semantic relations for (pedagogical) lexicographic purposes. The resulting most salient relations are to be used for revising and adding to the word-field entries of a multilingual electronic dictionary in a language learning environment. We presented a multi-lingual concept description experiment. Participants produced different semantic relation type patterns across concept classes. Moreover, these patterns were robust across the two native languages studied in the experiment – even though a closer look at the data suggested that linguistic constraints might affect (verbalisations of) conceptual representations (and thus, to a certain extent, which properties are produced). This is a promising result to be used for automatically harvesting semantically related words for a given lexical entry of a concept class. However, the granularity of concept classes has to be defined. In addition, to yield a larger amount of usable data for the analysis, a re-mapping of the rare semantic relation types occurring in the actual data set should be conducted. Moreover, the stimuli set will have to be expanded to include, e.g., abstract concepts – although we hope to mine some abstract concept classes on the basis of the properties of our concept set (colors, for example, could be characterized by the concrete objects of which they are typical). To complement the production experiment results, we aim to conduct an experiment which investigates the perceptual salience of the produced semantic relations (and possibly additional ones), in order to detect inconsistencies between generation and retrieval of salient properties.
If, as we hope, we will find that essentially the same properties are salient for each class across languages and both in production and perception, we will then have a pretty strong argument to suggest that these are the relations one should focus on when populating multi-lingual dictionaries. Of course, the ultimate test of our approach will come from empirical evidence of the usefulness of our relation links to the language learner. This is, however, beyond the scope of the current project.
Treatment planning in completely edentulous arches (certified fixed orthodontic courses by Indian Dental Academy)

DIAGNOSIS AND TREATMENT PLANNING IN COMPLETELY EDENTULOUS ARCHES
Indian Dental Academy, leader in continuing dental education

Outline:
- Definitions of diagnosis and treatment planning
- General introduction of the patient
- Clinical history taking
- Clinical examination (extra-oral and intra-oral)
- Examination of existing dentures
- Other investigations for systemic disease

Diagnosis comprises the evaluation of the patient's health with respect to physical, mental and social well-being, and these diagnostic findings decide the treatment plan. Treatment planning is the most important milestone and depends on the diagnosis, so accurate diagnosis plays a very important role in ensuring predictable treatment results. Prognosis depends on both diagnosis and treatment planning.

DEFINITIONS OF DIAGNOSIS AND TREATMENT PLANNING
- Diagnosis is defined as the determination of the nature of a disease.
- Treatment planning is defined as the sequence of procedures planned for the treatment of a patient after diagnosis.
- Boucher: diagnosis consists of planned observations to determine and evaluate the existing conditions, which lead to decision making based on the conditions observed.
- Treatment plans should be developed to best serve the needs of each individual patient.
In practice, diagnosis is the examination of physical status, evaluation of mental or psychological make-up, and understanding of the needs of each patient to ensure a predictable outcome; treatment planning means developing the sequence of procedures planned for the treatment of the patient after diagnosis.

GENERAL INTRODUCTION OF THE PATIENT
The first appointment is important for developing mutual understanding and trust between patient and dentist. The patient should be addressed by name, and the dentist should verify the personal information collected by the receptionist. Observe the patient's motor skills, level of coordination and steadiness while walking: an unusual gait may indicate Parkinson's disease, a neurological disorder, or disease of the joints.
EVALUATION OF MENTAL ATTITUDE
Successful prosthodontic treatment depends on both technical skill and patient management according to mental attitude. Neurosis, a chronic anxiety state, alters neuromuscular coordination. Dr. M. M. House's classification of mental attitudes:
- Philosophical: ideal, co-operative, optimistic; good prognosis.
- Indifferent: least concerned about their oral health, not co-operative, avoid treatment.
- Critical: not satisfied with previous dentures and dentists.
- Skeptical: poor general health, unfavourable biomechanical conditions, pessimistic; needs patient motivation and education.
- Hysterical: poor health, nervous, unrealistic expectations, poor prognosis; needs education and motivation.

CLINICAL HISTORY TAKING
Diagnosis and treatment planning depend on accurate data collection and record maintenance.
- Name: patient identification and form of address.
- Sex: patient expectations of the denture differ with sex.
- Age: diseases are related to age; as age advances, adaptability, neuromuscular coordination and learning ability decrease, and the oral and facial tissues become lax.
- Socio-economic status.
- Chief complaint: for example difficulty in speech; also the cause of tooth loss, the period of edentulousness, problems with the existing denture, and expectations of the new denture.
- History of systemic diseases; previous medical records; date of and reason for the last visit to the physician; physician's telephone number.

Diabetes mellitus: impaired carbohydrate metabolism because of insulin deficiency or resistance. Take a history to rule out DM, including a drug history (insulin, oral anti-diabetic agents, diet control). Patients with DM may show (1) osteoporosis and (2) residual alveolar bone resorption. Educate the patient on denture cleanliness and oral hygiene and the need for regular check-ups; prefer a mucostatic impression technique and avoid surgical intervention.

Angina pectoris is severe ischaemic pain aggravated by exertion and relieved by rest: avoid anxiety and exertion, and obtain physician consultation. In patients with a history of myocardial infarction, avoid treatment for six months.
Physician consultation and reassurance of the patient help to reduce anxiety. Infective bacterial endocarditis: patients with artificial heart valves or valvular heart disease are prone to it; give prophylactic antibiotic therapy prior to surgical procedures.

Anaemia is a level of haemoglobin in the blood below normal. Types of anaemia:
- Iron-deficiency anaemia: increased loss of iron, increased physiological requirement, or malabsorption of iron as in hypochlorhydria. Oral manifestations: atrophic mucous membrane, loss of normal keratinisation.
- Megaloblastic anaemia: deficiency of vitamin B12 or folate. Oral manifestations: angular cheilitis.
- Pernicious anaemia: an autoimmune disorder with atrophic gastric mucosa and loss of parietal cells, hence deficiency of intrinsic factor and decreased vitamin B12 absorption. Bald tongue with atrophy of the papillae; burning sensation in the mouth.
- Sickle cell anaemia: a hereditary type of chronic haemolytic anaemia transmitted as a non-sex-linked trait. Radiographic features: mild to severe generalised osteoporosis, loss of trabeculation of the jaw bones with large irregular marrow spaces, and coarse trabeculation.

DISEASES INVOLVING WBCs
- Leukopenia: decrease in the number of WBCs.
- Agranulocytosis: a serious disease with a decrease in the number of granulocytes. Oral manifestations: necrotising ulcers, excessive bleeding.
- Leukaemias: characterised by progressive overproduction of WBCs, which appear in the circulating blood. Oral manifestations: petechiae, ulceration of the mucosa, purpuric lesions.

DISEASES OF PLATELETS
- Purpura: decrease in circulating blood platelets; may be an autoimmune disorder.
- Thrombocythemia: increase in circulating blood platelets.
Oral manifestations: petechiae on the oral mucosa, bleeding tendencies. Prevent cross-contamination, protect yourself and the assistant, and disinfect impressions.

DISEASES OF BONE AND JOINTS
Osteoarthritis affects the elderly above 45 years of age, with an M:F ratio of 2:1. This age-related degenerative joint disease less frequently affects the TMJ, mainly involving the weight-bearing joints. It is characterised by deterioration of the articular cartilage and remodelling of the underlying bone. Clinical features: pain and crepitation during mandibular movement; tender muscles of mastication.
In the advanced stage there is joint disability and muscle atrophy, with difficulty in wearing and cleaning the denture; impression making and jaw relation recording become difficult, and frequent occlusal corrections should be made.

Rheumatoid arthritis is an inflammatory disease affecting the joints. Clinical features: affects the small joints of the hands and feet symmetrically first, followed by the wrists, elbows, ankles and knees. TMJ involvement causes pain, crepitation, limited movements, stiffness, anterior open bite and increased vertical facial height.

Paget's disease of bone: a chronic disease of patients aged 40 years and older. Clinical features: bone pain, headache, deafness from compression of the cochlear nerve, blindness from involvement of the optic nerve, dizziness, facial paralysis, weakness and mental disturbance. The maxilla shows progressive enlargement and the alveolar ridge is widened; edentulous patients complain of inability to wear their dentures.

Achondroplasia: a disturbance of endochondral bone formation resulting in dwarfism; a hereditary condition transmitted as an autosomal trait. Clinical features: stature below 1.4 m, brachycephalic skull, bowed legs, small hands, stubby fingers, lumbar lordosis. Oral manifestations: retruded maxilla with relative mandibular prognathism, resulting in jaw discrepancies in size and relation.

CENTRAL NERVOUS SYSTEM
Conditions range from mild anxiety to anxiety neurosis, depression, phobias and disorientation. Severe cases require psychiatric consultation; otherwise patient motivation and reassurance, and longer appointments, are needed.

Epilepsy: take a drug history and a history of the last attack, precipitating factors, and the frequency and duration of seizures. In such patients avoid flickering lights and instruments which can cause harm.

Facial nerve palsy may result from cold, trauma, injection of local anaesthetic drugs, nerve impingement, or injury of the nerve during parotid surgery. Clinical features: unilateral facial paralysis, mask-like face, drooping of the corner of the mouth, inability to close the eyes, loss of forehead wrinkles. It causes difficulty in making impressions and in eating and speech. To avoid cheek biting, over-contour the denture base on the affected side and provide excessive horizontal overlap in the posteriors.

Parkinson's disease is a degenerative disease affecting the basal ganglia: dopaminergic output is decreased, so the inhibitory action on the subthalamic nucleus is decreased.
Clinical features: expressionless face with a staring look, soft rapid speech, fixed posture, impaired balance, altered gait, muscle rigidity, impaired fine movements, and tremors in the limbs. Impression making and jaw relation recording are difficult, and the patient should be educated about the difficulty in eating, speech and retaining the mandibular denture.

Trigeminal neuralgia involves the nerves supplying the face, teeth, jaws and associated structures. Clinical features: searing, stabbing, lancinating pain initiated by touching a trigger zone. In such patients prosthodontic treatment becomes difficult; patients should first be treated for the trigeminal neuralgia and then continue with prosthodontic treatment.

Menopause: changes in bodily functions occur during specific periods of life in both males and females; in females menopause is the period of cessation of menstruation. Post-menopausal syndrome: generalised osteoporosis, inability to adjust, burning tongue and a tendency to gag.

DISEASES OF SKIN WITH ORAL MANIFESTATIONS
Lichen planus. Oral manifestations: white or grey velvety thread-like papules in a linear, annular or retiform arrangement, forming typical lacy, reticular patches, rings and streaks over the buccal mucosa and, to a lesser extent, the tongue and palate (Wickham's striae). Erosive (premalignant), vesicular or bullous forms also cause a burning sensation.

Erythema multiforme: concentric ring-like vesiculobullous lesions (bull's eye). Hyperaemic macules, papules and vesicles become eroded or ulcerated and bleed freely. The tongue, palate, buccal mucosa and gingiva are commonly involved; the lip may show ulceration and bloody crusting.

Pemphigus: an autoimmune disease with intercellular antibodies in the epithelium of the skin and oral mucosa. A serious chronic disease with the appearance of vesicles; isolated vesiculobullous lesions rupture to leave oral lesions with ragged borders covered by a white, blood-tinged exudate followed by crusting. There is severe pain and burning sensation, with inability to eat. The patient should be informed about the existing condition and advised not to wear the dentures continuously.

Scleroderma is characterised by induration of the skin and fixation of the epidermis to the deeper subcutaneous tissues. Oral manifestations: thin, pale mucosa due to loss of vascularity and elasticity; a stiff, board-like tongue with restricted movements.
The lips are thin, rigid and partially fixed, with decreased mouth opening and distortion of the buccal and labial vestibules. This causes difficulty in impression making and jaw relation recording; post-insertion problems such as soreness and ulceration require constant adjustments and even remaking.

Sjögren's syndrome: an autoimmune disease characterised by keratoconjunctivitis sicca and xerostomia. Oral manifestations: xerostomia and burning sensation in the mouth.

Contact dermatitis: lesions occur on the skin and mucous membrane at a localised site after repeated contact with the causative agent; they may indicate systemic disease or an adverse drug reaction affecting the oral tissues.

Drug effects: xerostomia may be caused by antiparkinsonian drugs, antidepressants and atropine; a drug-induced Parkinson-like syndrome by tricyclic antidepressants; and behavioural changes and confusion by antidepressants, corticosteroids and antiparkinsonian drugs. Antihypertensives, antidepressants and centrally acting skeletal muscle relaxants are also of concern.

TONE OF FACIAL TISSUES
Muscle tone depends on the age and health of the patient. According to House, it is classified as:
- Class I: normal tone and placement of the facial muscles of mastication and expression.
- Class II: normal function but slightly decreased tone.
- Class III: decreased muscle tone and function.
Muscle tone matters for denture retention; normal tone and development are required for ease of treatment.

Skin of the face: dark, black, brown or blond; eyes blue, grey, brown or black. The colour of the skin guides shade selection of the teeth.

Lips are examined for cracks, fissures and ulcers. Adequate lip support is achieved by proper positioning of the upper anterior teeth; unsupported lips give a collapsed appearance with wrinkles around the lip. A long lip hides the denture and most of the teeth; a short lip leaves the teeth and denture base exposed.

Vertical face length: decreased vertical dimension gives a collapsed appearance with wrinkles and a false prognathic relation.
Increased vertical dimension gives a taut, strained appearance. TMJ examination: pain on opening and closing movements of the mandible, clicking sounds, crepitation, deviation of the mandible on opening, and limitation of mandibular movement. Centric relation depends upon the structural and functional harmony of the osseous structures, the intra-articular tissues, and the muscles. The lymph nodes are also examined.

INTRA-ORAL EXAMINATION
Oral mucous membrane: examined for inflammatory lesions and pathological lesions such as precancerous lesions, oral malignancies, papillary hyperplasia, epulis fissuratum and ulcers.

Evaluation of the residual alveolar ridge: the size of the maxilla and mandible determines the amount of denture-bearing area available. Arch size: large is ideal. Disharmony in jaw size: the maxillary arch may be larger than the mandibular or the reverse, because of the resorption pattern or disturbances in growth and development; occlusion should be planned to suit the disharmony. According to House, arch form is classified as square, tapering or ovoid.

RESIDUAL RIDGE FORM
Ridges are classified by height and width:
- High with parallel ridge slopes, well rounded and broad in width; high in height and average in width; high in height and thin in width.
- With resorption the ridge assumes average height: average height and broad width; average height and average width; low in height and broad in width.
- In severe resorption the ridge assumes a V shape, unfavourable for retention: high V-shaped, average V-shaped, low V-shaped.
- In very severe resorption the ridge becomes knife-edge shaped: high knife edge, average knife edge, low knife edge.
- Ridges may also be classified as high well rounded or low well rounded.

Ridge relation refers to the relative parallelism between the planes of the ridges:
- Class I: both ridges are parallel to the occlusal plane.
- Class II: the mandibular plane diverges from the occlusal plane.
- Class III: either the maxillary ridge diverges from the occlusal plane anteriorly, or both ridges diverge.
INTER-ARCH SPACE
The inter-arch space should be adequate for the dentures; excessive inter-arch space is unfavourable.

SAGITTAL PROFILE OF THE RESIDUAL RIDGE
It is important to locate where the mandibular ridge slopes up towards the retromolar pad and ramus, because occlusal contacts immediately above the incline at the back part of the residual alveolar ridge will cause the denture to slide forward. Bony undercuts do not play any role in retention of the denture. Note bony irregularities such as sharp bony spicules.

SOFT TISSUE EXAMINATION
Mucosal thickness, according to House, is classified as:
- Class I: normal uniform thickness, approximately 1 mm.
- Class II: soft tissue with a thin investing membrane, or mucous membrane up to twice the normal thickness.
- Class III: soft tissue with an excessively thick investing membrane.

Muscle and frenal attachments are examined in relation to the crest of the ridge because they can interfere with denture extension and border seal. House classified border attachments:
- Class I: at least 0.5 inches between the attachment and the ridge crest.
- Class II: 0.25 to 0.5 inches between the attachment and the ridge crest.
- Class III: attachment at or near the crest.
Attachments may thus lie away from the crest, nearer to the crest, or at the crest. The floor of the mouth is classified according to Neil.

EXAMINATION OF THE TONGUE
House classified the tongue: Class I, normal development and function; Class II, change in form and function. Tongue size can be normal or hypertrophic. Tongue position, classified by Wright:
- Class I: the tongue lies in the floor of the mouth with the tip forward and slightly below the incisal edges of the mandibular anterior teeth.
- Class II: the tongue is flattened and broadened, but the tip is in a normal position.
- Class III: the tongue is retracted and depressed into the floor of the mouth with the tip curled upward into the body of the tongue.
The Class I position is ideal, with the floor of the mouth at an adequate height so that the lingual border contacts it and maintains the seal. In Classes II and III the floor of the mouth is low.

Saliva: thin serous saliva in normal quantity is favourable for retention; thick, ropy mucous saliva decreases retention and stability.

Gag reflex: a normal defence mechanism designed to prevent foreign bodies from entering the trachea; it ranges from mild choking to retching.
Causes of gagging include anatomical variation, psychological factors, and systemic disorders.

HARD AND SOFT TISSUES IN THE MAXILLARY ARCH
Soft tissue covering the residual alveolar ridge and palate: ideally of uniform thickness, quite firm and resilient. Look for fibrous enlargement of the maxillary tuberosity and papillary hyperplasia of the palate.

Palatal vault forms: U-shaped, with parallel ridge slopes and a broad base; flat palate, with a broad base and low ridge slopes; and the V-shaped vault, with greater vertical than horizontal dimension, which is less favourable.

Soft palate classification: Class I, a gradually curving soft palate, is favourable, offering more tissue coverage for the posterior palatal seal area; Class II, the soft palate turns down at a 45 degree angle; Class III, the soft palate turns down at a 70 degree angle just posterior to the hard palate.

Torus palatinus: a bony enlargement at the midline of the hard palate, ranging in size from a small peanut to one reaching the occlusal plane, covered by thin mucosa. Surgical removal is advised if it extends near the vibrating line; otherwise the denture is kept about 2 to 3 mm short of it. Absence of the tuberosity, for example from excessive surgical reduction, results in an inadequate posterior palatal seal of the maxillary denture.

HARD AND SOFT AREAS IN THE MANDIBULAR ARCH
Soft tissue: a fibrous, cord-like soft tissue ridge in severely resorbed mandibles. Hard tissue: bony protuberances on the lingual aspect of the mandible, and the genial tubercles.

EXAMINATION OF EXISTING DENTURES
The mucosa is examined for pathological changes. In a study by Ostlund in 1953, 77% of denture-wearing patients showed histological changes even though the mucosa appeared clinically normal. Check centric relation and centric occlusion, premature contacts, sliding, and the type of teeth.

RADIOGRAPHIC EXAMINATION
Panoramic radiographs play an important role in diagnosis and treatment planning for completely edentulous patients. A study by Syropoulos N.D. and Patsakas A.J. (1981) examined residual alveolar ridge resorption. Mandibular residual ridge resorption can be classified as:
- Class I: up to 1/3 of the original vertical height lost.
- Class II: from 1/3 to 2/3 of the original vertical height lost.
- Class III: 2/3 or more of the original vertical height lost.

Radiographic classification of bone density by Misch: dense cortical bone; porous cortical bone; coarse trabecular bone; fine trabecular bone. Radiographs are also used to study the location of anatomic structures.
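The three-class resorption scheme above is a simple thresholding of the fraction of original vertical ridge height lost. A minimal sketch (the function name and the millimetre inputs are illustrative, not from the source):

```python
def resorption_class(original_height_mm, current_height_mm):
    """Classify mandibular residual-ridge resorption from vertical height.

    Thresholds follow the classification quoted in the text: less than
    one third of the original height lost is Class I, one third to two
    thirds is Class II, and two thirds or more is Class III.
    """
    lost = (original_height_mm - current_height_mm) / original_height_mm
    if lost < 1 / 3:
        return "Class I"
    if lost < 2 / 3:
        return "Class II"
    return "Class III"
```

For example, a ridge reduced from 30 mm to 15 mm has lost half its original height and therefore falls in Class II.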
Tomography: a specialised technique that produces detailed images of structures in a predetermined plane while blurring structures outside that plane.
- Classic tomography: several exposures of a selected area at arbitrary intervals or sections; the lateral, medial and central parts of the joint can each be imaged.
- Computed tomography: scanning of a well-defined area; the computer analyses X-ray absorption at many different points and converts the data into an image on a video screen. It allows gross determination of the condyle-disc relation.
- Magnetic resonance imaging: aids the evaluation of anatomy and relationships in the absence of radiation.

Imaging serves to evaluate the foregoing findings, diagnose missed findings, confirm clinical findings, measure and determine relations to other structures, and support decisions about preprosthetic surgery.

Pre-extraction records: photographs showing the natural teeth; diagnostic casts and radiographs obtained from another dentist. Using the patient's existing dentures, impressions are made and diagnostic casts poured. With a tentative centric relation and face-bow record, mount the maxillary cast on an adjustable articulator and orient the mandibular cast with the centric relation record. Check vertical dimension, centric relation and centric occlusion.

OTHER INVESTIGATIVE PROCEDURES
Investigations to rule out diabetes mellitus; recording of the patient's blood pressure; and, for any intra-oral or extra-oral lesion, biopsy with histopathological examination.

TREATMENT PLANNING
The treatment plan should specify the treatment procedures, operating time, laboratory time, calendar time and fees, so that the patient's informed consent can be obtained.

Treatment plan for completely edentulous patients:
- Adjunctive care: patient education and motivation; elimination of infection; elimination of pathoses; treatment of abused tissues.
- Prosthodontic care: conventional complete denture, or implant-supported complete denture.

PATIENT EDUCATION
Information about dental health and its effect on the denture; limitations of complete dentures; problems associated with complete dentures initially; importance of oral and denture hygiene; need for regular check-ups; convincing the patient about the treatment procedure, the need for surgical treatment, the time required, and the fees.
Motivation of the patient. Diet counselling: a diet rich in proteins, calcium, vitamins and minerals, and low in calories; if required, refer to a dietician or physician.

NON-SURGICAL METHODS OF TREATING ABUSED TISSUES
- Resting the denture-supporting tissues.
- Occlusal correction and establishing the vertical height.
- Refitting the dentures.
- Drugs to eliminate infection.
- Jaw exercises.

SURGICAL METHODS
1) Correction of hyperplastic ridge tissue, epulis fissuratum, papillomatosis, and hyperplastic pendulous tuberosity. Indications: no response to non-surgical treatment; interference with stability. Excision of the tissues with vestibuloplasty, or electrosurgery.
2) Frenal attachments: a broad fibrous maxillary labial frenum, lingual tongue-tie, or prominent buccal frena. Indication: attachment near the crest of the ridge.
3) Papillary hyperplasia: small lesions are removed with sharp curettes or electrosurgery; large lesions by split-thickness supraperiosteal excision.
4) Vestibuloplasty: restores ridge height by lowering the muscle attachments and attached mucosa.

FABRICATION OF THE COMPLETE DENTURE
Conventional complete denture, or an implant-supported denture where there is a previous history of failure with conventional complete dentures, the patient has compromised motor skills or advanced residual ridge resorption, or the patient does not like to wear conventional dentures.

SUMMARY
Diagnosis encompasses history taking (past dental and medical history, the patient's expectations, and the patient's mental attitude) and examination of the patient from the moment he enters the clinic, beginning with the collection of personal information and then examination of the extra-oral and intra-oral hard and soft tissue structures, subjecting the patient to the required investigations to confirm the diagnostic findings, and referring the patient to other specialists when necessary. On the basis of the diagnostic findings, the treatment plan is framed. Diagnosis and treatment planning form the first important milestone for the successful accomplishment of treatment and a favourable prognosis, as potential problems are identified and the treatment plan is framed accordingly.
REFERENCES
- Boucher's Prosthodontic Treatment for Edentulous Patients, 11th edition.
- Prosthodontic Treatment for Edentulous Patients, by Zarb and Bolender.
- Essentials of Complete Denture Prosthodontics, by Winkler.
- Syllabus of Complete Dentures, by Heartwell, 4th edition.
- Complete Denture Prosthodontics, by John Joy Manappallil.
- Color Atlas of Common Oral Diseases, by Craig S. Miller.
- The Temporomandibular Joint and Related Orofacial Disorders.
- A Textbook of Oral Pathology, by Shafer, 4th edition.
- Davidson's Principles and Practice of Medicine.
- DCNA 1977, complete dentures.
- BDJ, volume 188, no. 7, April 8, 2000, on complete dentures.
- Buckly G.A. Diagnostic factors in the choice of impression materials and methods. JPD, March 1995.
- Spring R.H. Diagnostic procedures: the patient's existing dentures. JPD 1983;49(2):153.
- Syropoulos N.D., Patsakas A.J. (1981). Findings from radiographs of the jaws of edentulous patients. Oral Surgery.
- Studies of residual alveolar ridge resorption, part 1: use of panoramic radiographs for evaluation and classification of mandibular resorption. JPD, July 1974;32(1):7-12.
Indian Dental Academy, leader in continuing dental education.
About this schools Wikipedia selection
The articles in this Schools selection have been arranged by curriculum topic thanks to SOS Children volunteers.

Population: approximately 1,200 (2001)
Area: 1,900 km² / 730 sq mi
Latitude: 46° 30' S
Longitude: 169° 30' E
Regions: Otago and Southland
Districts: Clutha District and Southland District
Main towns and settlements: Owaka, Waikawa, Kaka Point, Fortrose

The Catlins (sometimes referred to as The Catlins Coast) comprises an area in the southeastern corner of the South Island of New Zealand. The area is between Balclutha and Invercargill, and is in both the Otago and Southland regions. It includes the South Island's southernmost point, Slope Point. The Catlins, a rugged, sparsely populated area, features a scenic coastal landscape and dense temperate rainforest, both of which are home to many endangered species of birds. Its exposed location leads to its frequently wild weather and heavy ocean swells, which are an attraction to big-wave surfers. Ecotourism has become of growing importance in the Catlins economy, which otherwise relies heavily on dairy farming and fishing. The region's early whaling and forestry industries have long since died away, along with the coastal shipping that led to several tragic shipwrecks. Only some 1,200 people now live in the area, many of them in the settlement of Owaka. The Catlins area covers some 1900 km² (730 sq mi) and forms a rough triangular shape, extending up to 50 km (30 mi) inland and along a stretch of coast 90 km (60 mi) in extent. It is bounded to the northeast and west by the mouths of two large rivers, the Clutha River in the northeast and the Mataura River in the west. To the north and northwest, the rough bush-clad hills give way to rolling pastoral countryside drained and softened by the actions of tributaries of these two rivers such as the Pomahaka River.
The Catlins boasts a rugged, scenic coastline. Natural features include sandy beaches, blowholes, a petrified forest at Curio Bay, and the Cathedral Caves, which visitors can reach at low tide. Much of the coastline is high cliff, with several faces over 150 m (500 ft) in height, and the land rises sharply from the coast at most points. For this reason, many of the area's rivers cascade over waterfalls as they approach the ocean (notably the iconic Purakaunui Falls on the short Purakaunui River). The South Island's southernmost point, Slope Point, projects near the southwestern corner of the Catlins. To the west of this lies Waipapa Point, often considered the boundary of the Catlins region, beyond which lies the swampy land around the mouth of the Mataura River at the eastern end of Toetoes Bay. The western boundary of the Catlins region is not well-defined, however, and some more stringent definitions exclude even Slope Point. Several parallel hill ranges dominate the interior of the Catlins, separated by the valleys of the Owaka, Catlins and Tahakopa Rivers, which all drain southeastwards into the Pacific Ocean. The most notable of these ranges is the Maclennan Range. Between them, these hills are often simply referred to as the Catlins Ranges. Their northwestern slopes are drained by several tributaries of the Clutha and Mataura Rivers, most notably the Mokoreta River, which flows mainly westwards, reaching the Mataura close to the town of Wyndham. The highest point in the Catlins, Mount Pye — 720 m (2361 ft) — stands 25 km (15 mi) north-northeast of Waikawa and close to the source of the Mokoreta River, and marks part of the Otago-Southland border. Other prominent peaks above 600 m (2000 ft) include Mount Rosebery, Catlins Cone, Mount Tautuku, and Ajax Hill. The Catlins has several small lakes, notably scenic Lake Wilkie close to the Tautuku Peninsula. Catlins Lake, near Owaka, is actually the tidal estuary of the Catlins River. 
Shipping has found the Catlins coast notoriously dangerous, and there have been many shipwrecks on the headlands that jut into the Pacific Ocean here. Two lighthouses stand at opposite ends of the Catlins to help prevent further mishaps. The Nugget Point lighthouse stands 76 m (250 ft) above the water at the end of Nugget Point, casting its light across a series of eroded stacks (the "nuggets" which give the point its name). It was built in 1869–70. The Waipapa Point light, which stands only 21 m (70 ft) above sea level, was the last wooden lighthouse to be built in New Zealand, and was constructed in 1884 in response to the tragic 1881 wreck of the Tararua. Both of these lighthouses are now fully automated. Due to its position at the southern tip of New Zealand, the Catlins coastline lies exposed to some of the country's largest ocean swells, often over 5 m (16 ft). Big wave surfing is developing into a regional attraction, with regular competitions and feats like Dunedin surfer Doug Young's award-winning 11 m (36 ft) wave in 2003 gathering publicity for the sport. The Catlins has a cool maritime temperate climate, somewhat cooler than other parts of the South Island, and strongly modified by the effect of the Pacific Ocean. Winds can be strong, especially on the exposed coast; most of the South Island's storms develop to the south or southwest of the island, and thus the Catlins catches the brunt of many of these weather patterns. The Catlins — and especially its central and southern areas — experiences considerably higher precipitation than most of the South Island's east coast; heavy rain is infrequent, but drizzle is common and 200 days of rain in a year is not unusual. Rain days are spread fairly evenly throughout the year; there is no particularly rainy season in the northern Catlins, and only a slight tendency towards more autumn rain in the southwest. 
The average annual rainfall recorded at the Tautuku Outdoor Education Centre is about 1,300 mm (51 in), with little variation from year to year. Fine days can be sunny and warm, and daily maxima may exceed 30 °C (86 °F) in midsummer (January-February). A more usual daily maximum in summer would be 18–20 °C (64–68 °F). Snow is rare except on the peaks even in the coldest part of winter, though frost is quite common during the months of June to September. Typical daily maximum temperatures in winter are 10–13 °C (50–55 °F). The first people known to live in the Catlins, Māori of the Kāti Mamoe, Waitaha, and Kāi Tahu iwi, merged via marriage and conquest into the iwi now known as Kāi Tahu. Archaeological evidence of human presence dates back to AD 1000. The area's inhabitants were semi-nomadic, travelling from Stewart Island/Rakiura in the south and inland to Central Otago. They generally dwelt near river mouths for easy access to the best food resources. In legend, the Catlins forests further inland were inhabited by Maeroero (wild giants). The Catlins offered one of the last places where the giant flightless bird, the moa, could be readily hunted, and the timber of the forest was ideal for canoe construction (the name of the settlement Owaka means "Place of the canoe"). No formal Māori pa were located in the Catlins, but there were many hunting camps, notably at Papatowai, near the mouth of the Tahakopa River. Europeans first sighted the area in 1770 when the crew of James Cook's Endeavour sailed along the coast. Cook named a bay in the Catlins area Molineux's Harbour after his ship's master Robert Molineux. Although this was almost certainly the mouth of the Waikawa River, the name was later applied to a bay to the northeast, close to the mouth of the Clutha River, which itself was for many years known as the Molyneux River.
Sealers and whalers founded the first European settlements in the early years of the 19th century, at which time the hunting of marine mammals was the principal economic activity in New Zealand. A whaling station was established on the Tautuku Peninsula in 1839, with smaller stations at Waikawa and close to the mouth of the Clutha River. The Catlins take their name from the Catlins River, itself named for Captain Edward Cattlin (sometimes spelt Catlin), a whaler who purchased an extensive block of land along the Catlins River on February 15 1840 from Kāi Tahu chief Hone Tuhawaiki (also known as "Bloody Jack") for muskets and £30 (roughly NZ$3000 in 2005 dollars). New Zealand's land commissioners declined to endorse the purchase and much of the land was returned to the Māori after long negotiations ending over a decade after Cattlin's death. During the mid-19th century the area developed into a major saw-milling region, shipping much of the resultant timber north to the newly-developing town of Dunedin from the ports of Waikawa and Fortrose. A 200 ft (60 m) long jetty was built at Fortrose in 1875, although this has long since disappeared. Several shipwrecks occurred along the treacherous coastline during this period. Most notably, one of New Zealand's worst shipping disasters occurred here: the wreck of the passenger-steamer Tararua, en route from Bluff to Port Chalmers, which foundered off Waipapa Point on April 29 1881 with the loss of all but 20 of the 151 people aboard. Another noted shipwreck, that of the Surat, occurred on New Year's Day in 1874. This ship, holed on rocks near Chasland's Mistake eight kilometres southeast of Tautuku Peninsula, limped as far as the mouth of the Catlins River before orders were given to abandon ship. A beach at the mouth of the Catlins River is named Surat Bay in commemoration of this wreck.
The schooner Wallace and the steamer Otago were also wrecked at or near Chasland's Mistake, in 1866 and 1876 respectively, and a 4,600-tonne steamer, the Manuka, ran aground at Long Point, north of Tautuku, in 1929. From the time of the Great Depression until the formation of the New Zealand Rabbit Board in 1954, rabbits were a major pest in the area, and rabbiters were employed to keep the creatures under control. The trapping of rabbits and auctioning of their skins in Dunedin became a minor but important part of the Catlins economy during this time. After a decline in the 1890s, the logging of native timber expanded into new areas made accessible by an extension of the railway, before petering out in the mid-20th century. One nail in the industry's coffin came with a series of bush fires which destroyed several mills in 1935. Much of the remaining forest is now protected by the New Zealand Department of Conservation as part of the Catlins Forest Park. The Catlins coast often hosts New Zealand Fur Seals and Hooker's Sea Lions, and Southern Elephant Seals can occasionally be seen. Several species of penguin also nest along the coast, notably the rare Yellow-eyed Penguin (Hoiho), as do mollymawks and Australasian Gannets, and the estuaries of the rivers are home to herons, stilts, godwits and oystercatchers. Bitterns and the threatened Fernbird (Matata) can also occasionally be seen along the reedy riverbanks. In the forests, endangered birds such as the yellowhead (mohua) and kakariki (New Zealand parakeet) occur, as do the tui, fantail (piwakawaka), and kererū (New Zealand pigeon). One of New Zealand's two species of non-marine mammal, the Long-tailed Bat, is found in small numbers within the forests, and several species of lizard also occur locally, the most numerous being the Common Gecko. Many species of fish, shellfish, and crustaceans frequent both the local rivers and the sea, notably crayfish and paua.
Nugget Point in the northern Catlins is host to a particularly rich variety of marine wildlife. The proposed establishment of a marine reserve off the coast here has, however, proved controversial. Hector's Dolphins can often be seen close to the Catlins coast, especially at Porpoise Bay near Waikawa. The Catlins features dense temperate rainforest dominated by podocarps, covering some 600 km² (230 sq mi) of the area. The forest is thick with trees such as Rimu, Totara, Silver Beech, Matai and Kahikatea. Of particular note are the virgin Rimu and Totara forest remaining in those areas which were too rugged or steep to have been milled by early settlers, and an extensive area of Silver Beech forest close to the Tahakopa River. This is New Zealand's most southerly expanse of beech forest. Many native species of forest plant can be found in the undergrowth of the Catlins forest, including young Lancewoods, orchids such as the Spider Orchid and Perching Easter Orchid, and many different native ferns. Settlers cleared much of the Catlins' coastal vegetation for farmland, but there are still areas where the original coastal plant life survives, primarily around cliff edges and some of the bays close to the Tautuku Peninsula, these being furthest from the landward edges of the forest. Plant life here includes many native species adapted to the strong salt-laden winds of this exposed region. The Catlins coastal daisy (Celmisia lindsayii) is unique to the region and is related to New Zealand's mountain daisies. Tussocks, hebes, and flaxes are a common sight, as are native gentians, though the endangered native sedge pingao is now rarely found. In years when the Southern rātā flowers well, the coastal forest canopy turns bright red. The rātā also thrives in some inland areas. The parallel hill ranges of the Catlins form part of the Murihiku terrane, an accretion which extends inland through the Hokonui Hills as far west as Mossburn.
This itself forms part of a larger system known as the Southland Syncline, which is linked to similar formations in Nelson (offset by the Alpine Fault) and even New Caledonia, 3,500 km (2,200 mi) away. The Catlins ranges are strike ridges composed of Triassic and Jurassic sandstones, mudstones and other related sedimentary rocks, often with a high incidence of feldspar. Fossils of the late and middle Triassic Warepan and Kaihikuan stages are found in the area. Curio Bay features the petrified remains of a forest 160 million years old. This is a remnant of the subtropical woodland that once covered the region, only to become submerged by the sea. The fossilised remnants of trees closely related to modern Kauri and Norfolk Pine can be seen here.

Population and demographics

The Catlins area has very few inhabitants; the region as a whole has a population of only some 1,200 people. Almost all of the Catlins' population lies either close to the route of the former State Highway running from Balclutha to Invercargill (which now forms part of the Southern Scenic Route) or in numerous tiny coastal settlements, most of which have only a few dozen inhabitants. The largest town in the Catlins, Owaka, has a population of about 400. It is located 35 km (20 mi) southwest of Balclutha. The only other settlements of any great size are Kaka Point (population 150), Waikawa and Fortrose, which lies at the western edge of the Catlins on the estuary of the Mataura River. Most of the area's other settlements are either little more than farming communities (such as Romahapa, Maclennan, and Glenomaru) or seasonally populated holiday communities with few permanent residents. An outdoor education centre, run by the Otago Youth Adventure Trust, is located at Tautuku, almost exactly halfway between Owaka and Waikawa. The area's population has declined to its current level from around 2,700 in 1926.
At that time, the settlement of Tahakopa, which now has a population of under 100, rivalled Owaka in size, with a population of 461 compared with Owaka's 557. Only in the last twenty years has this decline halted, with today's population figures being very similar to those of 1986. Before his death in 2008, the poet Hone Tuwhare had become the Catlins area's best-known inhabitant. Born in Northland, Tuwhare lived at Kaka Point for many years, and many of his poems refer to the Catlins. The area's population has predominantly European ancestry, with 94.2% of Owaka's population belonging to the European ethnic group according to the 2001 Census, compared with 93.7% for the Otago region and 80.1% for New Zealand as a whole. The median income in the same census was considerably lower than for most of the country, although the unemployment rate was very low (3.2%, compared with 7.5% nationwide). The early European economy of the Catlins during the 1830s and 1840s centred on whaling and sealing. The exploitation of the forests for timber started in the 1860s with the rapid growth of the city of Dunedin as a result of the gold rush of 1861–62. In the early 1870s more timber cargo was loaded at Owaka than at any other New Zealand port. Forestry and sawmilling declined in the late 1880s once the easily accessible timber had been removed. The extension of the railway beyond Owaka breathed new life into these industries, however, with activity peaking during the 1920s. The land cleared of trees largely became pasture. From the 1880s, the clearing of land for dairy farming increased, especially in the areas around Tahakopa and the Owaka River valley. There is still considerable sheep and dairy farming on the cleared hills on the periphery of the region, and this accounts for much of the Catlins' income. A rural polytechnic specialising in agricultural science (Telford Polytechnic) is located south of Balclutha, close to the northeastern edge of the Catlins.
Fishing and tourism also now account for much of the area's economy. The rugged natural scenery, sense of isolation, and natural attractions such as Cathedral Caves make the Catlins a popular destination for weekend trips by people from Dunedin and Invercargill, the two nearest cities. A large number of cribs (holiday cottages) are found at places such as Jack's Bay and Pounawea. Ecotourism is becoming increasingly important to the area's economy, with many of the visitors coming from overseas. Tourism resources grew from three motels and four camping grounds in 1990 to eight motels, four camping grounds and 12 backpackers' hostels a decade later, along with at least ten regular guided tour operations. Tourism added an estimated $2.4 million to the region's economy in 2003. The Southern Scenic Route links Fiordland and Dunedin via the Catlins. Here it runs northeast to southwest as an alternative road to State Highway 1, which skirts the Catlins to the northwest. This section of the Southern Scenic Route (formerly designated State Highway 92, but no longer listed as a state highway) winds through most of the small settlements in the area and was only completely sealed during the late 1990s; a stretch of about 15 km (10 mi) southwest of Tautuku was surfaced with gravel before that time. A coastal route also parallels the inland highway between Waikawa and Fortrose, but only about two thirds of this road is sealed. The remaining small roads in the district, all of which link with the former State Highway, have gravel surfaces. These roads mainly link the main route with small coastal settlements, although gravel roads also extend along the valleys of the Owaka and Tahakopa Rivers, linking the main Catlins route with the small towns of Clinton and Wyndham respectively. The gravelled Waikawa Valley Road crosses the hills to join the Tahakopa–Wyndham route.
A railway line, the Catlins River Branch, linked the area with the South Island Main Trunk Line from the late 19th century. Construction of this line began in 1879, but it did not reach Owaka until 1896. Construction was slow due to the difficult terrain, and the line's final terminus at Tahakopa was not completed until 1915. The economic viability of the line declined with the sawmills it was built to serve, and the line was eventually closed in 1971. Parts of the line's route are now accessible as walkways, among them a 250 m (820 ft) tunnel ("Tunnel Hill") between Owaka and Glenomaru. Several of the area's coastal settlements have facilities for small boats, but generally only fishing and holiday craft use them; there is no regular passenger or freight boat service to the Catlins. The Catlins area lies on the boundary of the administrative areas of the Clutha District and the Southland District. Most of the Catlins is located in the Clutha District, based in Balclutha, and one of the council's fourteen representatives is elected directly from a Catlins Ward which is roughly coterminous with this area. The Clutha District is itself part of the Otago Region, controlled administratively by the Otago Regional Council (ORC) in Dunedin, 80 km (50 mi) to the northeast of Balclutha. The Molyneux Constituency of the ORC, which covers roughly the same area as the Clutha District, elects two councillors to the 12-member Regional Council. Approximately the westernmost third of the Catlins lies in the Southland District, based in Invercargill, 50 km (30 mi) to the west of Fortrose. One of that council's 14 representatives is elected from the Toetoes Ward, which contains this part of the Catlins along with an area around Wyndham, extending along Toetoes Bay and across the Awarua Plain.
The Southland District is itself part of the Southland Region, controlled administratively by the Southland Regional Council (SRC, also known as Environment Southland), which is also based in Invercargill. The Southern Constituency of the SRC, which covers the entire Toetoes Ward and extends across the Awarua Plain almost as far as Bluff in the west and Mataura in the north, elects one councillor to the 12-member Regional Council. The Catlins forms part of the Clutha-Southland electorate in New Zealand's general elections. The electorate is currently represented in the New Zealand Parliament by former Leader of the Opposition Bill English (National).
Lev Davidovich Landau (Russian: Лев Давидович Ландау; IPA: [lʲɛv dɐˈvidəvʲitɕ lɐnˈda.u]; 22 January [O.S. 9 January] 1908 – 1 April 1968) was a Soviet physicist who made fundamental contributions to many areas of theoretical physics. His accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquids, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S-matrix singularities. He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at temperatures below 2.17 K (−270.98 °C).

Life

Landau was born on 22 January 1908 to Jewish parents in Baku, Azerbaijan, in what was then the Russian Empire. Landau's father was an engineer with the local oil industry and his mother was a doctor. He learned to differentiate at age 12 and to integrate at age 13. Landau graduated from gymnasium in 1920 at age 13. His parents considered him too young to attend university, so for a year he attended the Baku Economical Technical School (техникум). In 1922, at age 14, he matriculated at Baku State University, studying in two departments simultaneously: the Departments of Physics and Mathematics, and the Department of Chemistry. He subsequently ceased studying chemistry, but remained interested in the field throughout his life.

Leningrad and Europe

In 1924, he moved to the main centre of Soviet physics at the time: the Physics Department of Leningrad State University.
In Leningrad, he first made the acquaintance of theoretical physics and dedicated himself fully to its study, graduating in 1927. Landau subsequently enrolled for post-graduate studies at the Leningrad Physico-Technical Institute, where he eventually received a doctorate in Physical and Mathematical Sciences in 1934. Landau got his first chance to travel abroad during the period 1929–1931, on a travelling fellowship from the Soviet government's People's Commissariat for Education, supplemented by a Rockefeller Foundation fellowship. By that time he was fluent in German and French and could communicate in English. He later improved his English and learned Danish. After brief stays in Göttingen and Leipzig, he went to Copenhagen on 8 April 1930 to work at Niels Bohr's Institute for Theoretical Physics. He stayed there until 3 May of the same year. After the visit, Landau always considered himself a pupil of Niels Bohr, and Landau's approach to physics was greatly influenced by Bohr. After his stay in Copenhagen, he visited Cambridge (mid-1930), where he worked with P. A. M. Dirac, Copenhagen again (20 September to 22 November 1930), and Zurich (December 1930 to January 1931), where he worked with Wolfgang Pauli. From Zurich Landau went back to Copenhagen for the third time, staying from 25 February until 19 March 1931 before returning to Leningrad the same year.

National Scientific Center Kharkov Institute of Physics and Technology, Kharkov

Between 1932 and 1937 he headed the Department of Theoretical Physics at the National Scientific Center Kharkov Institute of Physics and Technology and lectured at the University of Kharkov and the Kharkov Polytechnical Institute. Apart from his theoretical accomplishments, Landau was the principal founder of a great tradition of theoretical physics in Kharkov, Soviet Union, sometimes referred to as the "Landau school".
In Kharkov, he and his friend and former student Evgeny Lifshitz began writing the Course of Theoretical Physics, ten volumes that together span the whole of the subject and are still widely used as graduate-level physics texts. During the Great Purge, Landau was investigated within the UPTI Affair in Kharkov, but he managed to leave for Moscow to take up a new post. Landau developed a famous comprehensive exam called the "Theoretical Minimum", which students were expected to pass before admission to the school. The exam covered all aspects of theoretical physics, and between 1934 and 1961 only 43 candidates passed, but those who did later became quite notable theoretical physicists.

Institute for Physical Problems, Moscow

Landau was the head of the Theoretical Division at the Institute for Physical Problems from 1937 until 1962. Landau was arrested on 27 April 1938 because he had compared the Stalinist dictatorship with that of Hitler, and was held in the NKVD's Lubyanka prison until his release on 29 April 1939, after the head of the institute, Pyotr Kapitsa, an experimental low-temperature physicist, wrote a letter to Joseph Stalin personally vouching for Landau's behaviour and threatening to quit the institute if Landau were not released. After his release, Landau discovered how to explain the superfluidity Kapitsa had observed, using sound waves, or phonons, and a new excitation called the roton. Landau led a team of mathematicians supporting Soviet atomic and hydrogen bomb development. He calculated the dynamics of the first Soviet thermonuclear bomb, including predicting the yield. For this work he received the Stalin Prize in 1949 and 1953, and was awarded the title Hero of Socialist Labour in 1954.
Personal life and views

In 1937 Landau married Kora T. Drobanzeva from Kharkov; their son Igor was born in 1946. Landau believed in "free love" rather than monogamy, and encouraged his wife and his students to practise it; his wife was not enthusiastic. During his life, Landau was admitted involuntarily six times to the Kashchenko psychiatric hospital. On 7 January 1962, Landau's car collided with an oncoming truck. He was severely injured and spent two months in a coma. Although Landau recovered in many ways, his scientific creativity was destroyed, and he never returned fully to scientific work. His injuries prevented him from accepting the 1962 Nobel Prize in Physics in person. Throughout his life Landau was known for his sharp humour, illustrated by the following dialogue with a psychiatrist (P), who tried to test for possible brain damage while Landau (L) was recovering from the car crash:
- P: "Please draw me a circle"
- L draws a cross
- P: "Hm, now draw me a cross"
- L draws a circle
- P: "Landau, why don't you do what I ask?"
- L: "If I did, you might come to think I've become mentally retarded".
Legacy

In 1965, former students and co-workers of Landau founded the Landau Institute for Theoretical Physics, located in the town of Chernogolovka near Moscow and led for the following three decades by Isaak Markovich Khalatnikov. In June 1965, Lev Landau and Yevsei Liberman published a letter in the New York Times, stating that as Soviet Jews they opposed U.S. intervention on behalf of the Student Struggle for Soviet Jewry. Two celestial objects are named in his honour: the minor planet 2142 Landau and the lunar crater Landau.

Landau's List

Landau kept a list of names of physicists which he ranked on a logarithmic scale of productivity ranging from 0 to 5. The highest ranking, 0, was assigned to Isaac Newton. Albert Einstein was ranked 0.5. A rank of 1 was awarded to the founding fathers of quantum mechanics: Niels Bohr, Werner Heisenberg, Paul Dirac, Erwin Schrödinger, and others. Landau ranked himself as a 2.5 but later promoted himself to a 2. David Mermin, writing about Landau, referred to the scale and ranked himself in the fourth division, in the article "My Life with Landau: Homage of a 4.5 to a 2".

In popular culture

- The Russian television film My Husband – the Genius (an unofficial translation of the Russian title Мой муж – гений), released in 2008, tells the biography of Landau (played by Daniil Spivakovsky), mostly focusing on his private life. It was generally panned by critics. People who had personally met Landau, including the physicist Vitaly Ginzburg, said that the film was not only terrible but also false in historical facts.
- Another film about Landau, Dau, is directed by Ilya Khrzhanovsky, with the non-professional actor Teodor Currentzis (an orchestra conductor) as Landau.

Works

Landau and Lifshitz, Course of Theoretical Physics:
- L.D. Landau; E.M. Lifshitz (1976). Mechanics. Vol. 1 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-2896-9.
- L.D. Landau; E.M. Lifshitz (1975). The Classical Theory of Fields. Vol. 2 (4th ed.). Butterworth–Heinemann. ISBN 978-0-7506-2768-9.
- L.D. Landau; E.M. Lifshitz (1977).
Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1; 2nd ed. (1965) at archive.org.
- V.B. Berestetskii; E.M. Lifshitz; L.P. Pitaevskii (1982). Quantum Electrodynamics. Vol. 4 (2nd ed.). Butterworth–Heinemann. ISBN 978-0-7506-3371-0.
- L.D. Landau; E.M. Lifshitz (1980). Statistical Physics, Part 1. Vol. 5 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-3372-7.
- L.D. Landau; E.M. Lifshitz (1987). Fluid Mechanics. Vol. 6 (2nd ed.). Butterworth–Heinemann. ISBN 978-0-08-033933-7.
- L.D. Landau; E.M. Lifshitz (1986). Theory of Elasticity. Vol. 7 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-2633-0.
- L.D. Landau; E.M. Lifshitz; L.P. Pitaevskii (1984). Electrodynamics of Continuous Media. Vol. 8 (1st ed.). Butterworth–Heinemann. ISBN 978-0-7506-2634-7.
- L.P. Pitaevskii; E.M. Lifshitz (1980). Statistical Physics, Part 2. Vol. 9 (1st ed.). Butterworth–Heinemann. ISBN 978-0-7506-2636-1.
- L.P. Pitaevskii; E.M. Lifshitz (1981). Physical Kinetics. Vol. 10 (1st ed.). Pergamon Press. ISBN 978-0-7506-2635-4.
- L.D. Landau; A.J. Akhiezer; E.M. Lifshitz (1967). General Physics, Mechanics and Molecular Physics. Pergamon Press. ISBN 978-0-08-009106-8.
- L.D. Landau; A.I. Kitaigorodsky (1978). Physics for Everyone. Mir Publishers, Moscow.
- L.D. Landau; Ya. Smorodinsky (2011). Lectures on Nuclear Theory. Dover Publications.
A complete list of Landau's works appeared in 1998 in the Russian journal Physics-Uspekhi.

See also

- Landau–Hopf theory of turbulence
- Landau–Lifshitz–Gilbert equation
- Landau–Lifshitz model
- Landau (crater)
- Landau theory of second-order phase transitions
- Ginzburg–Landau theory of superconductivity
- Landau quantization, Landau levels
- Landau damping
- List of Jewish Nobel laureates
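Landau's List, described above, is a logarithmic scale: on the common reading, each unit step corresponds to a factor of ten in productivity, so a rank-1 physicist is taken to be ten times as productive as a rank-2. A minimal sketch of that arithmetic follows; the factor-of-ten interpretation and all names in the code are assumptions for illustration, not from this article:

```python
# Landau's productivity scale read as base-10 logarithmic:
# a physicist of rank a is 10**(b - a) times as productive as one
# of rank b (lower rank = more productive). The ranks below appear
# in the article; the multiplicative reading is the usual
# interpretation of "logarithmic scale", assumed here.

RANKS = {
    "Newton": 0.0,
    "Einstein": 0.5,
    "Bohr": 1.0,        # founding fathers of quantum mechanics
    "Landau": 2.0,      # his later self-promotion from 2.5
    "Mermin": 4.5,      # self-rank in "Homage of a 4.5 to a 2"
}

def productivity_ratio(rank_a: float, rank_b: float) -> float:
    """How many times more productive rank_a is than rank_b."""
    return 10.0 ** (rank_b - rank_a)

print(productivity_ratio(RANKS["Bohr"], RANKS["Landau"]))      # 10.0
print(productivity_ratio(RANKS["Newton"], RANKS["Einstein"]))  # about 3.16
```

On this reading, Landau's self-promotion from 2.5 to 2 claimed roughly a threefold (10^0.5) increase in his own productivity.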
This article investigates the relationship between Cairns, a country town in Queensland, and Yarrabah, the nearby Aboriginal mission, from 1891 to 1911. This association is analysed through a variety of contemporary sources, charting its growth, development and gradual improvement in race relations. Cairns, in Far North Queensland, was first settled in 1876 as a port for the Hodgkinson Goldfield, a newly opened field west of Cairns. Despite the best efforts of the Aboriginal inhabitants of the area, the fledgling settlement survived and prospered, having 2 460 residents by 1891. The impact on the Aboriginal inhabitants was catastrophic. By 1891, just 15 years after the founding of the town, they had been dispossessed of their lands and forced to subsist in fringe camps under appalling conditions. The Cairns Post of 20 January 1892 describes conditions at one such camp. It was located on the Hop Wah road (now Mulgrave road), less than one kilometre from the Cairns Post Office, and inhabited by 100 men, women and children. Tobacco and opium usage were rife, the dogs were disease-ridden, and an influenza epidemic was ravaging the community. The same article mentions that a group of white men had recently set fire to several gunyahs in the camp. Reverend John Gribble of the Anglican Church arrived in the district in 1891 looking for land on which to start a mission, leading to the eventual formation of the Yarrabah Mission. In his report to the Queensland Colonial Secretary he stated that the "Barron and Kuranda Blacks were succumbing to white exploitation and that they were no longer regarded as dangerous although the settlers take every precaution". Gribble was the driving force behind the establishment of Yarrabah, and it is probable that without his enthusiasm it would never have been established. In 1891 he gained approval from the Diocesan Council of North Queensland to establish the mission.
He returned to North Queensland in 1892, landing at False Bay on 17 June, the site of the present community, and taking formal possession of the 51 200 acres. The mission was originally named the Bellenden Ker Mission, but was later changed to Yarrabah. The first few months were very difficult and were marked by poverty and ill-health for Gribble. He died in 1893 and his son, Ernest, who had arrived in October 1892, took over the running of the mission and remained until 1909. From the outset financial difficulties plagued the mission. At the end of 1893 the North Queensland Diocese ceased support and Gribble was forced to turn to the Australian Board of Missions for meagre financial assistance. After 1896 the government provided a small subsidy, and from 1908 the Diocese again contributed to the upkeep of the mission. Initially Aborigines ignored the work of John Gribble and his three helpers and were slow to come to the mission. On 12 December 1892, 30 people led by Menmuny, an Aboriginal leader, came to Yarrabah seeking a camp site. Gradually more and more people came into the mission and by 1895 there were 112 people living there. Yarrabah came to be regarded as a model mission by the government and church authorities, and rapidly became a home for displaced Aborigines. After the introduction of the Aboriginals Protection and Restriction of the Sale of Opium Act in 1897, the population grew rapidly as Aboriginal people were rounded up and sent to Yarrabah against their will. By 1910 only 60 of the 300 residents were local people, the remainder having been committed by the State. After what had happened to the Aboriginal inhabitants of the district since 1876, and after the strenuous calls for reserves on the grounds that "the only chance of safety for the nigger is to place the greatest possible number of miles between himself and civilization", one would have thought that the establishment of a reserve at Yarrabah would have been greeted with enthusiasm. Such was not the case, however.
The following discussion of the town's reactions to the mission and its founder, Rev. John B. Gribble, is based on the Cairns Post and the Cairns Morning Post for this period, as well as the opposing view from the Cairns Argus. The news in 1891 that a mission was to be established in the Cape Grafton region south of Cairns was greeted with alarm by the majority of Cairns residents. Criticism was directed more at the church itself and the mission's founder, Gribble, than at the setting up of reserves for Aborigines per se. Establishing a mission to rescue Aborigines was seen to reflect poorly on the treatment of Aborigines in the district, an accusation the local community rejected with absolute and utter contempt. The attacks on Gribble were vicious and personal in nature, highlighting problems faced by Gribble in his previous missionary work in Western Australia. It was felt that he and his fellow missionaries were naive and idealistic in their knowledge and understanding of the local Aboriginal population. The Cairns Argus presented a different view. It attacked the Cairns Divisional Board for its opposition to the mission, noting: The Cairns Divisional Board has deliberately ranged itself on the side of the enemy. It has appealed to the Minister of Lands to curtail the area granted in this district as a reserve for Aborigines, and it has formally expressed its aversion to the establishment of a local mission station. The paper went on to note that there was more than enough land for everyone and that the Board was being churlish and unreasonable. The paper strongly supported the establishment of the mission on the grounds that: We owe the blacks more than contemptuous annual alms of blankets can repay. In New Zealand, where the natives are more warlike, the government buys its land. Here it steals it. The transaction is defensible, no doubt, by various comfortable theories of the survival of the fittest and the divine right of English ascendancy.
But these theories would taste very differently if applied in our case by more powerful aliens. Before establishing the mission Gribble made a visit, at his own expense, to the district to investigate possible sites. The comments in the local press set the tone for attitudes towards the proposed mission. The Cairns Post ran a series of six lengthy articles under the title Mission to the Blacks, from 30 June 1891 to 28 October 1891, authored by one 'G.T.B.' Some quotes from the first article are instructive as to the stance taken towards Gribble and his missionaries: "Nobody guesses that the preacher as a rule knows about as much of his ebony brother as he does about pre-historic man"; "Comet-like visitors to the North Queensland Blacks"; "Belongs to that dear old sainted and richly subsidised institution, called the Church of England"; "King Billy of the Inlet, being probably the possessor of more brains than his teachers, will likely enquire 'how many religions white fellow got altogether'" and, most damningly, the first article concludes: While ministers of religion are sent away to lotus-eating regions to convert the blacks ... just as if centuries of instinct can be wiped away in an instant by the magic of a creed that a savage has no capacity for understanding. As well (try) endeavour to teach the sacred ibis to use a rifle, or an alligator to play ... In later articles this writer supported the concept of reserves, as long as they were government run in a manner that ensured there was no contact between the races. He saw these reserves as "something [that] can be attempted to make their passage to the silent sea as pleasant as possible", for "even the best means taken to improve the nigger can only result in ultimately improving him off the face of God's earth". In the formative years of the mission these attacks took on a more self-righteous tone as Gribble struggled to attract converts and make the mission financially viable.
It was only after his death in 1893, when the mission was taken over by his son Ernest, that the tenor of the newspaper reports began to slowly improve. Another strong reason for the opposition to the mission can be gleaned from a report in the Cairns Argus, before the mission had commenced, where it was said that selectors "naturally do not want to see their cheap labour busily engaged in singing hymns and learning collects". There was consternation when it was announced that 80 square miles of land in the Cairns district were to be reserved for a mission. The Cairns Divisional Board wired a protest to the Minister for Lands and the Cairns Post fulminated against it, asserting that there was "confidence at present between the selectors, bushworkers and others and the blacks in the district" and that a reserve was "not wanted in any shape or form". The editorial believed that one of the strongest arguments against the establishment of a reserve was that: In this district there are several different tribes, who, if brought together, would fight like Kilkenny cats, only more so. The government did appoint the Crown Land Ranger of Herberton to report on the matter, but the Cairns Post was dismissive of a government official reporting on a matter sanctioned by his own department. There were veiled threats that separation for North Queensland would become inevitable if the concerns of the Cairns Divisional Board were not heeded, but nothing came of this. The Cairns Argus was far more conciliatory, even allowing Gribble space to explain how the mission would not in any way be inimical to the interests of the settlers. He stressed that he would not endeavour to influence any Aborigines working for settlers to come to the mission. The paper pointed out that it was not Gribble's mission at all, but that he was merely an agent for the Australian Board of Missions. Further vicious attacks followed in the Cairns Post of November 1892 and January 1893.
The paper ceased publication shortly after, recommencing as the Cairns Morning Post in June 1895. The tone of the reporting on Yarrabah was now markedly different. There were several reasons for this, including the change of owner, the death of Gribble and his replacement as mission manager by his son Ernest. The mission was no longer seen as a threat to the settlers. It was not depriving them of Aboriginal labour and was seen to be fulfilling a need in providing shelter, clothing and food for those who would otherwise congregate in fringe camps around Cairns. From this time onwards the only criticism of the mission occurred when the residents of Cairns felt that it was failing in its duty to Aborigines or was not doing the right thing by them when it came to feeding and clothing them. This was best illustrated when mission residents absconded to Cairns, mainly because of inadequate rations and incessant agricultural toil. On such occasions the press was very supportive of the Aborigines concerned and criticised the authorities for not adequately supporting the good work of the mission. It is instructive to look more closely at the initial opposition from the Cairns Post to the setting up of the mission by Gribble. The paper was owned by Frederick Thomas Wimble, a wealthy, well-connected individual with substantial business interests. He arrived in Cairns at the end of 1882 to exploit foreseen commercial opportunities in the nascent sugar industry. For this dream to be realised it was essential that Kanaka labour be employed in the canefields. Wimble was actively involved in campaigning for its continued use. He was a large scale land speculator, buying up many properties in the Cairns district. He successfully campaigned for the building of a railway line from Cairns to Myola. In 1883 he founded the Cairns Post newspaper with the first issue appearing on May 10. His desire for progress and development for the Cairns district led him into politics. 
He campaigned for the Liberal Party on a platform advocating railway construction, mining, eventual separation and central mills, or, as Jones put it, "Something for everyone". He was elected to parliament for the seat of Mulgrave and held the seat from 5 May 1888 to 29 April 1893, and during this period tirelessly campaigned, through his newspaper, for progress and development. It was Wimble who in late 1888 first proposed the idea of an Aboriginal reserve in the Cairns district. He called for the proclamation of a reserve of 200-300 acres in the Barron Valley where Aborigines could be collected, could preserve their traditional bush life, could be helped by being taught animal husbandry and cultivation of the land, and could, if it were so wished, be made available for hire by the settlers. He was appalled by the later suggestion of a reserve in the Yarrabah district comprising not 200-300 acres, but 51 200, whose residents would not be available for employment purposes. The Yarrabah land had extensive timber reserves that would be locked up if the area was given over to a mission. Land that was considered potentially valuable for the running of cattle would not be available, thus inhibiting progress. Wimble, through his paper, led the campaign to prevent this happening. The Cairns Argus, the rival newspaper, supported the opposition National Party and each paper habitually attacked the other. Although begun in 1890, the Cairns Argus was a continuation of an earlier paper, the Cairns Chronicle, which first appeared in January 1885. The great depression of 1893, known as "the bank smash", forced the Cairns Post to close. Wimble left Cairns for Melbourne and his paper was acquired by the proprietors of the Cairns Argus, who absorbed it into their paper. In 1895 the Cairns Morning Post was founded by E. Draper and Co. The Cairns Post's opposition to the mission thus reflected that of its owner.
The Cairns Argus presented a different, more sympathetic view, while the Cairns Morning Post was different again, not being connected in any way with the other two papers. This explains why the reporting was so very different after 1894, and it was probably a more accurate representation of the views of the citizens of Cairns than that which appeared in the Cairns Post for the period 1891-1893. In 1897 Ernest Gribble was congratulated for his efforts to make the mission self-supporting by supplying paw paws to the Cairns Preserving Works. The press almost appeared proud of the mission, noting that: The Yarrabah Mission is an institution which has hitherto kept modestly in the background, but it now bids fair to prove an example to all other Aboriginal missions in Australia, and the head of that mission is to be congratulated upon the prospects which are now opened up, and which are entirely due to his own exertions. Tributes also flowed from the Commissioner of Police, W. Parry-Okeden, after a visit to Yarrabah in 1898, as well as from the Bishop of Carpentaria, who described the enterprise as "one of the most remarkable instances of successful mission work in modern times". The press was not above using Gribble for its own ends. In an attack on the Aborigines Protection Bill of 1897 the paper complained about how Gribble was forced to separate children from their parents and remove them to Yarrabah, as was required under the Act. However its high moral tone and passionate entreaty on humanitarian grounds were undermined by its noting that the Act "is not only a serious menace to human liberty, but an unwarrantable interference with commerce". By 1904 reporting was becoming increasingly favourable.
Commenting on a performance by the Yarrabah Brass Band in Cairns, the Cairns Morning Post was moved to comment that: To those who have been accustomed to regard the Australian Aboriginal as a wretched specimen of humanity so far as intellect and fixity of purpose is concerned, no greater surprise could have been experienced than to have encountered the Yarrabah Brass band ... playing in perfect time and tune. In the same year the public were asked to donate goods for the Yarrabah residents for Christmas as: The piccaninnies especially are looking forward to the festive season with great glee, and it would be a pity to disappoint the little chaps. The public was also informed that the printers at the Yarrabah printing press produced work that reflected "the utmost credit upon the young printers". It could be argued that the tone of reporting adopted by the Cairns Morning Post was patronising in the extreme. The press seemed to be continually surprised when Aborigines performed tasks that equated with the white man's view of civilisation, such as playing music in tune and in time or doing an honest day's work at the printing press. That these activities posed no threat to the residents of Cairns was probably further encouragement for extolling these virtues. If the Yarrabah residents had called for "land rights" in 1904 they most certainly would have received a very different reception from the Cairns Morning Post. A further factor in the improved race relations and reporting on the mission was the resignation of Walter Edmund Roth as Chief Protector. He was appointed as the first Northern Protector of Aboriginals in 1898. Based at Cooktown, his main brief was to prevent the exploitation of Aborigines, particularly in employment and marriage, including the regulation of indigenous employment in the beche-de-mer industry.
He was possessed of a strong personality and administrative drive, which made him an effective Protector, but this was to lead to his undoing, as his initiatives brought him into conflict with politicians, settlers and the press, while his humane treatment of and respect for Aborigines was viewed in a hostile light by local business interests. In 1904 he was appointed Royal Commissioner to look into the conditions of Aboriginal people in Western Australia and in the same year was made Chief Protector of Aboriginals for all Queensland. During his absence in Western Australia a public meeting was held in Cooktown to try to prevent his return to the state. Accusations against him included acting immorally, taking indecent photographs, and selling ethnological specimens to the Australian Museum in Sydney. But the real reason for the protest meeting and subsequent petitions was Roth's determination to protect Aborigines from unscrupulous employers, a determination that saw him blamed for the collapse of the beche-de-mer industry. A parliamentary investigation was held into the allegations and Roth was found innocent of all charges. Despite his innocence being proven, political attacks continued unabated, and in May 1906 Roth resigned (as from 10 August) on the grounds of ill health, leaving Australia four months later. The Cairns Morning Post had been one of his most trenchant and longstanding critics and his resignation was "hailed through the north, with the utmost satisfaction". Curiously, his departure was a major factor in improving attitudes to Aborigines in general and the mission in particular, as residents no longer had Roth to denounce their attitudes or stymie their attempted exploitation of Aboriginal workers. With the appointment of Richard Howard as his successor the tenor of the public debate over "native policy" cooled considerably.
Notwithstanding the improvement in reporting by the Cairns Morning Post, Gribble accused the paper of being antagonistic to the Yarrabah Mission. The paper denied this, praising the work of the missionaries, but noted that: What we have opposed is the forced detention of Aboriginals and half-castes where suitable food is not provided by the Government. The mission is doing all that lies in its power. It has done wonders, but how far will a paltry grant of £400-500 go towards properly feeding, educating, housing and clothing over 300 people? Our contention has always been and is that if the Government does so detain these Aboriginals, then it has a right to at least give them the same considerations as it extends to its criminal prisoners. This comment encapsulates the paper's (and presumably its readers') attitude towards the mission and its treatment of Aborigines. The distinction between the missionaries trying to "civilise" Aborigines and the failure of the government to provide adequate funding for this is critical, as will soon be seen when the events of 1910 are discussed. In a few short years the mood towards Aborigines and the mission had completely changed. From being fearful of Aborigines and terrified of their being protected by naive missionaries, the community now felt pity for their indigenous compatriots and anger that their government was not providing sufficient funding for the mission to care for them in a manner the community felt was appropriate. In some ways the attitudes of the citizenry of Cairns in the early twentieth century were more sympathetic than would be the case today. No doubt the massacres, deprivations and conquest of the recent past were still clearly remembered. It would take almost another century before one could hide behind the Black Armband view of history and wash one's hands of these matters.
Important, too, in shaping our forebears' concerns was the widely held view that the Aborigines were a doomed race, who would die out before the white man's superior civilization. Those who posed no threat to us and would shortly die out could be pitied without fear of the consequences. That the attitudes of many settlers towards the mission had changed dramatically is best illustrated by a letter to the Cairns Morning Post by "An Old Queenslander" in 1907. This letter is so instructive of these changing attitudes that it is worth quoting in full: Cairns is full to overflowing with visitors all on pleasure bent. There are some 300 Aboriginals and half-castes at Yarrabah mission and in reading their report I notice they want clothing and many necessaries for the successful carrying out of their noble scheme. We took their country from them, their planting and fishing grounds, and in return gave them what? All our vices and little else. Surely some of our visitors could afford a little to help to clothe and feed this remnant of the original holders of the North until they are self supporting, as the Rev. Gribble and his staff are trying to make this Yarrabah. He has done what no man in Australia has ever done before - proved there is more in the Aboriginal than we old timers dreamt of. Any charitably disposed Christian of any denomination might help this noble work by sending to Rev. Gribble their mite. The lack of support from the government to allow the mission to undertake its work came to a head in 1910. A report had been written by the Cairns Police Magistrate, P. G. Grant, on his investigation into unsatisfactory conditions at Yarrabah, namely that there was a shortage of meat and food and that some of the Aboriginal girls living there appeared almost white.
He reported that the people appeared healthy, suggested that the mission should be devoted to children and the aged, and that there were a number of girls and young women: Who were for the most part of white blood and who should not be allowed to remain at the mission but would be better placed in domestic service. The Bishop of North Queensland retaliated by pointing out that the apparently white people were the result of sending half-caste girls into domestic service in Cairns! This report was taken up by the Home Secretary, Mr J. G. Appel, who proposed an inquiry into the mission. Carried out by Chief Protector Howard, the inquiry found that the mission residents complained bitterly of the lack of food and that there appeared a general desire to get away from the mission. He further found that there was no efficient supervision, woeful management, no real effort to provide medicine or produce food, that great carelessness was shown in allowing so many boats to be lost, and that the financial picture was not encouraging. He suggested that if Yarrabah was managed on practical lines it could be a viable venture. The Archbishop of Brisbane responded by pointing out that the mission had been built out of nothing and that its purpose was only to raise the moral and spiritual level of the Aborigines. In May 1911 the Home Secretary inspected Yarrabah and found it cleaned up somewhat since Grant's report, but with no cultivation done, for which he blamed the ignorance of the Superintendent rather than the bad seasons, which included a cyclone in February 1909 that demolished most of Yarrabah's coconut, banana, lemon and orange trees. Appel's report was seized upon as a starting point for an open airing of everything controversial in relation to the mission. The dispute was eventually resolved at a conference attended by the Home Secretary and a committee appointed by the Anglican Synod that included the Bishop of North Queensland and the Mayor of Cairns.
It was agreed at this meeting that the present superintendent of the mission be transferred as the first move to more practical management. The Cairns Post followed this matter in great detail, reproducing all the various reports and subsequent replies (including those of the Chief Protector of Aboriginals, R. B. Howard, Bishop Frodsham of the North Queensland Diocese, the Superintendent of the Yarrabah Mission and the Archbishop of Brisbane) that flowed over the next 12 months as the various parties traded allegations and counter-allegations. As the issue hotted up the Cairns Police Magistrate was forced to state that he was not "prejudiced against the mission". The Cairns Post reported the whole issue in a neutral and impartial manner, but as the parties continued trading accusations and counter-accusations even it became fed up, stating on 21 June 1911 that: This paper is just about full up of Yarrabah and the lengthy telegraphic reports of the tin-pot controversy in connection therewith. It is interesting that the Cairns Post did not take sides, as it was usually very quick to state its position on every other matter. Jones perceptively suggests that this was because Cairns people had cast aside their crusading role and were quite happy to have Yarrabah cope with the Aboriginal problem for them. It is notable how uninvolved the press and its readers were in this issue. While reporting the events in full, the Cairns Post offered little comment and very few letters to the editor on this matter were published. The town appeared almost apathetic in the row between the government and the church. Their Aboriginal problem had by now become an issue for someone else. This article has explored the relationship between Cairns and Yarrabah from 1891 to 1911.
It portrays the conflict between settler and Aboriginal on the frontier that was Cairns in the late nineteenth century, and how this was won by the settlers, leading to the establishment of a mission to tend to the vanquished, who were confidently expected soon to die out. This arrangement was vehemently opposed by Cairns folk, who feared it would be an unsavoury and lasting reflection on the actions that had caused this state of affairs in the first place, as well as locking up the economic potential of the land upon which the mission was situated. As the mission became established and Aboriginal violence against settlers less common, negative feelings towards the mission were ameliorated as it was realised that the mission was taking care of their Aboriginal problem for them. Feelings of guilt and remorse contributed to support for the mission, growing as the Aboriginal threat receded. As time went by support for the mission became more acceptable and less necessary to display openly, as Yarrabah became a normal fixture of the landscape. One can illustrate this attitude with a contemporary account. Richard Dyott, an Englishman visiting Australia and writing under the pen name Wandandian, made extended visits to Yarrabah in March 1908 and August 1910. On his return to Cairns and Kuranda from Yarrabah in 1910 he was moved to write on the different living conditions experienced by the Aboriginal residents. The next day we bade farewell to this happy spot (Yarrabah) and went back to Kuranda, passing on our way some of the wild blacks in their dirty camps. The contrast was, to say the least of it, impressive, and made us feel sure that anyone who is not a hardened bigot and opposed to all missionary efforts, could not help admitting that this mission was highly beneficial to its inhabitants.
One could not do otherwise than at once compare the dirt, squalor and filth of the camps with the cleanliness and brightness of mission life, and contrast the sullen faces of the former with the cheerful countenances of those living in the latter. One has only to put all feelings of bias aside for one moment and all the cant which defends the so-called liberty of the poor black, to see with clearness and certainty that he is far better cared for, far happier, cleaner and more intelligent under the light rule of a mission reserve than he is in his native state.

Notes

Queensland. Journals of the Legislative Council, vol 17, part 2, 1892, p. 317.
For a full account of what transpired in the way of race relations in the district between 1876 and 1891 see Jeremy Hodes, "Conflict and Dispossession on the Cairns Frontier to 1892", Journal of the Royal Historical Society of Queensland, vol 16, no 2, pp. 542-554.
Cairns Post, 20 January 1892, p. 2.
J. Gribble, Summary of the Report of Rev J. B. Gribble ..., 1891, p. 1.
L. Hume, Yarrabah Phoenix: Christianity and Social Change in an Australian Aboriginal Reserve. Brisbane, University of Queensland, PhD thesis, 1989, p. 79.
L. Hume, "Them Days: Life on an Aboriginal Reserve 1892-1960", Aboriginal History, vol 15, no 1, 1991, p. 5.
L. Hume, Yarrabah Phoenix, p. 81.
Cairns Argus, 17 June 1892, p. 2. Yarrabah is some 40 minutes by road from Cairns, but in the early days, before the road was built, the only access was by boat, a trip of 1-2 hours from Cairns.
The name was changed to Yarrabah when Ernest Gribble took over in 1893.
L. Hume, "Them Days", p. 5.
Ibid., p. 6.
They were Pearson, Willie Ambryn (an Australian South Sea Islander) and Pompo Katchewan (an Aboriginal youth). Hume, Yarrabah Phoenix, p. 81.
D. Rapkins, Major Research Topic on North Queensland History. Cairns, James Cook University, 1994, p. 9. See also Dorothy Jones, Trinity Phoenix. Cairns, Cairns Post, 1976, p. 316.
Menmuny was later to be known as King John Barlow.
L. Hume, "Them Days", p. 6.
Ibid., pp. 6-7.
L. Hume, Yarrabah Phoenix, pp. 88-90.
K. Evans, Missionary Effort Towards the Cape York Aborigines, 1886-1910: a Study of Culture Contact. Brisbane, University of Queensland, BA Hons thesis, 1969, p. 89.
Cairns Post, 20 January 1892, p. 2.
The Cairns Post commenced on 10 May 1883 and continued until 20 May 1893, when it was bought out by the Cairns Argus. It was resurrected as the Cairns Morning Post from 6 June 1895 with a new owner and changed its name back to the Cairns Post on 6 July 1909. The Cairns Argus commenced on 29 July 1890 and continued until 21 January 1898. There were several other papers during this period, such as the Cairns Daily Times (October 1899 - February 1900), but most were short lived and few copies have survived.
It is interesting to note that the Cairns Post led the vicious criticism while the Cairns Argus (which ceased publication in 1898) was content to merely record the facts. In this connection see the Cairns Argus, 10 May 1892, p. 2, where a strongly worded editorial called for the proposed establishment of the mission to be given a fair go, stating, "At all events, let them have a trial. We plead for fair play for the side of the Angels".
Cairns Post, 20 January 1892, p. 2.
Cairns Post, 17 October 1891, p. 2.
Cairns Argus, 19 February 1892.
Cairns Post, 16 March 1892, p. 2. The paper again cast doubts on the ability of the proposed mission to convert Aborigines, stating "True, the Revd. Gribble has one convert to his credit, the Minister for Lands, but he would find it easier to convert the whole Cabinet, the members of both the legislative Chambers, and all the editors in Queensland, than one Cairns blackfellow".
Cairns Post, 4 June 1892, p. 2 and 11 June 1892, p. 2.
Cairns Argus, 10 June 1892, p. 3.
Cairns Post, 16 November 1892 and 14 January 1893, p. 2.
See the Cairns Morning Post, 14 July 1903, p. 2, for an example of this.
A. Martin, "Ink in Veins of Cairns Pioneer", Passages of Time. Cairns, Cairns Post, 1995, p. 59.
Jones, pp. 231-232.
Ibid., p. 313.
J. Collinson, "More About Cairns - the Second Decade, 1886-1896. Press and Pulpit", Cummins & Campbell's Magazine, July 1940, pp. 59-60.
Wimble's opposition to the Church of England-run mission is ironic in that he himself was a member of this church!
Cairns Morning Post, 8 July 1897, p. 5.
Cairns Morning Post, 20 December 1898, p. 5.
Cairns Morning Post, 26 October 1900, p. 3.
Cairns Morning Post, 31 October 1902, p. 2.
Cairns Morning Post, 5 January 1904, p. 3.
Cairns Morning Post, 20 December 1904, p. 2.
Cairns Morning Post, 18 February 1905, p. 2.
B. Reynolds, "Roth, Walter Edmund (1861-1933)", Australian Dictionary of Biography, vol 11: 1891-1939, 1988, p. 463.
K. Khan, Catalogue of the Roth Collection of Aboriginal Artefacts from North Queensland, Volume 1. Sydney, Australian Museum, 1993, pp. 14-15.
Jones, p. 346.
Reynolds, p. 464.
Jones, pp. 345-347.
It is interesting to get a contemporary account from the other side. Mjoberg, writing in 1912, noted that Roth "found in the end that the atmosphere became unbearable and those in high positions too narrow-minded and prejudiced for him to be able to continue his work. He turned his back on the ungrateful country and, instead, gave his services in a similar field to other continents where he was met with sympathy and appreciated for his very worthwhile work" (Eric Mjoberg, Amongst Stone Age People in the Queensland Wilderness. Stockholm, Albert Bonnier, 1918, p. 136). Howard later went the same way, resigning over plans to set up a mission on Mornington Island (Ibid.).
Cairns Morning Post, 18 September 1905, p. 2.
Cairns Morning Post, 10 September 1907, p. 5.
Cairns Post, 4 May 1910, p. 5.
Jones, p. 350.
Jones, p. 350.
Ibid. This situation had developed after the resignation of Mission Superintendent Ernest Gribble in 1909.
Ibid., p. 351.
Ibid.
Cairns Morning Post, 16 February 1909, p. 3.
Jones, p. 352.
Cairns Post, 29 June 1910, p. 3.
Cairns Post, 21 June 1911, p. 4.
Jones, p. 353.
Wandandian, Travels in Australasia. Birmingham, Cornish Brothers, 1912, pp. 150-151.
Submitted by Christie Zunker, PhD, Trisha Karr, PhD, Roberta Trattner Sherman, PhD, FAED, Ron A. Thompson, PhD, FAED, Li Cao, MS, Ross D. Crosby, PhD and James E. Mitchell, MD. This study examined the relationship between clothing fit and perceived fitness level. Participants included 2,386 adults who completed an online survey after a running event. The survey included four questions related to photographs of athletic models wearing loose-fitting and tight-fitting clothing: (1) Which event do you think the model took part in? (2) What do you think is the main reason he/she took part in the event? (3) How well do you think this person performed? and (4) How confident are you that your running time beat this person’s time? Results showed participants were more likely to believe athletes wearing tight-fitting clothing ran further and faster than athletes wearing loose-fitting clothing, and were less confident in their abilities to run faster than athletes wearing tight-fitting clothing than those who wore loose-fitting clothing. These findings suggest clothing fit influences perception of athletic ability among runners. Athletes making upward comparisons may become increasingly dissatisfied with their appearance, be at risk of avoiding certain sports, spend less time in moderate to vigorous physical activity, and experience feelings of inferiority that negatively influence sport performance. Sociocultural comparisons and perceived pressure to be thin can foster body dissatisfaction (15); however, some individuals report a preference for athletic-ideal body shapes over a thin-ideal (13). Comparing oneself to a fit peer can affect body satisfaction and the amount of time one engages in physical activity.
For example, a study by Wasilenko and colleagues (2007) with female undergraduates found that women stopped exercising sooner and felt less satisfied with their bodies when they exercised near a woman they perceived as physically fit wearing shorts and a tight tank top as compared to exercising near an unfit woman wearing baggy pants and a baggy sweatshirt (23). Thus, social comparisons with peers may promote unhealthy behaviors or avoidance of certain activities. Additionally, individuals who experience weight-related stigmas may be less willing to participate in physical activity and avoid exercise due to low perceived competence and lack of motivation (16, 22). Individuals who adopt an external observational view, or a self-objectified perspective of their bodies, may invest a considerable amount of psychological, physical, and financial resources into their appearance (1). Objectification theory proposes that these individuals internalize the observers’ view of their bodies (i.e., self-objectification) and become preoccupied with how their body appears to others without regard to how their body actually feels (10). Interviews with elite athletes indicate that they view an athlete’s body “as an object to be managed” (17, p. 206). Self-objectifying thoughts and appearance concerns may be triggered in individuals with low self-esteem and exacerbated in certain environments (e.g., gyms with mirrors, women wearing revealing outfits; 18). For example, a study by Fredrickson and colleagues (1998) in which participants (70% Caucasian) were instructed to try on either a swimsuit or a sweater in a dressing room with a full-length mirror and then complete a mathematics test showed that women in the swimsuit condition performed worse on the test than women in the sweater condition. The authors postulated that bodily shame diminished their mathematical performance since their mental energy was focused on their appearance (11).
Another study by Hebl and colleagues (2004), using a similar protocol with men and women of Caucasian, African American, Hispanic, and Asian American descent, found that all participants had lower mathematics performance and appeared vulnerable to self-objectification during the swimsuit condition compared to the sweater condition (12). A study by Fredrickson and Harrison (2005) with 202 adolescent girls found that those with higher measures of self-objectification had poorer performance throwing a softball when asked to throw as hard as they could (9). These findings suggest that experiencing bodily shame may negatively influence one’s ability to engage in physical activities or other activities that require mental resources. Clothing appears to be an important, but often ignored, part of how women manage their physical appearance (21). Wearing a swimsuit or other tight, body contouring uniform for a particular sport may be necessary for performance, but there are often gender discrepancies, with women usually wearing much less clothing (19). Revealing sports uniforms may be perceived as stressors and exert pressure on some athletes that outweighs any functional or performance advantage. Indeed, some individuals report feeling uncomfortable wearing revealing attire and may choose not to participate in a particular sport due to required uniforms. Sports uniforms may contribute to unhealthy eating behaviors and eating disorders, especially among women. For example, female athletes often experience increased body image concerns, unhealthy body comparisons, and body dissatisfaction; however, satisfaction with uniform fit can improve body perceptions (6). In addition, female runners who report high identification with exercise and high value on having an athletic physique may be vulnerable to obligatory exercise (14). Performance of sport participants depends upon a number of factors, including their psychological state, which may be influenced by their athletic clothing or uniform.
Research by Feltman and Elliot (2011), Dreiskaemper and colleagues (2013), and Feather and colleagues (1997) suggests that the color and fit of an athlete’s uniform influences their psychological functioning. For example, during a simulated competition, participants reported feeling more dominant and threatening when wearing red as opposed to wearing blue (8). Participants also perceived their opponents as more dominant and threatening when the opponents were wearing red. Similarly, a study with male fighters taking part in an experimental combat situation found that those wearing a red jersey had significantly higher heart rates before, during, and after the fight compared to wearing a blue jersey (4). In addition, a study of female basketball players showed athletic clothing that provided a satisfactory fit on one’s body improved athletes’ body perceptions (6). Findings from the literature (Feather and colleagues, 1997; Feltman and Elliot, 2011) indicate that clothing choices influence our perceptions and behaviors, which may affect us in a number of ways. At the present time, no studies to our knowledge have examined this phenomenon among endurance athletes. Thus, the purpose of the current study was to explore the role of clothing fit among a group of runners. We hypothesized that individuals would perceive both male and female athletes wearing tight-fitting clothing to be more physically fit (i.e., ideal body type for their sport) than athletes wearing loose-fitting clothing. Participants included individuals aged 18 and older who took part in a running event at an annual marathon in the Midwestern United States. Participants were recruited through flyers, an advertisement as part of a packet distributed to runners, and through an email listserv managed by the race director. Institutional review board approval was received. Informed consent was obtained from all participants. Anyone who took part in the race was eligible to take the survey.
Participants included 2,386 adults who completed the online survey. Of the total sample, 588 completed the full marathon (24.6%), 1,101 completed the half marathon (46.1%), and 697 completed a shorter distance such as a 5K or 10K (29.2%). The mean age for participants was 37.2 years (SD = 10.8; range: 18-91), and the mean self-reported body mass index (BMI) was 24.4 (range: 15.3-47.8). Within the sample, 96.2% were Caucasian, 93.2% were employed, and 67.5% were married. As compensation for participation in the study, participants were entered into a drawing to win one of four gift cards valued at $50 to $200 for a local sporting goods store. The online survey was available for three weeks (i.e., from the day of the event until three weeks following the event). A total of 3,117 individuals logged into the survey during this time. A flowchart provides a detailed description of how the final study participant sample was determined (see Figure 1). The final sample included 2,386 participants (76.5% of those who originally expressed interest in the study), after removing those who originally logged onto the website, but had missing data or did not meet eligibility criteria (e.g., did not report gender, under 18). As part of an online survey, participants viewed four photographs of models wearing black athletic clothing. The photos were cropped to display the model from neck to ankle. The first photo (Model A) was of a woman wearing a loose-fitting, short-sleeved top and loose-fitting shorts. The second photo (Model B) was of the same woman wearing the same shirt, but in a smaller size and tighter-fitting shorts. Similarly, the third photo (Model C) was of a man wearing a loose-fitting outfit and the fourth photo (Model D) was the same man wearing a tighter outfit. 
A manipulation check to assess the validity of the photos as an assessment of perceived physical fitness level was performed by showing the four photos to ten individuals with expertise in physical fitness and eating disorders. Each individual independently viewed the photos and provided an open-ended response. As expected, each person who viewed the photos reported that Model A was perceived as less fit than Model B and Model C was perceived as less fit than Model D. All participants viewed and answered questions related to each photo. Both males and females evaluated photos across genders. The first and second authors developed four questions related to the photos: (1) Which event do you think she/he took part in? (there were 9 race options as answers to choose from: marathon, half marathon, 2-person relay, 4-person relay, 5k on Friday plus half marathon Saturday, 5k on Friday plus full marathon Saturday, 10k, 5k, and prefer not to answer); (2) What do you think is the main reason she/he took part in this event? (there were 5 answers to choose from: just for fun, to meet a personal goal, to qualify for another event, other reasons, and prefer not to answer); (3) How well do you think she/he performed? (there was a range of 5 answers: extremely well, finished in the top 25%; very well, finished in the top 50%; not so well, finished in the bottom 50%; poor, finished in the bottom 25%; and prefer not to answer); and (4) How confident are you that your running time beat this person’s time? (there was a rating scale of 6 choices: I feel certain that I ran faster, I am pretty certain that I ran faster, I think we ran about the same pace, I am pretty certain that I ran slower, I am certain I ran slower, and prefer not to answer). All analyses were conducted using the GENMOD procedure in SAS 9.2. Generalized linear models were built to compare the pairwise contrasts in perceptions of models wearing athletic clothing by gender.
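The study's models were fit with SAS PROC GENMOD, which is not reproduced here; as a rough sketch of what a single contrast in Table 1 measures, the following Python snippet computes an odds ratio and a 95% Wald confidence interval from a 2×2 table of responses. The counts are invented for illustration and are not taken from the study.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio with a 95% Wald confidence interval for a 2x2 table:
    a/b = tight-fit photo judged 'ran the full marathon' / judged otherwise,
    c/d = the same split for the loose-fit photo (all counts hypothetical)."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - 1.96 * se_log)
    upper = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lower, upper

# Invented counts: 400 of 1,000 viewers say the tight-fit model ran the
# full marathon, versus 170 of 1,000 for the loose-fit model.
or_, lower, upper = odds_ratio(400, 600, 170, 830)
```

An OR above 1 here means the tight-fit photo attracted the "full marathon" judgment more often; the study's generalized linear models additionally handled the pairwise contrasts separately by participant gender.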
The first research question asked was, “Which event do you think she/he took part in?” We hypothesized that more participants would report Model B (compared to Model A) and Model D (compared to Model C) ran the full marathon. The results show that male participants were 1.5 times more likely to believe that Model B ran the full marathon compared to Model A (OR = 1.465; p = .004). Female participants were 1.4 times more likely to believe that Model B ran the full marathon compared to Model A (OR = 1.409; p = .002). The differences for Model D and C, the male models, were more dramatic. Male participants were 2.8 times more likely to believe that Model D ran the full marathon compared to Model C (OR = 2.817; p < .0001). Among men, the results showed that 40% believed Model D and only 17% thought Model C ran the full marathon. Female participants were 3.2 times more likely to believe that Model D ran the full marathon compared to Model C (OR = 3.19; p < .0001). For women, the results showed that 46% believed Model D and only 16% thought Model C ran the full marathon. The second research question asked was, “What do you think is the main reason she/he took part in this event?” We hypothesized that more participants would report Model B and D participated in the event to qualify for another running event. Male participants were 2.7 times more likely to believe Model B was trying to qualify for another event compared to Model A (OR = 2.710; p = .001). Female participants were 4.0 times more likely to believe Model B was trying to qualify for another event compared to Model A (OR = 3.958; p < .0001). Similar to the previous research question, the differences for the male model were more dramatic. Male participants were 6.3 times more likely to believe Model D was trying to qualify for another event compared to Model C (OR = 6.346; p < .0001).
Female participants were 10.0 times more likely to believe Model D was trying to qualify for another event compared to Model C (OR = 9.972; p < .0001). See Table 1. The third research question asked was, “How well do you think she/he performed?” We hypothesized that more participants would report Model B and D finished in the top 25% of the runners. For males, the odds of Model B finishing in the top 25% were 4.8 times greater than Model A (OR = 4.791; p < .0001). For females, the odds of Model B finishing in the top 25% were 3.7 times greater than Model A (OR = 3.701; p < .0001). For males, the odds of Model D finishing in the top 25% were 5.3 times greater than Model C (OR = 5.338; p < .0001). For females, the odds of Model D finishing in the top 25% were 5.9 times greater than Model C (OR = 5.892; p < .0001). See Table 1. The fourth research question asked was, “How confident are you that your running time beat this person’s time?” For this question we were interested in how the participant compared him or herself to the same gender athlete (i.e., female participants compared themselves to Model B, male participants compared to Model D). We hypothesized that more women would report that they were less confident about their running time compared to Model B (i.e., believe that they ran slower than Model B). Indeed, female participants were 1.5 times less confident in beating the running time for Model B (OR = 0.687; p = .0008). We hypothesized that more men would report that they were less confident about their running time compared to Model D (i.e., believe that they ran slower than Model D). The results indicate that male participants were 2.6 times less confident in beating the running time for Model D (OR = 0.385; p < .0001). See Table 1.
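The odds ratios below 1 for the fourth question are reported above as reciprocals ("1.5 times less confident", "2.6 times less confident"). A two-line check of that arithmetic, using the OR values quoted in the text:

```python
# Odds ratios reported for the fourth research question (OR < 1 means
# lower odds of confidence in beating the tight-fit model's time).
or_women = 0.687  # women comparing themselves to Model B
or_men = 0.385    # men comparing themselves to Model D

times_less_women = round(1 / or_women, 1)  # 1.5
times_less_men = round(1 / or_men, 1)      # 2.6
```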
As hypothesized, we found both male and female participants believed that the models wearing the tighter-fitting clothing were more likely to have run the full marathon and were more likely to be trying to qualify for another event compared to the models wearing the loose-fitting clothing. Particularly interesting was the finding that female participants were 10 times more likely to think the male model in the tight clothing was trying to qualify for another event as compared to the male model in the looser clothing. Our results also indicate that male and female participants believed the models in the tighter-fitting clothing were likely to have run faster than they did. Additionally, the participants were less confident of their running time when asked to compare themselves to the model of the same gender wearing the tighter clothing. In general, athletes who wore tight-fitting clothing were perceived as more physically capable and competitively successful than those who wore loose-fitting clothing. The present findings support previous research involving social comparison theory in that participants were less confident in their running abilities, i.e., negatively influenced by viewing photos of fit peers (23). These results suggest that participants made upward comparisons (3) by comparing themselves with individuals who were viewed as faster runners (i.e., Models B and D), which, in turn, was associated with reduced confidence in their abilities to perform. Athletic identity, performance enhancement, and style preferences, such as fit, comfort, and aesthetics, are important factors to consider when determining sport clothing needs of consumers (5). For example, a female runner may be more likely to purchase a pair of shorts that offer adequate coverage and sweat-wicking properties than shorts with minimal coverage that lack quick-drying material. Consumer spending may also be influenced by how they identify with well-recognized athletes (2).
Furthermore, in line with self-objectification theory, an external perspective of body appearance may be influenced by a number of specific functions for clothing selection, such as clothing for comfort, camouflage purposes, and individuality (21). Findings from the present study add to this literature by demonstrating that clothing may also influence perceptions of athletic performance, including physical capability and competitiveness among runners. This study has several limitations that should receive consideration. This was a cross-sectional study with an inherent selection bias because the persons who decided to complete the survey may be different from those who chose not to participate. Therefore, these findings may not generalize to all runners who took part in this running event or other similar events. For example, the majority of participants who completed the current survey were Caucasian, but participants of other races may have different perceptions of athletic bodies and clothing fit (7). In spite of these limitations, the current study provides important information about the potential contribution of clothing fit to perceived fitness levels of endurance athletes. One notable strength of this study is the number of participants from a variety of fitness levels, including individuals aged 18 to 91 years with a wide range of experiences from the casual 5k run/walk to the more serious seasoned marathoner. The popularity of running events is increasing along with the number of persons entering these events each year, which suggests a growing need to continue research in this area. APPLICATIONS IN SPORT From a clinical perspective, we are concerned that tight-fitting attire will facilitate upward body comparisons. Such comparisons could result in athletes becoming body conscious and dissatisfied with their appearance, possibly resulting in unhealthy weight loss attempts, or avoidance of certain sports.
However, the results of this study suggest another possible negative consequence related to tight-fitting sport attire, but not for the person wearing it. If an individual views such attire as intended exclusively for those who are more physically fit, then the individual may experience feelings of inferiority or inadequacy and not feel fit enough to wear such attire while exercising or competing. Thus, she might feel too uncomfortable to wear sport attire that she associates with physical fitness and success in sport, not to mention attractiveness. Unfortunately that perception also appears to decrease confidence regarding one’s own sport performance, which would be an important treatment issue for sport psychologists, who focus on factors affecting sport performance. In essence, she may not feel that she can compete with regard to meeting societal pressures for a certain image that signifies athleticism. If the discomfort with attire and the lack of confidence are significant, the individual may withdraw from her sport/physical activity. Many individuals with low self-perceptions of their physical ability require extra encouragement and support to engage in sports (20). Future studies should consider measuring clothing fit and perceived fitness level among different target groups, ranging from individuals who have never participated in a running event to elite athletes participating in intense competitions (e.g., Olympics; Ironman), and in other geographical locations. It may be interesting to compare the current results with less physically active individuals as well as elite athletes. In addition, it may be helpful to gather more information on participants’ perceptions of themselves, self-worth, and their own confidence level of performance prior to and following exposure to photos. The authors gratefully acknowledge the survey assistance provided by Annie Erickson and cooperation of the Fargo Marathon Committee. 1. Calogero, R.M, & Jost, J.T. (2011).
Self-subjugation among women: Exposure to sexist ideology, self-objectification, and the protective function of the need to avoid closure. Journal of Personality and Social Psychology, 100 (2), 211 – 228. 2. Carlson, B.D., & Donavan, D.T. (2013). Human brands in sport: Athlete brand personality and identification. Journal of Sport Management, 27, 193 – 206. 3. Collins, R.L. (1996). For better or worse: The impact of upward social comparisons on self- evaluations. Psychological Bulletin, 119, 51-69. 4. Dreiskaemper, D., Strauss, B., Hagemann, N., & Büsch, D. (2013). Influence of red jersey color on physical parameters in combat sports. Journal of Sport & Exercise Psychology, 35, 44 – 49. 5. Dickson, M.A., & Pollack, A. (2000). Clothing and identity among female in-line skaters. Clothing and Textiles Research Journal, 18, 65 – 72. 6. Feather, B.L., Ford, S., & Herr, D.G. (1996). Female collegiate basketball players’ perceptions about their bodies, garment fit and uniform design preferences. Clothing and Textiles Research Journal, 14, 22 – 29. 7. Feather, B.L., Herr, D.G., & Ford, S. (1997). Black and white female athletes’ perceptions of their bodies and garment fit. Clothing and Textiles Research Journal, 15, 125 – 128. 8. Feltman, R., & Elliot, A.J. (2011). The influence of red on perceptions of relative dominance and threat in a competitive context. Journal of Sport & Exercise Psychology, 33, 308 – 314. 9. Fredrickson, B.L., & Harrison, K. (2005). Throwing like a girl: Self-objectification predicts adolescent girls’ motor performance. Journal of Sport & Social Issues, 29, 79-101. 10. Fredrickson, B.L., & Roberts, T-A. (1997). Objectification theory: Toward understanding women’s lived experiences and mental health risks. Psychology of Women Quarterly, 21 (2), 173 – 206. 11. Fredrickson, B.L., Roberts, T-A., Noll, S.M., Quinn, D.M., & Twenge J.M. (1998). That swimsuit becomes you: sex differences in self-objectification, restrained eating, and math performance. 
Journal of Personality and Social Psychology, 75 (1), 269 – 284. 12. Hebl, M.R., King, E.D., & Lin, J. (2004). The swimsuit becomes us all: Ethnicity, gender, and vulnerability to self-objectification. Personality and Social Psychology Bulletin, 30, 1322 – 1331. 13. Homan, K., McHugh, E., Wells, D., Watson, C., & King, C. (2012). The effect of viewing ultra-fit images on college women’s body dissatisfaction. Body Image, 9, 50-56. 14. Karr, T.M., Zunker, C., Thompson, R.A., Sherman, R.T., Erickson, A., Cao, L., Crosby, R.D., & Mitchell, J.E. (2013). Moderators of the association between exercise identity and obligatory exercise among participants of an athletic event. Body Image, 10, 70 – 77. 15. Krones, P.G., Stice, E., Batres, C., & Orjada, K. (2005). In vivo social comparison to a thin-ideal peer promotes body dissatisfaction: A randomized experiment. International Journal of Eating Disorders, 38, 134 – 142. 16. Schmalz, D.L. (2010). ‘I feel fat’: Weight-related stigma, body esteem, and BMI as predictors of perceived competence in physical activity. Obesity Facts, 3, 15 – 21. 17. Theberge, N. (2008). “Just a normal bad part of what I do”: Elite athletes’ accounts of the relationship between health and sport. Sociology of Sport Journal, 25 (2), 206 – 222. 18. Thøgersen-Ntoumani, C., Ntoumanis, N., Cumming, J., Bartholomew, K.J., & Pearce, G. (2011). Can self-esteem protect against the deleterious consequences of self-objectification for mood and body satisfaction in physically active female university students? Journal of Sport & Exercise Psychology, 33, 289 – 307. 19. Thompson, R.A., & Sherman, R.T. (2009). The last word on the 29th Olympiad: Redundant, revealing, remarkable, and redundant. Eating Disorders, 17, 97 – 102. 20. Thornton, J., & Kato, K. (2012). Physical self-perception profile of female college students: Kinesiology majors vs. non-kinesiology majors. The Sport Journal, 15. 21. Tiggemann, M., & Andrew, R. (2012).
Clothing choices, weight, and trait self-objectification. Body Image, 9 (3), 409 – 412. 22. Vartanian, L.R., & Shaprow, J.G. (2008). Effects of weight stigma on exercise motivation and behavior: A preliminary investigation among college-aged females. Journal of Health Psychology, 13 (1), 131 – 138. 23. Wasilenko, K.A., Kulik, J.A., & Wanic, R.A. (2007). Effects of social comparisons with peers on women’s body satisfaction and exercise behavior. International Journal of Eating Disorders, 40, 740 – 745.